problem_id (stringlengths 18-22) | source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-25.4k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 582-39.1k) | num_tokens (int64, 271-4.1k) | num_tokens_diff (int64, 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_35213 | rasdani/github-patches | git_diff | saleor__saleor-5590 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
psycopg2.errors.NotNullViolation: column "slug" contains null values
### What I'm trying to achieve
Upgrade from version 2.9.0 to 2.10.0-rc.1. Running the migrate command successfully.
Result:
```
Applying product.0111_auto_20191209_0437... OK
Applying product.0112_auto_20200129_0050...Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
psycopg2.errors.NotNullViolation: column "slug" contains null values
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 395, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 328, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 369, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 83, in wrapped
res = handle_func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/core/management/commands/migrate.py", line 231, in handle
post_migrate_state = executor.migrate(
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py", line 117, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py", line 147, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py", line 245, in apply_migration
state = migration.apply(state, schema_editor)
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/migration.py", line 124, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/operations/fields.py", line 249, in database_forwards
schema_editor.alter_field(from_model, from_field, to_field)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/schema.py", line 564, in alter_field
self._alter_field(model, old_field, new_field, old_type, new_type,
File "/usr/local/lib/python3.8/site-packages/django/db/backends/postgresql/schema.py", line 152, in _alter_field
super()._alter_field(
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/schema.py", line 710, in _alter_field
self.execute(
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/schema.py", line 142, in execute
cursor.execute(sql, params)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 100, in execute
return super().execute(sql, params)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 68, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 77, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
django.db.utils.IntegrityError: column "slug" contains null values
```
### Steps to reproduce the problem
1. Running 2.9.0 version with some data included
2. Upgrade the docker container, try to run the migrate command
### What I expected to happen
Run the migrate command successfully. The migration still seems to be very buggy. We already had an issue before, as seen here: #5391
### Screenshots
<!-- If applicable, add screenshots to help explain your problem. -->
**System information**
Operating system:
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/warehouse/migrations/0003_warehouse_slug.py`
Content:
```
1 # Generated by Django 2.2.9 on 2020-01-29 06:52
2
3 from django.db import migrations, models
4 from django.db.models.functions import Lower
5 from django.utils.text import slugify
6
7
8 def create_unique_slug_for_warehouses(apps, schema_editor):
9 Warehouse = apps.get_model("warehouse", "Warehouse")
10
11 warehouses = (
12 Warehouse.objects.filter(slug__isnull=True).order_by(Lower("name")).iterator()
13 )
14 previous_char = ""
15 slug_values = []
16 for warehouse in warehouses:
17 first_char = warehouse.name[0].lower()
18 if first_char != previous_char:
19 previous_char = first_char
20 slug_values = Warehouse.objects.filter(
21 slug__istartswith=first_char
22 ).values_list("slug", flat=True)
23
24 slug = generate_unique_slug(warehouse, slug_values)
25 warehouse.slug = slug
26 slug_values.append(slug)
27
28
29 def generate_unique_slug(instance, slug_values):
30 slug = slugify(instance.name)
31 unique_slug = slug
32 extension = 1
33
34 while unique_slug in slug_values:
35 extension += 1
36 unique_slug = f"{slug}-{extension}"
37
38 return unique_slug
39
40
41 class Migration(migrations.Migration):
42
43 dependencies = [
44 ("warehouse", "0002_auto_20200123_0036"),
45 ]
46
47 operations = [
48 migrations.AddField(
49 model_name="warehouse",
50 name="slug",
51 field=models.SlugField(null=True, max_length=255, unique=True),
52 preserve_default=False,
53 ),
54 migrations.RunPython(
55 create_unique_slug_for_warehouses, migrations.RunPython.noop
56 ),
57 migrations.AlterField(
58 model_name="warehouse",
59 name="slug",
60 field=models.SlugField(max_length=255, unique=True),
61 ),
62 ]
63
```
Path: `saleor/product/migrations/0114_auto_20200129_0815.py`
Content:
```
1 # Generated by Django 2.2.9 on 2020-01-29 14:15
2
3 from django.db import migrations, models
4 from django.db.models.functions import Lower
5 from django.utils.text import slugify
6
7
8 def create_unique_slug_for_products(apps, schema_editor):
9 Product = apps.get_model("product", "Product")
10
11 products = (
12 Product.objects.filter(slug__isnull=True).order_by(Lower("name")).iterator()
13 )
14 previous_char = ""
15 slug_values = []
16 for product in products:
17 first_char = product.name[0].lower()
18 if first_char != previous_char:
19 previous_char = first_char
20 slug_values = Product.objects.filter(
21 slug__istartswith=first_char
22 ).values_list("slug", flat=True)
23
24 slug = generate_unique_slug(product, slug_values)
25 product.slug = slug
26 slug_values.append(slug)
27
28
29 def generate_unique_slug(instance, slug_values):
30 slug = slugify(instance.name)
31 unique_slug = slug
32 extension = 1
33
34 while unique_slug in slug_values:
35 extension += 1
36 unique_slug = f"{slug}-{extension}"
37
38 return unique_slug
39
40
41 class Migration(migrations.Migration):
42
43 dependencies = [
44 ("product", "0113_auto_20200129_0717"),
45 ]
46
47 operations = [
48 migrations.AddField(
49 model_name="product",
50 name="slug",
51 field=models.SlugField(null=True, max_length=255, unique=True),
52 preserve_default=False,
53 ),
54 migrations.AlterField(
55 model_name="product", name="name", field=models.CharField(max_length=250),
56 ),
57 migrations.RunPython(
58 create_unique_slug_for_products, migrations.RunPython.noop
59 ),
60 migrations.AlterField(
61 model_name="product",
62 name="slug",
63 field=models.SlugField(max_length=255, unique=True),
64 ),
65 ]
66
```
Path: `saleor/product/migrations/0112_auto_20200129_0050.py`
Content:
```
1 # Generated by Django 2.2.9 on 2020-01-29 06:50
2
3 from collections import defaultdict
4
5 from django.db import migrations, models
6 from django.db.models.functions import Lower
7 from django.utils.text import slugify
8
9
10 def create_unique_slugs_for_producttypes(apps, schema_editor):
11 ProductType = apps.get_model("product", "ProductType")
12
13 product_types = (
14 ProductType.objects.filter(slug__isnull=True).order_by(Lower("name")).iterator()
15 )
16 previous_char = ""
17 slug_values = []
18 for product_type in product_types:
19 first_char = product_type.name[0].lower()
20 if first_char != previous_char:
21 previous_char = first_char
22 slug_values = list(
23 ProductType.objects.filter(slug__istartswith=first_char).values_list(
24 "slug", flat=True
25 )
26 )
27
28 slug = generate_unique_slug(product_type, slug_values)
29 product_type.slug = slug
30 slug_values.append(slug)
31
32
33 def generate_unique_slug(instance, slug_values_list):
34 slug = slugify(instance.name)
35 unique_slug = slug
36
37 extension = 1
38
39 while unique_slug in slug_values_list:
40 extension += 1
41 unique_slug = f"{slug}-{extension}"
42
43 return unique_slug
44
45
46 def update_non_unique_slugs_for_models(apps, schema_editor):
47 models_to_update = ["Category", "Collection"]
48
49 for model in models_to_update:
50 Model = apps.get_model("product", model)
51
52 duplicated_slugs = (
53 Model.objects.all()
54 .values("slug")
55 .annotate(duplicated_slug_num=models.Count("slug"))
56 .filter(duplicated_slug_num__gt=1)
57 )
58
59 slugs_counter = defaultdict(int)
60 for data in duplicated_slugs:
61 slugs_counter[data["slug"]] = data["duplicated_slug_num"]
62
63 queryset = Model.objects.filter(slug__in=slugs_counter.keys()).order_by("name")
64
65 for instance in queryset:
66 slugs_counter[instance.slug] -= 1
67 slug = update_slug_to_unique_value(instance.slug, slugs_counter)
68 instance.slug = slug
69 slugs_counter[slug] += 1
70
71
72 def update_slug_to_unique_value(slug_value, slugs_counter):
73 unique_slug = slug_value
74 extension = 1
75
76 while unique_slug in slugs_counter and slugs_counter[unique_slug] > 0:
77 extension += 1
78 unique_slug = f"{slug_value}-{extension}"
79
80 return unique_slug
81
82
83 class Migration(migrations.Migration):
84
85 dependencies = [
86 ("product", "0111_auto_20191209_0437"),
87 ]
88
89 operations = [
90 migrations.RunPython(
91 update_non_unique_slugs_for_models, migrations.RunPython.noop
92 ),
93 migrations.AddField(
94 model_name="producttype",
95 name="slug",
96 field=models.SlugField(null=True, max_length=128, unique=True),
97 preserve_default=False,
98 ),
99 migrations.AlterField(
100 model_name="category",
101 name="slug",
102 field=models.SlugField(max_length=128, unique=True),
103 ),
104 migrations.AlterField(
105 model_name="collection",
106 name="slug",
107 field=models.SlugField(max_length=128, unique=True),
108 ),
109 migrations.RunPython(
110 create_unique_slugs_for_producttypes, migrations.RunPython.noop
111 ),
112 migrations.AlterField(
113 model_name="producttype",
114 name="slug",
115 field=models.SlugField(max_length=128, unique=True),
116 ),
117 ]
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/product/migrations/0112_auto_20200129_0050.py b/saleor/product/migrations/0112_auto_20200129_0050.py
--- a/saleor/product/migrations/0112_auto_20200129_0050.py
+++ b/saleor/product/migrations/0112_auto_20200129_0050.py
@@ -27,6 +27,7 @@
slug = generate_unique_slug(product_type, slug_values)
product_type.slug = slug
+ product_type.save(update_fields=["slug"])
slug_values.append(slug)
@@ -66,6 +67,7 @@
slugs_counter[instance.slug] -= 1
slug = update_slug_to_unique_value(instance.slug, slugs_counter)
instance.slug = slug
+ instance.save(update_fields=["slug"])
slugs_counter[slug] += 1
diff --git a/saleor/product/migrations/0114_auto_20200129_0815.py b/saleor/product/migrations/0114_auto_20200129_0815.py
--- a/saleor/product/migrations/0114_auto_20200129_0815.py
+++ b/saleor/product/migrations/0114_auto_20200129_0815.py
@@ -17,12 +17,15 @@
first_char = product.name[0].lower()
if first_char != previous_char:
previous_char = first_char
- slug_values = Product.objects.filter(
- slug__istartswith=first_char
- ).values_list("slug", flat=True)
+ slug_values = list(
+ Product.objects.filter(slug__istartswith=first_char).values_list(
+ "slug", flat=True
+ )
+ )
slug = generate_unique_slug(product, slug_values)
product.slug = slug
+ product.save(update_fields=["slug"])
slug_values.append(slug)
diff --git a/saleor/warehouse/migrations/0003_warehouse_slug.py b/saleor/warehouse/migrations/0003_warehouse_slug.py
--- a/saleor/warehouse/migrations/0003_warehouse_slug.py
+++ b/saleor/warehouse/migrations/0003_warehouse_slug.py
@@ -17,12 +17,15 @@
first_char = warehouse.name[0].lower()
if first_char != previous_char:
previous_char = first_char
- slug_values = Warehouse.objects.filter(
- slug__istartswith=first_char
- ).values_list("slug", flat=True)
+ slug_values = list(
+ Warehouse.objects.filter(slug__istartswith=first_char).values_list(
+ "slug", flat=True
+ )
+ )
slug = generate_unique_slug(warehouse, slug_values)
warehouse.slug = slug
+ warehouse.save(update_fields=["slug"])
slug_values.append(slug)
| {"golden_diff": "diff --git a/saleor/product/migrations/0112_auto_20200129_0050.py b/saleor/product/migrations/0112_auto_20200129_0050.py\n--- a/saleor/product/migrations/0112_auto_20200129_0050.py\n+++ b/saleor/product/migrations/0112_auto_20200129_0050.py\n@@ -27,6 +27,7 @@\n \n slug = generate_unique_slug(product_type, slug_values)\n product_type.slug = slug\n+ product_type.save(update_fields=[\"slug\"])\n slug_values.append(slug)\n \n \n@@ -66,6 +67,7 @@\n slugs_counter[instance.slug] -= 1\n slug = update_slug_to_unique_value(instance.slug, slugs_counter)\n instance.slug = slug\n+ instance.save(update_fields=[\"slug\"])\n slugs_counter[slug] += 1\n \n \ndiff --git a/saleor/product/migrations/0114_auto_20200129_0815.py b/saleor/product/migrations/0114_auto_20200129_0815.py\n--- a/saleor/product/migrations/0114_auto_20200129_0815.py\n+++ b/saleor/product/migrations/0114_auto_20200129_0815.py\n@@ -17,12 +17,15 @@\n first_char = product.name[0].lower()\n if first_char != previous_char:\n previous_char = first_char\n- slug_values = Product.objects.filter(\n- slug__istartswith=first_char\n- ).values_list(\"slug\", flat=True)\n+ slug_values = list(\n+ Product.objects.filter(slug__istartswith=first_char).values_list(\n+ \"slug\", flat=True\n+ )\n+ )\n \n slug = generate_unique_slug(product, slug_values)\n product.slug = slug\n+ product.save(update_fields=[\"slug\"])\n slug_values.append(slug)\n \n \ndiff --git a/saleor/warehouse/migrations/0003_warehouse_slug.py b/saleor/warehouse/migrations/0003_warehouse_slug.py\n--- a/saleor/warehouse/migrations/0003_warehouse_slug.py\n+++ b/saleor/warehouse/migrations/0003_warehouse_slug.py\n@@ -17,12 +17,15 @@\n first_char = warehouse.name[0].lower()\n if first_char != previous_char:\n previous_char = first_char\n- slug_values = Warehouse.objects.filter(\n- slug__istartswith=first_char\n- ).values_list(\"slug\", flat=True)\n+ slug_values = list(\n+ Warehouse.objects.filter(slug__istartswith=first_char).values_list(\n+ \"slug\", flat=True\n+ )\n+ )\n \n slug = generate_unique_slug(warehouse, slug_values)\n warehouse.slug = slug\n+ warehouse.save(update_fields=[\"slug\"])\n slug_values.append(slug)\n", "issue": "psycopg2.errors.NotNullViolation: column \"slug\" contains null values\n### What I'm trying to achieve\r\nUpgrade from version 2.9.0 to 2.10.0-rc.1. Running the migrate command successfully.\r\n\r\nResult:\r\n```\r\n Applying product.0111_auto_20191209_0437... 
OK\r\n Applying product.0112_auto_20200129_0050...Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py\", line 86, in _execute\r\n return self.cursor.execute(sql, params)\r\npsycopg2.errors.NotNullViolation: column \"slug\" contains null values\r\n\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"manage.py\", line 10, in <module>\r\n execute_from_command_line(sys.argv)\r\n File \"/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py\", line 401, in execute_from_command_line\r\n utility.execute()\r\n File \"/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py\", line 395, in execute\r\n self.fetch_command(subcommand).run_from_argv(self.argv)\r\n File \"/usr/local/lib/python3.8/site-packages/django/core/management/base.py\", line 328, in run_from_argv\r\n self.execute(*args, **cmd_options)\r\n File \"/usr/local/lib/python3.8/site-packages/django/core/management/base.py\", line 369, in execute\r\n output = self.handle(*args, **options)\r\n File \"/usr/local/lib/python3.8/site-packages/django/core/management/base.py\", line 83, in wrapped\r\n res = handle_func(*args, **kwargs)\r\n File \"/usr/local/lib/python3.8/site-packages/django/core/management/commands/migrate.py\", line 231, in handle\r\n post_migrate_state = executor.migrate(\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py\", line 117, in migrate\r\n state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py\", line 147, in _migrate_all_forwards\r\n state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py\", line 245, in apply_migration\r\n state = migration.apply(state, schema_editor)\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/migrations/migration.py\", line 124, in apply\r\n operation.database_forwards(self.app_label, schema_editor, old_state, project_state)\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/migrations/operations/fields.py\", line 249, in database_forwards\r\n schema_editor.alter_field(from_model, from_field, to_field)\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/backends/base/schema.py\", line 564, in alter_field\r\n self._alter_field(model, old_field, new_field, old_type, new_type,\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/backends/postgresql/schema.py\", line 152, in _alter_field\r\n super()._alter_field(\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/backends/base/schema.py\", line 710, in _alter_field\r\n self.execute(\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/backends/base/schema.py\", line 142, in execute\r\n cursor.execute(sql, params)\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py\", line 100, in execute\r\n return super().execute(sql, params)\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py\", line 68, in execute\r\n return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py\", line 77, in _execute_with_wrappers\r\n return executor(sql, params, many, context)\r\n File 
\"/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py\", line 86, in _execute\r\n return self.cursor.execute(sql, params)\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/utils.py\", line 90, in __exit__\r\n raise dj_exc_value.with_traceback(traceback) from exc_value\r\n File \"/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py\", line 86, in _execute\r\n return self.cursor.execute(sql, params)\r\ndjango.db.utils.IntegrityError: column \"slug\" contains null values\r\n``` \r\n\r\n\r\n\r\n### Steps to reproduce the problem\r\n1. Running 2.9.0 version with some data included\r\n2. Upgrade the docker container, try to run the migrate command\r\n\r\n### What I expected to happen\r\nRun the migrate command successfully. The migration still seems to be very buggy. We already had issue before as seen here: #5391 \r\n\r\n### Screenshots\r\n<!-- If applicable, add screenshots to help explain your problem. -->\r\n\r\n**System information**\r\nOperating system:\r\n\n", "before_files": [{"content": "# Generated by Django 2.2.9 on 2020-01-29 06:52\n\nfrom django.db import migrations, models\nfrom django.db.models.functions import Lower\nfrom django.utils.text import slugify\n\n\ndef create_unique_slug_for_warehouses(apps, schema_editor):\n Warehouse = apps.get_model(\"warehouse\", \"Warehouse\")\n\n warehouses = (\n Warehouse.objects.filter(slug__isnull=True).order_by(Lower(\"name\")).iterator()\n )\n previous_char = \"\"\n slug_values = []\n for warehouse in warehouses:\n first_char = warehouse.name[0].lower()\n if first_char != previous_char:\n previous_char = first_char\n slug_values = Warehouse.objects.filter(\n slug__istartswith=first_char\n ).values_list(\"slug\", flat=True)\n\n slug = generate_unique_slug(warehouse, slug_values)\n warehouse.slug = slug\n slug_values.append(slug)\n\n\ndef generate_unique_slug(instance, slug_values):\n slug = slugify(instance.name)\n unique_slug = slug\n extension = 1\n\n while unique_slug in slug_values:\n extension += 1\n unique_slug = f\"{slug}-{extension}\"\n\n return unique_slug\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"warehouse\", \"0002_auto_20200123_0036\"),\n ]\n\n operations = [\n migrations.AddField(\n model_name=\"warehouse\",\n name=\"slug\",\n field=models.SlugField(null=True, max_length=255, unique=True),\n preserve_default=False,\n ),\n migrations.RunPython(\n create_unique_slug_for_warehouses, migrations.RunPython.noop\n ),\n migrations.AlterField(\n model_name=\"warehouse\",\n name=\"slug\",\n field=models.SlugField(max_length=255, unique=True),\n ),\n ]\n", "path": "saleor/warehouse/migrations/0003_warehouse_slug.py"}, {"content": "# Generated by Django 2.2.9 on 2020-01-29 14:15\n\nfrom django.db import migrations, models\nfrom django.db.models.functions import Lower\nfrom django.utils.text import slugify\n\n\ndef create_unique_slug_for_products(apps, schema_editor):\n Product = apps.get_model(\"product\", \"Product\")\n\n products = (\n Product.objects.filter(slug__isnull=True).order_by(Lower(\"name\")).iterator()\n )\n previous_char = \"\"\n slug_values = []\n for product in products:\n first_char = product.name[0].lower()\n if first_char != previous_char:\n previous_char = first_char\n slug_values = Product.objects.filter(\n slug__istartswith=first_char\n ).values_list(\"slug\", flat=True)\n\n slug = generate_unique_slug(product, slug_values)\n product.slug = slug\n slug_values.append(slug)\n\n\ndef generate_unique_slug(instance, slug_values):\n slug = 
slugify(instance.name)\n unique_slug = slug\n extension = 1\n\n while unique_slug in slug_values:\n extension += 1\n unique_slug = f\"{slug}-{extension}\"\n\n return unique_slug\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"product\", \"0113_auto_20200129_0717\"),\n ]\n\n operations = [\n migrations.AddField(\n model_name=\"product\",\n name=\"slug\",\n field=models.SlugField(null=True, max_length=255, unique=True),\n preserve_default=False,\n ),\n migrations.AlterField(\n model_name=\"product\", name=\"name\", field=models.CharField(max_length=250),\n ),\n migrations.RunPython(\n create_unique_slug_for_products, migrations.RunPython.noop\n ),\n migrations.AlterField(\n model_name=\"product\",\n name=\"slug\",\n field=models.SlugField(max_length=255, unique=True),\n ),\n ]\n", "path": "saleor/product/migrations/0114_auto_20200129_0815.py"}, {"content": "# Generated by Django 2.2.9 on 2020-01-29 06:50\n\nfrom collections import defaultdict\n\nfrom django.db import migrations, models\nfrom django.db.models.functions import Lower\nfrom django.utils.text import slugify\n\n\ndef create_unique_slugs_for_producttypes(apps, schema_editor):\n ProductType = apps.get_model(\"product\", \"ProductType\")\n\n product_types = (\n ProductType.objects.filter(slug__isnull=True).order_by(Lower(\"name\")).iterator()\n )\n previous_char = \"\"\n slug_values = []\n for product_type in product_types:\n first_char = product_type.name[0].lower()\n if first_char != previous_char:\n previous_char = first_char\n slug_values = list(\n ProductType.objects.filter(slug__istartswith=first_char).values_list(\n \"slug\", flat=True\n )\n )\n\n slug = generate_unique_slug(product_type, slug_values)\n product_type.slug = slug\n slug_values.append(slug)\n\n\ndef generate_unique_slug(instance, slug_values_list):\n slug = slugify(instance.name)\n unique_slug = slug\n\n extension = 1\n\n while unique_slug in slug_values_list:\n extension += 1\n unique_slug = f\"{slug}-{extension}\"\n\n return unique_slug\n\n\ndef update_non_unique_slugs_for_models(apps, schema_editor):\n models_to_update = [\"Category\", \"Collection\"]\n\n for model in models_to_update:\n Model = apps.get_model(\"product\", model)\n\n duplicated_slugs = (\n Model.objects.all()\n .values(\"slug\")\n .annotate(duplicated_slug_num=models.Count(\"slug\"))\n .filter(duplicated_slug_num__gt=1)\n )\n\n slugs_counter = defaultdict(int)\n for data in duplicated_slugs:\n slugs_counter[data[\"slug\"]] = data[\"duplicated_slug_num\"]\n\n queryset = Model.objects.filter(slug__in=slugs_counter.keys()).order_by(\"name\")\n\n for instance in queryset:\n slugs_counter[instance.slug] -= 1\n slug = update_slug_to_unique_value(instance.slug, slugs_counter)\n instance.slug = slug\n slugs_counter[slug] += 1\n\n\ndef update_slug_to_unique_value(slug_value, slugs_counter):\n unique_slug = slug_value\n extension = 1\n\n while unique_slug in slugs_counter and slugs_counter[unique_slug] > 0:\n extension += 1\n unique_slug = f\"{slug_value}-{extension}\"\n\n return unique_slug\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"product\", \"0111_auto_20191209_0437\"),\n ]\n\n operations = [\n migrations.RunPython(\n update_non_unique_slugs_for_models, migrations.RunPython.noop\n ),\n migrations.AddField(\n model_name=\"producttype\",\n name=\"slug\",\n field=models.SlugField(null=True, max_length=128, unique=True),\n preserve_default=False,\n ),\n migrations.AlterField(\n model_name=\"category\",\n name=\"slug\",\n 
field=models.SlugField(max_length=128, unique=True),\n ),\n migrations.AlterField(\n model_name=\"collection\",\n name=\"slug\",\n field=models.SlugField(max_length=128, unique=True),\n ),\n migrations.RunPython(\n create_unique_slugs_for_producttypes, migrations.RunPython.noop\n ),\n migrations.AlterField(\n model_name=\"producttype\",\n name=\"slug\",\n field=models.SlugField(max_length=128, unique=True),\n ),\n ]\n", "path": "saleor/product/migrations/0112_auto_20200129_0050.py"}], "after_files": [{"content": "# Generated by Django 2.2.9 on 2020-01-29 06:52\n\nfrom django.db import migrations, models\nfrom django.db.models.functions import Lower\nfrom django.utils.text import slugify\n\n\ndef create_unique_slug_for_warehouses(apps, schema_editor):\n Warehouse = apps.get_model(\"warehouse\", \"Warehouse\")\n\n warehouses = (\n Warehouse.objects.filter(slug__isnull=True).order_by(Lower(\"name\")).iterator()\n )\n previous_char = \"\"\n slug_values = []\n for warehouse in warehouses:\n first_char = warehouse.name[0].lower()\n if first_char != previous_char:\n previous_char = first_char\n slug_values = list(\n Warehouse.objects.filter(slug__istartswith=first_char).values_list(\n \"slug\", flat=True\n )\n )\n\n slug = generate_unique_slug(warehouse, slug_values)\n warehouse.slug = slug\n warehouse.save(update_fields=[\"slug\"])\n slug_values.append(slug)\n\n\ndef generate_unique_slug(instance, slug_values):\n slug = slugify(instance.name)\n unique_slug = slug\n extension = 1\n\n while unique_slug in slug_values:\n extension += 1\n unique_slug = f\"{slug}-{extension}\"\n\n return unique_slug\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"warehouse\", \"0002_auto_20200123_0036\"),\n ]\n\n operations = [\n migrations.AddField(\n model_name=\"warehouse\",\n name=\"slug\",\n field=models.SlugField(null=True, max_length=255, unique=True),\n preserve_default=False,\n ),\n migrations.RunPython(\n create_unique_slug_for_warehouses, migrations.RunPython.noop\n ),\n migrations.AlterField(\n model_name=\"warehouse\",\n name=\"slug\",\n field=models.SlugField(max_length=255, unique=True),\n ),\n ]\n", "path": "saleor/warehouse/migrations/0003_warehouse_slug.py"}, {"content": "# Generated by Django 2.2.9 on 2020-01-29 14:15\n\nfrom django.db import migrations, models\nfrom django.db.models.functions import Lower\nfrom django.utils.text import slugify\n\n\ndef create_unique_slug_for_products(apps, schema_editor):\n Product = apps.get_model(\"product\", \"Product\")\n\n products = (\n Product.objects.filter(slug__isnull=True).order_by(Lower(\"name\")).iterator()\n )\n previous_char = \"\"\n slug_values = []\n for product in products:\n first_char = product.name[0].lower()\n if first_char != previous_char:\n previous_char = first_char\n slug_values = list(\n Product.objects.filter(slug__istartswith=first_char).values_list(\n \"slug\", flat=True\n )\n )\n\n slug = generate_unique_slug(product, slug_values)\n product.slug = slug\n product.save(update_fields=[\"slug\"])\n slug_values.append(slug)\n\n\ndef generate_unique_slug(instance, slug_values):\n slug = slugify(instance.name)\n unique_slug = slug\n extension = 1\n\n while unique_slug in slug_values:\n extension += 1\n unique_slug = f\"{slug}-{extension}\"\n\n return unique_slug\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"product\", \"0113_auto_20200129_0717\"),\n ]\n\n operations = [\n migrations.AddField(\n model_name=\"product\",\n name=\"slug\",\n field=models.SlugField(null=True, max_length=255, 
unique=True),\n preserve_default=False,\n ),\n migrations.AlterField(\n model_name=\"product\", name=\"name\", field=models.CharField(max_length=250),\n ),\n migrations.RunPython(\n create_unique_slug_for_products, migrations.RunPython.noop\n ),\n migrations.AlterField(\n model_name=\"product\",\n name=\"slug\",\n field=models.SlugField(max_length=255, unique=True),\n ),\n ]\n", "path": "saleor/product/migrations/0114_auto_20200129_0815.py"}, {"content": "# Generated by Django 2.2.9 on 2020-01-29 06:50\n\nfrom collections import defaultdict\n\nfrom django.db import migrations, models\nfrom django.db.models.functions import Lower\nfrom django.utils.text import slugify\n\n\ndef create_unique_slugs_for_producttypes(apps, schema_editor):\n ProductType = apps.get_model(\"product\", \"ProductType\")\n\n product_types = (\n ProductType.objects.filter(slug__isnull=True).order_by(Lower(\"name\")).iterator()\n )\n previous_char = \"\"\n slug_values = []\n for product_type in product_types:\n first_char = product_type.name[0].lower()\n if first_char != previous_char:\n previous_char = first_char\n slug_values = list(\n ProductType.objects.filter(slug__istartswith=first_char).values_list(\n \"slug\", flat=True\n )\n )\n\n slug = generate_unique_slug(product_type, slug_values)\n product_type.slug = slug\n product_type.save(update_fields=[\"slug\"])\n slug_values.append(slug)\n\n\ndef generate_unique_slug(instance, slug_values_list):\n slug = slugify(instance.name)\n unique_slug = slug\n\n extension = 1\n\n while unique_slug in slug_values_list:\n extension += 1\n unique_slug = f\"{slug}-{extension}\"\n\n return unique_slug\n\n\ndef update_non_unique_slugs_for_models(apps, schema_editor):\n models_to_update = [\"Category\", \"Collection\"]\n\n for model in models_to_update:\n Model = apps.get_model(\"product\", model)\n\n duplicated_slugs = (\n Model.objects.all()\n .values(\"slug\")\n .annotate(duplicated_slug_num=models.Count(\"slug\"))\n .filter(duplicated_slug_num__gt=1)\n )\n\n slugs_counter = defaultdict(int)\n for data in duplicated_slugs:\n slugs_counter[data[\"slug\"]] = data[\"duplicated_slug_num\"]\n\n queryset = Model.objects.filter(slug__in=slugs_counter.keys()).order_by(\"name\")\n\n for instance in queryset:\n slugs_counter[instance.slug] -= 1\n slug = update_slug_to_unique_value(instance.slug, slugs_counter)\n instance.slug = slug\n instance.save(update_fields=[\"slug\"])\n slugs_counter[slug] += 1\n\n\ndef update_slug_to_unique_value(slug_value, slugs_counter):\n unique_slug = slug_value\n extension = 1\n\n while unique_slug in slugs_counter and slugs_counter[unique_slug] > 0:\n extension += 1\n unique_slug = f\"{slug_value}-{extension}\"\n\n return unique_slug\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"product\", \"0111_auto_20191209_0437\"),\n ]\n\n operations = [\n migrations.RunPython(\n update_non_unique_slugs_for_models, migrations.RunPython.noop\n ),\n migrations.AddField(\n model_name=\"producttype\",\n name=\"slug\",\n field=models.SlugField(null=True, max_length=128, unique=True),\n preserve_default=False,\n ),\n migrations.AlterField(\n model_name=\"category\",\n name=\"slug\",\n field=models.SlugField(max_length=128, unique=True),\n ),\n migrations.AlterField(\n model_name=\"collection\",\n name=\"slug\",\n field=models.SlugField(max_length=128, unique=True),\n ),\n migrations.RunPython(\n create_unique_slugs_for_producttypes, migrations.RunPython.noop\n ),\n migrations.AlterField(\n model_name=\"producttype\",\n name=\"slug\",\n 
field=models.SlugField(max_length=128, unique=True),\n ),\n ]\n", "path": "saleor/product/migrations/0112_auto_20200129_0050.py"}]} | 3,692 | 717 |
gh_patches_debug_16839 | rasdani/github-patches | git_diff | saleor__saleor-3159 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Getting error when category has no background image
### Steps to reproduce the problem
1. Create category with no background image
2. Exec query
```
{
category {
backgroundImage {
url
}
}
}
```
3. Get an error
```
{
"errors": [
{
"message": "The 'background_image' attribute has no file associated with it.",
"locations": [
{
"line": 6,
"column": 7
}
],
"path": [
"categories",
"edges",
1,
"node",
"backgroundImage",
"url"
]
}
]
}
```
### What I expected to happen
To successfully fetch category data.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/graphql/product/types.py`
Content:
```
1 import re
2
3 import graphene
4 from graphene import relay
5 from graphene_django.filter import DjangoFilterConnectionField
6 from graphql.error import GraphQLError
7
8 from ...product import models
9 from ...product.templatetags.product_images import get_thumbnail
10 from ...product.utils import products_with_details
11 from ...product.utils.availability import get_availability
12 from ...product.utils.costs import (
13 get_margin_for_variant, get_product_costs_data)
14 from ..core.decorators import permission_required
15 from ..core.types.common import CountableDjangoObjectType
16 from ..core.types.money import (
17 Money, MoneyRange, TaxedMoney, TaxedMoneyRange, TaxRateType)
18 from ..utils import get_database_id
19 from .descriptions import AttributeDescriptions, AttributeValueDescriptions
20 from .filters import ProductFilterSet
21
22 COLOR_PATTERN = r'^(#[0-9a-fA-F]{3}|#(?:[0-9a-fA-F]{2}){2,4}|(rgb|hsl)a?\((-?\d+%?[,\s]+){2,3}\s*[\d\.]+%?\))$' # noqa
23 color_pattern = re.compile(COLOR_PATTERN)
24
25
26 class AttributeValueType(graphene.Enum):
27 COLOR = 'COLOR'
28 GRADIENT = 'GRADIENT'
29 URL = 'URL'
30 STRING = 'STRING'
31
32
33 def resolve_attribute_list(attributes):
34 keys = list(attributes.keys())
35 values = list(attributes.values())
36
37 attributes_map = {
38 att.pk: att for att in models.Attribute.objects.filter(
39 pk__in=keys)}
40 values_map = {
41 val.pk: val for val in models.AttributeValue.objects.filter(
42 pk__in=values)}
43
44 attributes_list = [SelectedAttribute(
45 attribute=attributes_map.get(int(k)),
46 value=values_map.get(int(v)))
47 for k, v in attributes.items()]
48 return attributes_list
49
50
51 def resolve_attribute_value_type(attribute_value):
52 if color_pattern.match(attribute_value):
53 return AttributeValueType.COLOR
54 if 'gradient(' in attribute_value:
55 return AttributeValueType.GRADIENT
56 if '://' in attribute_value:
57 return AttributeValueType.URL
58 return AttributeValueType.STRING
59
60
61 class AttributeValue(CountableDjangoObjectType):
62 name = graphene.String(description=AttributeValueDescriptions.NAME)
63 slug = graphene.String(description=AttributeValueDescriptions.SLUG)
64 type = AttributeValueType(description=AttributeValueDescriptions.TYPE)
65 value = graphene.String(description=AttributeValueDescriptions.VALUE)
66
67 class Meta:
68 description = 'Represents a value of an attribute.'
69 exclude_fields = ['attribute']
70 interfaces = [relay.Node]
71 model = models.AttributeValue
72
73 def resolve_type(self, info):
74 return resolve_attribute_value_type(self.value)
75
76
77 class Attribute(CountableDjangoObjectType):
78 name = graphene.String(description=AttributeDescriptions.NAME)
79 slug = graphene.String(description=AttributeDescriptions.SLUG)
80 values = graphene.List(
81 AttributeValue, description=AttributeDescriptions.VALUES)
82
83 class Meta:
84 description = """Custom attribute of a product. Attributes can be
85 assigned to products and variants at the product type level."""
86 exclude_fields = []
87 interfaces = [relay.Node]
88 filter_fields = ['id', 'slug']
89 model = models.Attribute
90
91 def resolve_values(self, info):
92 return self.values.all()
93
94
95 class AttributeTypeEnum(graphene.Enum):
96 PRODUCT = 'PRODUCT'
97 VARIANT = 'VARIANT'
98
99
100 class Margin(graphene.ObjectType):
101 start = graphene.Int()
102 stop = graphene.Int()
103
104
105 class SelectedAttribute(graphene.ObjectType):
106 attribute = graphene.Field(
107 Attribute, default_value=None, description=AttributeDescriptions.NAME)
108 value = graphene.Field(
109 AttributeValue,
110 default_value=None, description='Value of an attribute.')
111
112 class Meta:
113 description = 'Represents a custom attribute.'
114
115
116 class ProductVariant(CountableDjangoObjectType):
117 stock_quantity = graphene.Int(
118 required=True, description='Quantity of a product available for sale.')
119 price_override = graphene.Field(
120 Money,
121 description="""Override the base price of a product if necessary.
122 A value of `null` indicates that the default product price is used.""")
123 price = graphene.Field(Money, description="Price of the product variant.")
124 attributes = graphene.List(
125 SelectedAttribute,
126 description='List of attributes assigned to this variant.')
127 cost_price = graphene.Field(
128 Money, description='Cost price of the variant.')
129 margin = graphene.Int(description='Gross margin percentage value.')
130
131 class Meta:
132 description = """Represents a version of a product such as different
133 size or color."""
134 exclude_fields = ['variant_images']
135 interfaces = [relay.Node]
136 model = models.ProductVariant
137 filter_fields = ['id']
138
139 def resolve_stock_quantity(self, info):
140 return self.quantity_available
141
142 def resolve_attributes(self, info):
143 return resolve_attribute_list(self.attributes)
144
145 def resolve_margin(self, info):
146 return get_margin_for_variant(self)
147
148 def resolve_price(self, info):
149 return (
150 self.price_override
151 if self.price_override is not None else self.product.price)
152
153 @permission_required('product.manage_products')
154 def resolve_price_override(self, info):
155 return self.price_override
156
157
158 class ProductAvailability(graphene.ObjectType):
159 available = graphene.Boolean()
160 on_sale = graphene.Boolean()
161 discount = graphene.Field(TaxedMoney)
162 discount_local_currency = graphene.Field(TaxedMoney)
163 price_range = graphene.Field(TaxedMoneyRange)
164 price_range_undiscounted = graphene.Field(TaxedMoneyRange)
165 price_range_local_currency = graphene.Field(TaxedMoneyRange)
166
167 class Meta:
168 description = 'Represents availability of a product in the storefront.'
169
170
171 class Image(graphene.ObjectType):
172 url = graphene.String(
173 required=True,
174 description='The URL of the image.',
175 size=graphene.Int(description='Size of the image'))
176
177 class Meta:
178 description = 'Represents an image.'
179
180 def resolve_url(self, info, size=None):
181 if size:
182 return get_thumbnail(self, size, method='thumbnail')
183 return self.url
184
185
186 class Product(CountableDjangoObjectType):
187 url = graphene.String(
188 description='The storefront URL for the product.', required=True)
189 thumbnail_url = graphene.String(
190 description='The URL of a main thumbnail for a product.',
191 size=graphene.Argument(graphene.Int, description='Size of thumbnail'))
192 availability = graphene.Field(
193 ProductAvailability,
194 description="""Informs about product's availability in the storefront,
195 current price and discounts.""")
196 price = graphene.Field(
197 Money,
198 description="""The product's base price (without any discounts
199 applied).""")
200 attributes = graphene.List(
201 SelectedAttribute,
202 description='List of attributes assigned to this product.')
203 purchase_cost = graphene.Field(MoneyRange)
204 margin = graphene.Field(Margin)
205 image_by_id = graphene.Field(
206 lambda: ProductImage,
207 id=graphene.Argument(
208 graphene.ID, description='ID of a product image.'),
209 description='Get a single product image by ID')
210
211 class Meta:
212 description = """Represents an individual item for sale in the
213 storefront."""
214 interfaces = [relay.Node]
215 model = models.Product
216
217 def resolve_thumbnail_url(self, info, *, size=None):
218 if not size:
219 size = 255
220 return get_thumbnail(self.get_first_image(), size, method='thumbnail')
221
222 def resolve_url(self, info):
223 return self.get_absolute_url()
224
225 def resolve_availability(self, info):
226 context = info.context
227 availability = get_availability(
228 self, context.discounts, context.taxes, context.currency)
229 return ProductAvailability(**availability._asdict())
230
231 def resolve_attributes(self, info):
232 return resolve_attribute_list(self.attributes)
233
234 def resolve_product_type(self, info):
235 return self.product_type
236
237 @permission_required('product.manage_products')
238 def resolve_purchase_cost(self, info):
239 purchase_cost, _ = get_product_costs_data(self)
240 return purchase_cost
241
242 @permission_required('product.manage_products')
243 def resolve_margin(self, info):
244 _, margin = get_product_costs_data(self)
245 return Margin(margin[0], margin[1])
246
247 def resolve_image_by_id(self, info, id):
248 pk = get_database_id(info, id, ProductImage)
249 try:
250 return self.images.get(pk=pk)
251 except models.ProductImage.DoesNotExist:
252 raise GraphQLError('Product image not found.')
253
254
255 class ProductType(CountableDjangoObjectType):
256 products = DjangoFilterConnectionField(
257 Product,
258 filterset_class=ProductFilterSet,
259 description='List of products of this type.')
260 tax_rate = TaxRateType(description='A type of tax rate.')
261 variant_attributes = graphene.List(
262 Attribute, description='Variant attributes of that product type.')
263 product_attributes = graphene.List(
264 Attribute, description='Product attributes of that product type.')
265
266 class Meta:
267 description = """Represents a type of product. It defines what
268 attributes are available to products of this type."""
269 interfaces = [relay.Node]
270 model = models.ProductType
271 filter_fields = ['id']
272
273 def resolve_products(self, info, **kwargs):
274 user = info.context.user
275 return products_with_details(
276 user=user).filter(product_type=self).distinct()
277
278 def resolve_variant_attributes(self, info):
279 return self.variant_attributes.prefetch_related('values')
280
281 def resolve_product_attributes(self, info):
282 return self.product_attributes.prefetch_related('values')
283
284
285 class Collection(CountableDjangoObjectType):
286 products = DjangoFilterConnectionField(
287 Product, filterset_class=ProductFilterSet,
288 description='List of collection products.')
289 background_image = graphene.Field(Image)
290
291 class Meta:
292 description = "Represents a collection of products."
293 exclude_fields = ['voucher_set', 'sale_set', 'menuitem_set']
294 filter_fields = {
295 'name': ['exact', 'icontains', 'istartswith']}
296 interfaces = [relay.Node]
297 model = models.Collection
298
299 def resolve_products(self, info, **kwargs):
300 user = info.context.user
301 return products_with_details(
302 user=user).filter(collections=self).distinct()
303
304
305 class Category(CountableDjangoObjectType):
306 products = DjangoFilterConnectionField(
307 Product,
308 filterset_class=ProductFilterSet,
309 description='List of products in the category.')
310 url = graphene.String(
311 description='The storefront\'s URL for the category.')
312 ancestors = DjangoFilterConnectionField(
313 lambda: Category,
314 description='List of ancestors of the category.')
315 children = DjangoFilterConnectionField(
316 lambda: Category,
317 description='List of children of the category.')
318 background_image = graphene.Field(Image)
319
320 class Meta:
321 description = """Represents a single category of products. Categories
322 allow to organize products in a tree-hierarchies which can be used for
323 navigation in the storefront."""
324 exclude_fields = [
325 'lft', 'rght', 'tree_id', 'voucher_set', 'sale_set',
326 'menuitem_set']
327 interfaces = [relay.Node]
328 filter_fields = ['id', 'name']
329 model = models.Category
330
331 def resolve_ancestors(self, info, **kwargs):
332 return self.get_ancestors().distinct()
333
334 def resolve_children(self, info, **kwargs):
335 return self.children.distinct()
336
337 def resolve_url(self, info):
338 return self.get_absolute_url()
339
340 def resolve_products(self, info, **kwargs):
341 qs = models.Product.objects.available_products().prefetch_related(
342 'variants', 'images', 'product_type')
343 categories_tree = self.get_descendants(include_self=True)
344 qs = qs.filter(category__in=categories_tree)
345 return qs.distinct()
346
347
348 class ProductImage(CountableDjangoObjectType):
349 url = graphene.String(
350 required=True,
351 description='The URL of the image.',
352 size=graphene.Int(description='Size of the image'))
353
354 class Meta:
355 description = 'Represents a product image.'
356 exclude_fields = [
357 'image', 'product', 'ppoi', 'productvariant_set',
358 'variant_images']
359 interfaces = [relay.Node]
360 model = models.ProductImage
361
362 def resolve_url(self, info, *, size=None):
363 if size:
364 return get_thumbnail(self.image, size, method='thumbnail')
365 return self.image.url
366
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/graphql/product/types.py b/saleor/graphql/product/types.py
--- a/saleor/graphql/product/types.py
+++ b/saleor/graphql/product/types.py
@@ -296,6 +296,9 @@
interfaces = [relay.Node]
model = models.Collection
+ def resolve_background_image(self, info, **kwargs):
+ return self.background_image or None
+
def resolve_products(self, info, **kwargs):
user = info.context.user
return products_with_details(
@@ -331,6 +334,9 @@
def resolve_ancestors(self, info, **kwargs):
return self.get_ancestors().distinct()
+ def resolve_background_image(self, info, **kwargs):
+ return self.background_image or None
+
def resolve_children(self, info, **kwargs):
return self.children.distinct()
| {"golden_diff": "diff --git a/saleor/graphql/product/types.py b/saleor/graphql/product/types.py\n--- a/saleor/graphql/product/types.py\n+++ b/saleor/graphql/product/types.py\n@@ -296,6 +296,9 @@\n interfaces = [relay.Node]\n model = models.Collection\n \n+ def resolve_background_image(self, info, **kwargs):\n+ return self.background_image or None\n+\n def resolve_products(self, info, **kwargs):\n user = info.context.user\n return products_with_details(\n@@ -331,6 +334,9 @@\n def resolve_ancestors(self, info, **kwargs):\n return self.get_ancestors().distinct()\n \n+ def resolve_background_image(self, info, **kwargs):\n+ return self.background_image or None\n+\n def resolve_children(self, info, **kwargs):\n return self.children.distinct()\n", "issue": "Getting error when category has no background image\n### Steps to reproduce the problem\r\n1. Create category with no background image\r\n2. Exec query\r\n```\r\n{\r\n category {\r\n backgroundImage {\r\n url\r\n }\r\n }\r\n}\r\n```\r\n3. Get an error\r\n```\r\n{\r\n \"errors\": [\r\n {\r\n \"message\": \"The 'background_image' attribute has no file associated with it.\",\r\n \"locations\": [\r\n {\r\n \"line\": 6,\r\n \"column\": 7\r\n }\r\n ],\r\n \"path\": [\r\n \"categories\",\r\n \"edges\",\r\n 1,\r\n \"node\",\r\n \"backgroundImage\",\r\n \"url\"\r\n ]\r\n }\r\n ]\r\n}\r\n```\r\n\r\n### What I expected to happen\r\nTo successfully fetch category data.\n", "before_files": [{"content": "import re\n\nimport graphene\nfrom graphene import relay\nfrom graphene_django.filter import DjangoFilterConnectionField\nfrom graphql.error import GraphQLError\n\nfrom ...product import models\nfrom ...product.templatetags.product_images import get_thumbnail\nfrom ...product.utils import products_with_details\nfrom ...product.utils.availability import get_availability\nfrom ...product.utils.costs import (\n get_margin_for_variant, get_product_costs_data)\nfrom ..core.decorators import permission_required\nfrom ..core.types.common import CountableDjangoObjectType\nfrom ..core.types.money import (\n Money, MoneyRange, TaxedMoney, TaxedMoneyRange, TaxRateType)\nfrom ..utils import get_database_id\nfrom .descriptions import AttributeDescriptions, AttributeValueDescriptions\nfrom .filters import ProductFilterSet\n\nCOLOR_PATTERN = r'^(#[0-9a-fA-F]{3}|#(?:[0-9a-fA-F]{2}){2,4}|(rgb|hsl)a?\\((-?\\d+%?[,\\s]+){2,3}\\s*[\\d\\.]+%?\\))$' # noqa\ncolor_pattern = re.compile(COLOR_PATTERN)\n\n\nclass AttributeValueType(graphene.Enum):\n COLOR = 'COLOR'\n GRADIENT = 'GRADIENT'\n URL = 'URL'\n STRING = 'STRING'\n\n\ndef resolve_attribute_list(attributes):\n keys = list(attributes.keys())\n values = list(attributes.values())\n\n attributes_map = {\n att.pk: att for att in models.Attribute.objects.filter(\n pk__in=keys)}\n values_map = {\n val.pk: val for val in models.AttributeValue.objects.filter(\n pk__in=values)}\n\n attributes_list = [SelectedAttribute(\n attribute=attributes_map.get(int(k)),\n value=values_map.get(int(v)))\n for k, v in attributes.items()]\n return attributes_list\n\n\ndef resolve_attribute_value_type(attribute_value):\n if color_pattern.match(attribute_value):\n return AttributeValueType.COLOR\n if 'gradient(' in attribute_value:\n return AttributeValueType.GRADIENT\n if '://' in attribute_value:\n return AttributeValueType.URL\n return AttributeValueType.STRING\n\n\nclass AttributeValue(CountableDjangoObjectType):\n name = graphene.String(description=AttributeValueDescriptions.NAME)\n slug = 
graphene.String(description=AttributeValueDescriptions.SLUG)\n type = AttributeValueType(description=AttributeValueDescriptions.TYPE)\n value = graphene.String(description=AttributeValueDescriptions.VALUE)\n\n class Meta:\n description = 'Represents a value of an attribute.'\n exclude_fields = ['attribute']\n interfaces = [relay.Node]\n model = models.AttributeValue\n\n def resolve_type(self, info):\n return resolve_attribute_value_type(self.value)\n\n\nclass Attribute(CountableDjangoObjectType):\n name = graphene.String(description=AttributeDescriptions.NAME)\n slug = graphene.String(description=AttributeDescriptions.SLUG)\n values = graphene.List(\n AttributeValue, description=AttributeDescriptions.VALUES)\n\n class Meta:\n description = \"\"\"Custom attribute of a product. Attributes can be\n assigned to products and variants at the product type level.\"\"\"\n exclude_fields = []\n interfaces = [relay.Node]\n filter_fields = ['id', 'slug']\n model = models.Attribute\n\n def resolve_values(self, info):\n return self.values.all()\n\n\nclass AttributeTypeEnum(graphene.Enum):\n PRODUCT = 'PRODUCT'\n VARIANT = 'VARIANT'\n\n\nclass Margin(graphene.ObjectType):\n start = graphene.Int()\n stop = graphene.Int()\n\n\nclass SelectedAttribute(graphene.ObjectType):\n attribute = graphene.Field(\n Attribute, default_value=None, description=AttributeDescriptions.NAME)\n value = graphene.Field(\n AttributeValue,\n default_value=None, description='Value of an attribute.')\n\n class Meta:\n description = 'Represents a custom attribute.'\n\n\nclass ProductVariant(CountableDjangoObjectType):\n stock_quantity = graphene.Int(\n required=True, description='Quantity of a product available for sale.')\n price_override = graphene.Field(\n Money,\n description=\"\"\"Override the base price of a product if necessary.\n A value of `null` indicates that the default product price is used.\"\"\")\n price = graphene.Field(Money, description=\"Price of the product variant.\")\n attributes = graphene.List(\n SelectedAttribute,\n description='List of attributes assigned to this variant.')\n cost_price = graphene.Field(\n Money, description='Cost price of the variant.')\n margin = graphene.Int(description='Gross margin percentage value.')\n\n class Meta:\n description = \"\"\"Represents a version of a product such as different\n size or color.\"\"\"\n exclude_fields = ['variant_images']\n interfaces = [relay.Node]\n model = models.ProductVariant\n filter_fields = ['id']\n\n def resolve_stock_quantity(self, info):\n return self.quantity_available\n\n def resolve_attributes(self, info):\n return resolve_attribute_list(self.attributes)\n\n def resolve_margin(self, info):\n return get_margin_for_variant(self)\n\n def resolve_price(self, info):\n return (\n self.price_override\n if self.price_override is not None else self.product.price)\n\n @permission_required('product.manage_products')\n def resolve_price_override(self, info):\n return self.price_override\n\n\nclass ProductAvailability(graphene.ObjectType):\n available = graphene.Boolean()\n on_sale = graphene.Boolean()\n discount = graphene.Field(TaxedMoney)\n discount_local_currency = graphene.Field(TaxedMoney)\n price_range = graphene.Field(TaxedMoneyRange)\n price_range_undiscounted = graphene.Field(TaxedMoneyRange)\n price_range_local_currency = graphene.Field(TaxedMoneyRange)\n\n class Meta:\n description = 'Represents availability of a product in the storefront.'\n\n\nclass Image(graphene.ObjectType):\n url = graphene.String(\n required=True,\n description='The URL of 
the image.',\n size=graphene.Int(description='Size of the image'))\n\n class Meta:\n description = 'Represents an image.'\n\n def resolve_url(self, info, size=None):\n if size:\n return get_thumbnail(self, size, method='thumbnail')\n return self.url\n\n\nclass Product(CountableDjangoObjectType):\n url = graphene.String(\n description='The storefront URL for the product.', required=True)\n thumbnail_url = graphene.String(\n description='The URL of a main thumbnail for a product.',\n size=graphene.Argument(graphene.Int, description='Size of thumbnail'))\n availability = graphene.Field(\n ProductAvailability,\n description=\"\"\"Informs about product's availability in the storefront,\n current price and discounts.\"\"\")\n price = graphene.Field(\n Money,\n description=\"\"\"The product's base price (without any discounts\n applied).\"\"\")\n attributes = graphene.List(\n SelectedAttribute,\n description='List of attributes assigned to this product.')\n purchase_cost = graphene.Field(MoneyRange)\n margin = graphene.Field(Margin)\n image_by_id = graphene.Field(\n lambda: ProductImage,\n id=graphene.Argument(\n graphene.ID, description='ID of a product image.'),\n description='Get a single product image by ID')\n\n class Meta:\n description = \"\"\"Represents an individual item for sale in the\n storefront.\"\"\"\n interfaces = [relay.Node]\n model = models.Product\n\n def resolve_thumbnail_url(self, info, *, size=None):\n if not size:\n size = 255\n return get_thumbnail(self.get_first_image(), size, method='thumbnail')\n\n def resolve_url(self, info):\n return self.get_absolute_url()\n\n def resolve_availability(self, info):\n context = info.context\n availability = get_availability(\n self, context.discounts, context.taxes, context.currency)\n return ProductAvailability(**availability._asdict())\n\n def resolve_attributes(self, info):\n return resolve_attribute_list(self.attributes)\n\n def resolve_product_type(self, info):\n return self.product_type\n\n @permission_required('product.manage_products')\n def resolve_purchase_cost(self, info):\n purchase_cost, _ = get_product_costs_data(self)\n return purchase_cost\n\n @permission_required('product.manage_products')\n def resolve_margin(self, info):\n _, margin = get_product_costs_data(self)\n return Margin(margin[0], margin[1])\n\n def resolve_image_by_id(self, info, id):\n pk = get_database_id(info, id, ProductImage)\n try:\n return self.images.get(pk=pk)\n except models.ProductImage.DoesNotExist:\n raise GraphQLError('Product image not found.')\n\n\nclass ProductType(CountableDjangoObjectType):\n products = DjangoFilterConnectionField(\n Product,\n filterset_class=ProductFilterSet,\n description='List of products of this type.')\n tax_rate = TaxRateType(description='A type of tax rate.')\n variant_attributes = graphene.List(\n Attribute, description='Variant attributes of that product type.')\n product_attributes = graphene.List(\n Attribute, description='Product attributes of that product type.')\n\n class Meta:\n description = \"\"\"Represents a type of product. 
It defines what\n attributes are available to products of this type.\"\"\"\n interfaces = [relay.Node]\n model = models.ProductType\n filter_fields = ['id']\n\n def resolve_products(self, info, **kwargs):\n user = info.context.user\n return products_with_details(\n user=user).filter(product_type=self).distinct()\n\n def resolve_variant_attributes(self, info):\n return self.variant_attributes.prefetch_related('values')\n\n def resolve_product_attributes(self, info):\n return self.product_attributes.prefetch_related('values')\n\n\nclass Collection(CountableDjangoObjectType):\n products = DjangoFilterConnectionField(\n Product, filterset_class=ProductFilterSet,\n description='List of collection products.')\n background_image = graphene.Field(Image)\n\n class Meta:\n description = \"Represents a collection of products.\"\n exclude_fields = ['voucher_set', 'sale_set', 'menuitem_set']\n filter_fields = {\n 'name': ['exact', 'icontains', 'istartswith']}\n interfaces = [relay.Node]\n model = models.Collection\n\n def resolve_products(self, info, **kwargs):\n user = info.context.user\n return products_with_details(\n user=user).filter(collections=self).distinct()\n\n\nclass Category(CountableDjangoObjectType):\n products = DjangoFilterConnectionField(\n Product,\n filterset_class=ProductFilterSet,\n description='List of products in the category.')\n url = graphene.String(\n description='The storefront\\'s URL for the category.')\n ancestors = DjangoFilterConnectionField(\n lambda: Category,\n description='List of ancestors of the category.')\n children = DjangoFilterConnectionField(\n lambda: Category,\n description='List of children of the category.')\n background_image = graphene.Field(Image)\n\n class Meta:\n description = \"\"\"Represents a single category of products. 
Categories\n allow to organize products in a tree-hierarchies which can be used for\n navigation in the storefront.\"\"\"\n exclude_fields = [\n 'lft', 'rght', 'tree_id', 'voucher_set', 'sale_set',\n 'menuitem_set']\n interfaces = [relay.Node]\n filter_fields = ['id', 'name']\n model = models.Category\n\n def resolve_ancestors(self, info, **kwargs):\n return self.get_ancestors().distinct()\n\n def resolve_children(self, info, **kwargs):\n return self.children.distinct()\n\n def resolve_url(self, info):\n return self.get_absolute_url()\n\n def resolve_products(self, info, **kwargs):\n qs = models.Product.objects.available_products().prefetch_related(\n 'variants', 'images', 'product_type')\n categories_tree = self.get_descendants(include_self=True)\n qs = qs.filter(category__in=categories_tree)\n return qs.distinct()\n\n\nclass ProductImage(CountableDjangoObjectType):\n url = graphene.String(\n required=True,\n description='The URL of the image.',\n size=graphene.Int(description='Size of the image'))\n\n class Meta:\n description = 'Represents a product image.'\n exclude_fields = [\n 'image', 'product', 'ppoi', 'productvariant_set',\n 'variant_images']\n interfaces = [relay.Node]\n model = models.ProductImage\n\n def resolve_url(self, info, *, size=None):\n if size:\n return get_thumbnail(self.image, size, method='thumbnail')\n return self.image.url\n", "path": "saleor/graphql/product/types.py"}], "after_files": [{"content": "import re\n\nimport graphene\nfrom graphene import relay\nfrom graphene_django.filter import DjangoFilterConnectionField\nfrom graphql.error import GraphQLError\n\nfrom ...product import models\nfrom ...product.templatetags.product_images import get_thumbnail\nfrom ...product.utils import products_with_details\nfrom ...product.utils.availability import get_availability\nfrom ...product.utils.costs import (\n get_margin_for_variant, get_product_costs_data)\nfrom ..core.decorators import permission_required\nfrom ..core.types.common import CountableDjangoObjectType\nfrom ..core.types.money import (\n Money, MoneyRange, TaxedMoney, TaxedMoneyRange, TaxRateType)\nfrom ..utils import get_database_id\nfrom .descriptions import AttributeDescriptions, AttributeValueDescriptions\nfrom .filters import ProductFilterSet\n\nCOLOR_PATTERN = r'^(#[0-9a-fA-F]{3}|#(?:[0-9a-fA-F]{2}){2,4}|(rgb|hsl)a?\\((-?\\d+%?[,\\s]+){2,3}\\s*[\\d\\.]+%?\\))$' # noqa\ncolor_pattern = re.compile(COLOR_PATTERN)\n\n\nclass AttributeValueType(graphene.Enum):\n COLOR = 'COLOR'\n GRADIENT = 'GRADIENT'\n URL = 'URL'\n STRING = 'STRING'\n\n\ndef resolve_attribute_list(attributes):\n keys = list(attributes.keys())\n values = list(attributes.values())\n\n attributes_map = {\n att.pk: att for att in models.Attribute.objects.filter(\n pk__in=keys)}\n values_map = {\n val.pk: val for val in models.AttributeValue.objects.filter(\n pk__in=values)}\n\n attributes_list = [SelectedAttribute(\n attribute=attributes_map.get(int(k)),\n value=values_map.get(int(v)))\n for k, v in attributes.items()]\n return attributes_list\n\n\ndef resolve_attribute_value_type(attribute_value):\n if color_pattern.match(attribute_value):\n return AttributeValueType.COLOR\n if 'gradient(' in attribute_value:\n return AttributeValueType.GRADIENT\n if '://' in attribute_value:\n return AttributeValueType.URL\n return AttributeValueType.STRING\n\n\nclass AttributeValue(CountableDjangoObjectType):\n name = graphene.String(description=AttributeValueDescriptions.NAME)\n slug = graphene.String(description=AttributeValueDescriptions.SLUG)\n type = 
AttributeValueType(description=AttributeValueDescriptions.TYPE)\n value = graphene.String(description=AttributeValueDescriptions.VALUE)\n\n class Meta:\n description = 'Represents a value of an attribute.'\n exclude_fields = ['attribute']\n interfaces = [relay.Node]\n model = models.AttributeValue\n\n def resolve_type(self, info):\n return resolve_attribute_value_type(self.value)\n\n\nclass Attribute(CountableDjangoObjectType):\n name = graphene.String(description=AttributeDescriptions.NAME)\n slug = graphene.String(description=AttributeDescriptions.SLUG)\n values = graphene.List(\n AttributeValue, description=AttributeDescriptions.VALUES)\n\n class Meta:\n description = \"\"\"Custom attribute of a product. Attributes can be\n assigned to products and variants at the product type level.\"\"\"\n exclude_fields = []\n interfaces = [relay.Node]\n filter_fields = ['id', 'slug']\n model = models.Attribute\n\n def resolve_values(self, info):\n return self.values.all()\n\n\nclass AttributeTypeEnum(graphene.Enum):\n PRODUCT = 'PRODUCT'\n VARIANT = 'VARIANT'\n\n\nclass Margin(graphene.ObjectType):\n start = graphene.Int()\n stop = graphene.Int()\n\n\nclass SelectedAttribute(graphene.ObjectType):\n attribute = graphene.Field(\n Attribute, default_value=None, description=AttributeDescriptions.NAME)\n value = graphene.Field(\n AttributeValue,\n default_value=None, description='Value of an attribute.')\n\n class Meta:\n description = 'Represents a custom attribute.'\n\n\nclass ProductVariant(CountableDjangoObjectType):\n stock_quantity = graphene.Int(\n required=True, description='Quantity of a product available for sale.')\n price_override = graphene.Field(\n Money,\n description=\"\"\"Override the base price of a product if necessary.\n A value of `null` indicates that the default product price is used.\"\"\")\n price = graphene.Field(Money, description=\"Price of the product variant.\")\n attributes = graphene.List(\n SelectedAttribute,\n description='List of attributes assigned to this variant.')\n cost_price = graphene.Field(\n Money, description='Cost price of the variant.')\n margin = graphene.Int(description='Gross margin percentage value.')\n\n class Meta:\n description = \"\"\"Represents a version of a product such as different\n size or color.\"\"\"\n exclude_fields = ['variant_images']\n interfaces = [relay.Node]\n model = models.ProductVariant\n filter_fields = ['id']\n\n def resolve_stock_quantity(self, info):\n return self.quantity_available\n\n def resolve_attributes(self, info):\n return resolve_attribute_list(self.attributes)\n\n def resolve_margin(self, info):\n return get_margin_for_variant(self)\n\n def resolve_price(self, info):\n return (\n self.price_override\n if self.price_override is not None else self.product.price)\n\n @permission_required('product.manage_products')\n def resolve_price_override(self, info):\n return self.price_override\n\n\nclass ProductAvailability(graphene.ObjectType):\n available = graphene.Boolean()\n on_sale = graphene.Boolean()\n discount = graphene.Field(TaxedMoney)\n discount_local_currency = graphene.Field(TaxedMoney)\n price_range = graphene.Field(TaxedMoneyRange)\n price_range_undiscounted = graphene.Field(TaxedMoneyRange)\n price_range_local_currency = graphene.Field(TaxedMoneyRange)\n\n class Meta:\n description = 'Represents availability of a product in the storefront.'\n\n\nclass Image(graphene.ObjectType):\n url = graphene.String(\n required=True,\n description='The URL of the image.',\n size=graphene.Int(description='Size of the 
image'))\n\n class Meta:\n description = 'Represents an image.'\n\n def resolve_url(self, info, size=None):\n if size:\n return get_thumbnail(self, size, method='thumbnail')\n return self.url\n\n\nclass Product(CountableDjangoObjectType):\n url = graphene.String(\n description='The storefront URL for the product.', required=True)\n thumbnail_url = graphene.String(\n description='The URL of a main thumbnail for a product.',\n size=graphene.Argument(graphene.Int, description='Size of thumbnail'))\n availability = graphene.Field(\n ProductAvailability,\n description=\"\"\"Informs about product's availability in the storefront,\n current price and discounts.\"\"\")\n price = graphene.Field(\n Money,\n description=\"\"\"The product's base price (without any discounts\n applied).\"\"\")\n attributes = graphene.List(\n SelectedAttribute,\n description='List of attributes assigned to this product.')\n purchase_cost = graphene.Field(MoneyRange)\n margin = graphene.Field(Margin)\n image_by_id = graphene.Field(\n lambda: ProductImage,\n id=graphene.Argument(\n graphene.ID, description='ID of a product image.'),\n description='Get a single product image by ID')\n\n class Meta:\n description = \"\"\"Represents an individual item for sale in the\n storefront.\"\"\"\n interfaces = [relay.Node]\n model = models.Product\n\n def resolve_thumbnail_url(self, info, *, size=None):\n if not size:\n size = 255\n return get_thumbnail(self.get_first_image(), size, method='thumbnail')\n\n def resolve_url(self, info):\n return self.get_absolute_url()\n\n def resolve_availability(self, info):\n context = info.context\n availability = get_availability(\n self, context.discounts, context.taxes, context.currency)\n return ProductAvailability(**availability._asdict())\n\n def resolve_attributes(self, info):\n return resolve_attribute_list(self.attributes)\n\n def resolve_product_type(self, info):\n return self.product_type\n\n @permission_required('product.manage_products')\n def resolve_purchase_cost(self, info):\n purchase_cost, _ = get_product_costs_data(self)\n return purchase_cost\n\n @permission_required('product.manage_products')\n def resolve_margin(self, info):\n _, margin = get_product_costs_data(self)\n return Margin(margin[0], margin[1])\n\n def resolve_image_by_id(self, info, id):\n pk = get_database_id(info, id, ProductImage)\n try:\n return self.images.get(pk=pk)\n except models.ProductImage.DoesNotExist:\n raise GraphQLError('Product image not found.')\n\n\nclass ProductType(CountableDjangoObjectType):\n products = DjangoFilterConnectionField(\n Product,\n filterset_class=ProductFilterSet,\n description='List of products of this type.')\n tax_rate = TaxRateType(description='A type of tax rate.')\n variant_attributes = graphene.List(\n Attribute, description='Variant attributes of that product type.')\n product_attributes = graphene.List(\n Attribute, description='Product attributes of that product type.')\n\n class Meta:\n description = \"\"\"Represents a type of product. 
It defines what\n attributes are available to products of this type.\"\"\"\n interfaces = [relay.Node]\n model = models.ProductType\n filter_fields = ['id']\n\n def resolve_products(self, info, **kwargs):\n user = info.context.user\n return products_with_details(\n user=user).filter(product_type=self).distinct()\n\n def resolve_variant_attributes(self, info):\n return self.variant_attributes.prefetch_related('values')\n\n def resolve_product_attributes(self, info):\n return self.product_attributes.prefetch_related('values')\n\n\nclass Collection(CountableDjangoObjectType):\n products = DjangoFilterConnectionField(\n Product, filterset_class=ProductFilterSet,\n description='List of collection products.')\n background_image = graphene.Field(Image)\n\n class Meta:\n description = \"Represents a collection of products.\"\n exclude_fields = ['voucher_set', 'sale_set', 'menuitem_set']\n filter_fields = {\n 'name': ['exact', 'icontains', 'istartswith']}\n interfaces = [relay.Node]\n model = models.Collection\n\n def resolve_background_image(self, info, **kwargs):\n return self.background_image or None\n\n def resolve_products(self, info, **kwargs):\n user = info.context.user\n return products_with_details(\n user=user).filter(collections=self).distinct()\n\n\nclass Category(CountableDjangoObjectType):\n products = DjangoFilterConnectionField(\n Product,\n filterset_class=ProductFilterSet,\n description='List of products in the category.')\n url = graphene.String(\n description='The storefront\\'s URL for the category.')\n ancestors = DjangoFilterConnectionField(\n lambda: Category,\n description='List of ancestors of the category.')\n children = DjangoFilterConnectionField(\n lambda: Category,\n description='List of children of the category.')\n background_image = graphene.Field(Image)\n\n class Meta:\n description = \"\"\"Represents a single category of products. Categories\n allow to organize products in a tree-hierarchies which can be used for\n navigation in the storefront.\"\"\"\n exclude_fields = [\n 'lft', 'rght', 'tree_id', 'voucher_set', 'sale_set',\n 'menuitem_set']\n interfaces = [relay.Node]\n filter_fields = ['id', 'name']\n model = models.Category\n\n def resolve_ancestors(self, info, **kwargs):\n return self.get_ancestors().distinct()\n\n def resolve_background_image(self, info, **kwargs):\n return self.background_image or None\n\n def resolve_children(self, info, **kwargs):\n return self.children.distinct()\n\n def resolve_url(self, info):\n return self.get_absolute_url()\n\n def resolve_products(self, info, **kwargs):\n qs = models.Product.objects.available_products().prefetch_related(\n 'variants', 'images', 'product_type')\n categories_tree = self.get_descendants(include_self=True)\n qs = qs.filter(category__in=categories_tree)\n return qs.distinct()\n\n\nclass ProductImage(CountableDjangoObjectType):\n url = graphene.String(\n required=True,\n description='The URL of the image.',\n size=graphene.Int(description='Size of the image'))\n\n class Meta:\n description = 'Represents a product image.'\n exclude_fields = [\n 'image', 'product', 'ppoi', 'productvariant_set',\n 'variant_images']\n interfaces = [relay.Node]\n model = models.ProductImage\n\n def resolve_url(self, info, *, size=None):\n if size:\n return get_thumbnail(self.image, size, method='thumbnail')\n return self.image.url\n", "path": "saleor/graphql/product/types.py"}]} | 4,059 | 195 |
gh_patches_debug_35340 | rasdani/github-patches | git_diff | microsoft__playwright-python-86 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update versions in README.md on Playwright roll
--- END ISSUE ---
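One way to read this task: each time the vendored Playwright driver is rolled, version strings in this repository's README.md should be refreshed from the upstream package. A minimal sketch of that idea in Python, assuming the upstream README wraps the generated values in `<!-- GEN:name -->value<!-- GEN:stop -->` comment markers (the marker convention is an assumption, not stated in the issue):

```python
import re

def sync_generated_sections(upstream_readme: str, local_readme: str) -> str:
    # Copy every marked region from the upstream README into the local one.
    for key, value in re.findall(r"<!-- GEN:(.*?) -->(.*?)<!-- GEN:stop -->", upstream_readme):
        local_readme = re.sub(
            rf"(<!-- GEN:{key} -->).*?(<!-- GEN:stop -->)",
            f"<!-- GEN:{key} -->{value}<!-- GEN:stop -->",
            local_readme,
        )
    return local_readme
```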
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `build_driver.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import gzip
16 import os
17 import shutil
18 import subprocess
19
20 driver_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "driver")
21 package_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "playwright")
22 drivers_path = os.path.join(package_path, "drivers")
23
24 if os.path.exists(os.path.join(driver_path, "package-lock.json")):
25 os.remove(os.path.join(driver_path, "package-lock.json"))
26 if os.path.exists(os.path.join(driver_path, "node_modules")):
27 shutil.rmtree(os.path.join(driver_path, "node_modules"))
28 if os.path.exists(os.path.join(driver_path, "out")):
29 shutil.rmtree(os.path.join(driver_path, "out"))
30
31 subprocess.run("npm i", cwd=driver_path, shell=True)
32 subprocess.run("npm run bake", cwd=driver_path, shell=True)
33
34 for driver in ["driver-linux", "driver-macos", "driver-win.exe"]:
35 if os.path.exists(os.path.join(package_path, driver)):
36 os.remove(os.path.join(package_path, driver))
37
38 in_path = os.path.join(driver_path, "out", driver)
39 out_path = os.path.join(drivers_path, driver + ".gz")
40 with open(in_path, "rb") as f_in, gzip.open(out_path, "wb") as f_out:
41 shutil.copyfileobj(f_in, f_out)
42
43 shutil.copyfile(
44 os.path.join(driver_path, "node_modules", "playwright", "browsers.json"),
45 os.path.join(drivers_path, "browsers.json"),
46 )
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/build_driver.py b/build_driver.py
--- a/build_driver.py
+++ b/build_driver.py
@@ -14,33 +14,52 @@
import gzip
import os
+import re
import shutil
import subprocess
+from pathlib import Path
-driver_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "driver")
-package_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "playwright")
-drivers_path = os.path.join(package_path, "drivers")
+_dirname = Path(os.path.dirname(os.path.abspath(__file__)))
-if os.path.exists(os.path.join(driver_path, "package-lock.json")):
- os.remove(os.path.join(driver_path, "package-lock.json"))
-if os.path.exists(os.path.join(driver_path, "node_modules")):
- shutil.rmtree(os.path.join(driver_path, "node_modules"))
-if os.path.exists(os.path.join(driver_path, "out")):
- shutil.rmtree(os.path.join(driver_path, "out"))
+driver_path = _dirname / "driver"
+package_path = _dirname / "playwright"
+drivers_path = package_path / "drivers"
+
+if (driver_path / "package-lock.json").exists():
+ os.remove(driver_path / "package-lock.json")
+if (driver_path / "node_modules").exists():
+ shutil.rmtree(driver_path / "node_modules")
+if (driver_path / "out").exists():
+ shutil.rmtree(driver_path / "out")
subprocess.run("npm i", cwd=driver_path, shell=True)
subprocess.run("npm run bake", cwd=driver_path, shell=True)
for driver in ["driver-linux", "driver-macos", "driver-win.exe"]:
- if os.path.exists(os.path.join(package_path, driver)):
- os.remove(os.path.join(package_path, driver))
+ if (package_path / driver).exists():
+ os.remove((package_path / driver))
- in_path = os.path.join(driver_path, "out", driver)
- out_path = os.path.join(drivers_path, driver + ".gz")
+ in_path = driver_path / "out" / driver
+ out_path = drivers_path / (driver + ".gz")
with open(in_path, "rb") as f_in, gzip.open(out_path, "wb") as f_out:
shutil.copyfileobj(f_in, f_out)
+node_modules_playwright = driver_path / "node_modules" / "playwright"
+
shutil.copyfile(
- os.path.join(driver_path, "node_modules", "playwright", "browsers.json"),
- os.path.join(drivers_path, "browsers.json"),
+ node_modules_playwright / "browsers.json", drivers_path / "browsers.json",
)
+
+upstream_readme = (node_modules_playwright / "README.md").read_text()
+pw_python_readme = (_dirname / "README.md").read_text()
+
+matches = re.findall(r"<!-- GEN:(.*?) -->(.*?)<!-- GEN:stop -->", upstream_readme)
+
+for key, value in matches:
+ pw_python_readme = re.sub(
+ rf"(<!-- GEN:{key} -->).*?(<!-- GEN:stop -->)",
+ f"<!-- GEN:{key} -->{value}<!-- GEN:stop -->",
+ pw_python_readme,
+ )
+
+(_dirname / "README.md").write_text(pw_python_readme)
| {"golden_diff": "diff --git a/build_driver.py b/build_driver.py\n--- a/build_driver.py\n+++ b/build_driver.py\n@@ -14,33 +14,52 @@\n \n import gzip\n import os\n+import re\n import shutil\n import subprocess\n+from pathlib import Path\n \n-driver_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"driver\")\n-package_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"playwright\")\n-drivers_path = os.path.join(package_path, \"drivers\")\n+_dirname = Path(os.path.dirname(os.path.abspath(__file__)))\n \n-if os.path.exists(os.path.join(driver_path, \"package-lock.json\")):\n- os.remove(os.path.join(driver_path, \"package-lock.json\"))\n-if os.path.exists(os.path.join(driver_path, \"node_modules\")):\n- shutil.rmtree(os.path.join(driver_path, \"node_modules\"))\n-if os.path.exists(os.path.join(driver_path, \"out\")):\n- shutil.rmtree(os.path.join(driver_path, \"out\"))\n+driver_path = _dirname / \"driver\"\n+package_path = _dirname / \"playwright\"\n+drivers_path = package_path / \"drivers\"\n+\n+if (driver_path / \"package-lock.json\").exists():\n+ os.remove(driver_path / \"package-lock.json\")\n+if (driver_path / \"node_modules\").exists():\n+ shutil.rmtree(driver_path / \"node_modules\")\n+if (driver_path / \"out\").exists():\n+ shutil.rmtree(driver_path / \"out\")\n \n subprocess.run(\"npm i\", cwd=driver_path, shell=True)\n subprocess.run(\"npm run bake\", cwd=driver_path, shell=True)\n \n for driver in [\"driver-linux\", \"driver-macos\", \"driver-win.exe\"]:\n- if os.path.exists(os.path.join(package_path, driver)):\n- os.remove(os.path.join(package_path, driver))\n+ if (package_path / driver).exists():\n+ os.remove((package_path / driver))\n \n- in_path = os.path.join(driver_path, \"out\", driver)\n- out_path = os.path.join(drivers_path, driver + \".gz\")\n+ in_path = driver_path / \"out\" / driver\n+ out_path = drivers_path / (driver + \".gz\")\n with open(in_path, \"rb\") as f_in, gzip.open(out_path, \"wb\") as f_out:\n shutil.copyfileobj(f_in, f_out)\n \n+node_modules_playwright = driver_path / \"node_modules\" / \"playwright\"\n+\n shutil.copyfile(\n- os.path.join(driver_path, \"node_modules\", \"playwright\", \"browsers.json\"),\n- os.path.join(drivers_path, \"browsers.json\"),\n+ node_modules_playwright / \"browsers.json\", drivers_path / \"browsers.json\",\n )\n+\n+upstream_readme = (node_modules_playwright / \"README.md\").read_text()\n+pw_python_readme = (_dirname / \"README.md\").read_text()\n+\n+matches = re.findall(r\"<!-- GEN:(.*?) 
-->(.*?)<!-- GEN:stop -->\", upstream_readme)\n+\n+for key, value in matches:\n+ pw_python_readme = re.sub(\n+ rf\"(<!-- GEN:{key} -->).*?(<!-- GEN:stop -->)\",\n+ f\"<!-- GEN:{key} -->{value}<!-- GEN:stop -->\",\n+ pw_python_readme,\n+ )\n+\n+(_dirname / \"README.md\").write_text(pw_python_readme)\n", "issue": "Update versions in README.md on Playwright roll\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport gzip\nimport os\nimport shutil\nimport subprocess\n\ndriver_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"driver\")\npackage_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"playwright\")\ndrivers_path = os.path.join(package_path, \"drivers\")\n\nif os.path.exists(os.path.join(driver_path, \"package-lock.json\")):\n os.remove(os.path.join(driver_path, \"package-lock.json\"))\nif os.path.exists(os.path.join(driver_path, \"node_modules\")):\n shutil.rmtree(os.path.join(driver_path, \"node_modules\"))\nif os.path.exists(os.path.join(driver_path, \"out\")):\n shutil.rmtree(os.path.join(driver_path, \"out\"))\n\nsubprocess.run(\"npm i\", cwd=driver_path, shell=True)\nsubprocess.run(\"npm run bake\", cwd=driver_path, shell=True)\n\nfor driver in [\"driver-linux\", \"driver-macos\", \"driver-win.exe\"]:\n if os.path.exists(os.path.join(package_path, driver)):\n os.remove(os.path.join(package_path, driver))\n\n in_path = os.path.join(driver_path, \"out\", driver)\n out_path = os.path.join(drivers_path, driver + \".gz\")\n with open(in_path, \"rb\") as f_in, gzip.open(out_path, \"wb\") as f_out:\n shutil.copyfileobj(f_in, f_out)\n\nshutil.copyfile(\n os.path.join(driver_path, \"node_modules\", \"playwright\", \"browsers.json\"),\n os.path.join(drivers_path, \"browsers.json\"),\n)\n", "path": "build_driver.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport gzip\nimport os\nimport re\nimport shutil\nimport subprocess\nfrom pathlib import Path\n\n_dirname = Path(os.path.dirname(os.path.abspath(__file__)))\n\ndriver_path = _dirname / \"driver\"\npackage_path = _dirname / \"playwright\"\ndrivers_path = package_path / \"drivers\"\n\nif (driver_path / \"package-lock.json\").exists():\n os.remove(driver_path / \"package-lock.json\")\nif (driver_path / \"node_modules\").exists():\n shutil.rmtree(driver_path / \"node_modules\")\nif (driver_path / \"out\").exists():\n shutil.rmtree(driver_path / 
\"out\")\n\nsubprocess.run(\"npm i\", cwd=driver_path, shell=True)\nsubprocess.run(\"npm run bake\", cwd=driver_path, shell=True)\n\nfor driver in [\"driver-linux\", \"driver-macos\", \"driver-win.exe\"]:\n if (package_path / driver).exists():\n os.remove((package_path / driver))\n\n in_path = driver_path / \"out\" / driver\n out_path = drivers_path / (driver + \".gz\")\n with open(in_path, \"rb\") as f_in, gzip.open(out_path, \"wb\") as f_out:\n shutil.copyfileobj(f_in, f_out)\n\nnode_modules_playwright = driver_path / \"node_modules\" / \"playwright\"\n\nshutil.copyfile(\n node_modules_playwright / \"browsers.json\", drivers_path / \"browsers.json\",\n)\n\nupstream_readme = (node_modules_playwright / \"README.md\").read_text()\npw_python_readme = (_dirname / \"README.md\").read_text()\n\nmatches = re.findall(r\"<!-- GEN:(.*?) -->(.*?)<!-- GEN:stop -->\", upstream_readme)\n\nfor key, value in matches:\n pw_python_readme = re.sub(\n rf\"(<!-- GEN:{key} -->).*?(<!-- GEN:stop -->)\",\n f\"<!-- GEN:{key} -->{value}<!-- GEN:stop -->\",\n pw_python_readme,\n )\n\n(_dirname / \"README.md\").write_text(pw_python_readme)\n", "path": "build_driver.py"}]} | 808 | 733 |
gh_patches_debug_18813 | rasdani/github-patches | git_diff | ibis-project__ibis-2521 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BENCH: cleanup errors around benchmarks
We are showing some errors in the benchmark suite: https://github.com/ibis-project/ibis/pull/2451/checks?check_run_id=1220781799
It would be nice to have these run fully without errors.
--- END ISSUE ---
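A quick way to surface these errors locally, assuming ibis with its pandas backend plus numpy and pandas are installed, is to drive the asv-style benchmark class by hand rather than through the benchmark runner (a sketch only; `setup()` builds a large DataFrame, so it is slow):

```python
# Hypothetical local reproduction: call every time_* method once and print failures.
from benchmarks.benchmarks import PandasBackend

bench = PandasBackend()
bench.setup()
for name in dir(bench):
    if name.startswith("time_"):
        try:
            getattr(bench, name)()
        except Exception as exc:  # the same errors the CI benchmark run reports
            print(f"{name}: {exc!r}")
```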
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `benchmarks/benchmarks.py`
Content:
```
1 import numpy as np
2 import pandas as pd
3
4 import ibis
5 import ibis.expr.datatypes as dt
6 from ibis.backends.pandas.udf import udf
7
8
9 def make_t(name='t'):
10 return ibis.table(
11 (
12 ('_timestamp', 'int32'),
13 ('dim1', 'int32'),
14 ('dim2', 'int32'),
15 ('valid_seconds', 'int32'),
16 ('meas1', 'int32'),
17 ('meas2', 'int32'),
18 ('year', 'int32'),
19 ('month', 'int32'),
20 ('day', 'int32'),
21 ('hour', 'int32'),
22 ('minute', 'int32'),
23 ),
24 name=name,
25 )
26
27
28 def make_base(t):
29 return (
30 (t.year > 2016)
31 | ((t.year == 2016) & (t.month > 6))
32 | ((t.year == 2016) & (t.month == 6) & (t.day > 6))
33 | ((t.year == 2016) & (t.month == 6) & (t.day == 6) & (t.hour > 6))
34 | (
35 (t.year == 2016)
36 & (t.month == 6)
37 & (t.day == 6)
38 & (t.hour == 6)
39 & (t.minute >= 5)
40 )
41 ) & (
42 (t.year < 2016)
43 | ((t.year == 2016) & (t.month < 6))
44 | ((t.year == 2016) & (t.month == 6) & (t.day < 6))
45 | ((t.year == 2016) & (t.month == 6) & (t.day == 6) & (t.hour < 6))
46 | (
47 (t.year == 2016)
48 & (t.month == 6)
49 & (t.day == 6)
50 & (t.hour == 6)
51 & (t.minute <= 5)
52 )
53 )
54
55
56 def make_large_expr(t, base):
57 src_table = t[base]
58 src_table = src_table.mutate(
59 _timestamp=(src_table['_timestamp'] - src_table['_timestamp'] % 3600)
60 .cast('int32')
61 .name('_timestamp'),
62 valid_seconds=300,
63 )
64
65 aggs = []
66 for meas in ['meas1', 'meas2']:
67 aggs.append(src_table[meas].sum().cast('float').name(meas))
68 src_table = src_table.aggregate(
69 aggs, by=['_timestamp', 'dim1', 'dim2', 'valid_seconds']
70 )
71
72 part_keys = ['year', 'month', 'day', 'hour', 'minute']
73 ts_col = src_table['_timestamp'].cast('timestamp')
74 new_cols = {}
75 for part_key in part_keys:
76 part_col = getattr(ts_col, part_key)()
77 new_cols[part_key] = part_col
78 src_table = src_table.mutate(**new_cols)
79 return src_table[
80 [
81 '_timestamp',
82 'dim1',
83 'dim2',
84 'meas1',
85 'meas2',
86 'year',
87 'month',
88 'day',
89 'hour',
90 'minute',
91 ]
92 ]
93
94
95 class Suite:
96 def setup(self):
97 self.t = t = make_t()
98 self.base = make_base(t)
99 self.expr = self.large_expr
100
101 @property
102 def large_expr(self):
103 t = make_t()
104 return make_large_expr(t, make_base(t))
105
106
107 class Construction(Suite):
108 def time_large_expr_construction(self):
109 self.large_expr
110
111
112 class Hashing(Suite):
113 def time_hash_small_expr(self):
114 hash(make_t())
115
116 def time_hash_medium_expr(self):
117 hash(make_base(make_t()))
118
119 def time_hash_large_expr(self):
120 hash(self.large_expr)
121
122
123 class Formatting(Suite):
124 def time_base_expr_formatting(self):
125 str(self.base)
126
127 def time_large_expr_formatting(self):
128 str(self.expr)
129
130
131 class Compilation(Suite):
132 def time_impala_base_compile(self):
133 ibis.impala.compile(self.base)
134
135 def time_impala_large_expr_compile(self):
136 ibis.impala.compile(self.expr)
137
138
139 class PandasBackend:
140 def setup(self):
141 n = 30 * int(2e5)
142 self.data = pd.DataFrame(
143 {
144 'key': np.random.choice(16000, size=n),
145 'low_card_key': np.random.choice(30, size=n),
146 'value': np.random.rand(n),
147 'timestamps': pd.date_range(
148 start='now', periods=n, freq='s'
149 ).values,
150 'timestamp_strings': pd.date_range(
151 start='now', periods=n, freq='s'
152 ).values.astype(str),
153 'repeated_timestamps': pd.date_range(
154 start='2018-09-01', periods=30
155 ).repeat(int(n / 30)),
156 }
157 )
158
159 t = ibis.pandas.connect({'df': self.data}).table('df')
160
161 self.high_card_group_by = t.groupby(t.key).aggregate(
162 avg_value=t.value.mean()
163 )
164
165 self.cast_to_dates = t.timestamps.cast(dt.date)
166 self.cast_to_dates_from_strings = t.timestamp_strings.cast(dt.date)
167
168 self.multikey_group_by_with_mutate = (
169 t.mutate(dates=t.timestamps.cast('date'))
170 .groupby(['low_card_key', 'dates'])
171 .aggregate(avg_value=lambda t: t.value.mean())
172 )
173
174 self.simple_sort = t.sort_by([t.key])
175
176 self.simple_sort_projection = t[['key', 'value']].sort_by(['key'])
177
178 self.multikey_sort = t.sort_by(['low_card_key', 'key'])
179
180 self.multikey_sort_projection = t[
181 ['low_card_key', 'key', 'value']
182 ].sort_by(['low_card_key', 'key'])
183
184 low_card_rolling_window = ibis.trailing_range_window(
185 ibis.interval(days=2),
186 order_by=t.repeated_timestamps,
187 group_by=t.low_card_key,
188 )
189 self.low_card_grouped_rolling = t.value.mean().over(
190 low_card_rolling_window
191 )
192
193 high_card_rolling_window = ibis.trailing_range_window(
194 ibis.interval(days=2),
195 order_by=t.repeated_timestamps,
196 group_by=t.key,
197 )
198 self.high_card_grouped_rolling = t.value.mean().over(
199 high_card_rolling_window
200 )
201
202 @udf.reduction(['double'], 'double')
203 def my_mean(series):
204 return series.mean()
205
206 self.low_card_grouped_rolling_udf_mean = my_mean(t.value).over(
207 low_card_rolling_window
208 )
209 self.high_card_grouped_rolling_udf_mean = my_mean(t.value).over(
210 high_card_rolling_window
211 )
212
213 @udf.analytic(['double'], 'double')
214 def my_zscore(series):
215 return (series - series.mean()) / series.std()
216
217 low_card_window = ibis.window(group_by=t.low_card_key)
218
219 high_card_window = ibis.window(group_by=t.key)
220
221 self.low_card_window_analytics_udf = my_zscore(t.value).over(
222 low_card_window
223 )
224 self.high_card_window_analytics_udf = my_zscore(t.value).over(
225 high_card_window
226 )
227
228 @udf.reduction(['double', 'double'], 'double')
229 def my_wm(v, w):
230 return np.average(v, weights=w)
231
232 self.low_card_grouped_rolling_udf_wm = my_wm(t.value, t.value).over(
233 low_card_rolling_window
234 )
235
236 self.high_card_grouped_rolling_udf_wm = my_wm(t.value, t.value).over(
237 low_card_rolling_window
238 )
239
240 def time_high_cardinality_group_by(self):
241 self.high_card_group_by.execute()
242
243 def time_cast_to_date(self):
244 self.cast_to_dates.execute()
245
246 def time_cast_to_date_from_string(self):
247 self.cast_to_dates_from_strings.execute()
248
249 def time_multikey_group_by_with_mutate(self):
250 self.multikey_group_by_with_mutate.execute()
251
252 def time_simple_sort(self):
253 self.simple_sort.execute()
254
255 def time_multikey_sort(self):
256 self.multikey_sort.execute()
257
258 def time_simple_sort_projection(self):
259 self.simple_sort_projection.execute()
260
261 def time_multikey_sort_projection(self):
262 self.multikey_sort_projection.execute()
263
264 def time_low_card_grouped_rolling(self):
265 self.low_card_grouped_rolling.execute()
266
267 def time_high_card_grouped_rolling(self):
268 self.high_card_grouped_rolling.execute()
269
270 def time_low_card_grouped_rolling_udf(self):
271 self.low_card_grouped_rolling_udf.execute()
272
273 def time_high_card_grouped_rolling_udf(self):
274 self.high_card_grouped_rolling_udf.execute()
275
276 def time_low_card_window_analytics_udf(self):
277 self.low_card_window_analytics_udf.execute()
278
279 def time_high_card_grouped_rolling_udf_wm(self):
280 self.high_card_grouped_rolling_udf_wm.execute()
281
282 def time_low_card_grouped_rolling_udf_wm(self):
283 self.low_card_grouped_rolling_udf_wm.execute()
284
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/benchmarks/benchmarks.py b/benchmarks/benchmarks.py
--- a/benchmarks/benchmarks.py
+++ b/benchmarks/benchmarks.py
@@ -138,7 +138,7 @@
class PandasBackend:
def setup(self):
- n = 30 * int(2e5)
+ n = 30 * int(2e4)
self.data = pd.DataFrame(
{
'key': np.random.choice(16000, size=n),
@@ -268,10 +268,10 @@
self.high_card_grouped_rolling.execute()
def time_low_card_grouped_rolling_udf(self):
- self.low_card_grouped_rolling_udf.execute()
+ self.low_card_grouped_rolling_udf_mean.execute()
def time_high_card_grouped_rolling_udf(self):
- self.high_card_grouped_rolling_udf.execute()
+ self.high_card_grouped_rolling_udf_mean.execute()
def time_low_card_window_analytics_udf(self):
self.low_card_window_analytics_udf.execute()
| {"golden_diff": "diff --git a/benchmarks/benchmarks.py b/benchmarks/benchmarks.py\n--- a/benchmarks/benchmarks.py\n+++ b/benchmarks/benchmarks.py\n@@ -138,7 +138,7 @@\n \n class PandasBackend:\n def setup(self):\n- n = 30 * int(2e5)\n+ n = 30 * int(2e4)\n self.data = pd.DataFrame(\n {\n 'key': np.random.choice(16000, size=n),\n@@ -268,10 +268,10 @@\n self.high_card_grouped_rolling.execute()\n \n def time_low_card_grouped_rolling_udf(self):\n- self.low_card_grouped_rolling_udf.execute()\n+ self.low_card_grouped_rolling_udf_mean.execute()\n \n def time_high_card_grouped_rolling_udf(self):\n- self.high_card_grouped_rolling_udf.execute()\n+ self.high_card_grouped_rolling_udf_mean.execute()\n \n def time_low_card_window_analytics_udf(self):\n self.low_card_window_analytics_udf.execute()\n", "issue": "BENCH: cleanup errors around benchmarks\nwe are showing some errors in the benchmark suite: https://github.com/ibis-project/ibis/pull/2451/checks?check_run_id=1220781799\r\n\r\nwould be nice to have these run fully w/o errors.\n", "before_files": [{"content": "import numpy as np\nimport pandas as pd\n\nimport ibis\nimport ibis.expr.datatypes as dt\nfrom ibis.backends.pandas.udf import udf\n\n\ndef make_t(name='t'):\n return ibis.table(\n (\n ('_timestamp', 'int32'),\n ('dim1', 'int32'),\n ('dim2', 'int32'),\n ('valid_seconds', 'int32'),\n ('meas1', 'int32'),\n ('meas2', 'int32'),\n ('year', 'int32'),\n ('month', 'int32'),\n ('day', 'int32'),\n ('hour', 'int32'),\n ('minute', 'int32'),\n ),\n name=name,\n )\n\n\ndef make_base(t):\n return (\n (t.year > 2016)\n | ((t.year == 2016) & (t.month > 6))\n | ((t.year == 2016) & (t.month == 6) & (t.day > 6))\n | ((t.year == 2016) & (t.month == 6) & (t.day == 6) & (t.hour > 6))\n | (\n (t.year == 2016)\n & (t.month == 6)\n & (t.day == 6)\n & (t.hour == 6)\n & (t.minute >= 5)\n )\n ) & (\n (t.year < 2016)\n | ((t.year == 2016) & (t.month < 6))\n | ((t.year == 2016) & (t.month == 6) & (t.day < 6))\n | ((t.year == 2016) & (t.month == 6) & (t.day == 6) & (t.hour < 6))\n | (\n (t.year == 2016)\n & (t.month == 6)\n & (t.day == 6)\n & (t.hour == 6)\n & (t.minute <= 5)\n )\n )\n\n\ndef make_large_expr(t, base):\n src_table = t[base]\n src_table = src_table.mutate(\n _timestamp=(src_table['_timestamp'] - src_table['_timestamp'] % 3600)\n .cast('int32')\n .name('_timestamp'),\n valid_seconds=300,\n )\n\n aggs = []\n for meas in ['meas1', 'meas2']:\n aggs.append(src_table[meas].sum().cast('float').name(meas))\n src_table = src_table.aggregate(\n aggs, by=['_timestamp', 'dim1', 'dim2', 'valid_seconds']\n )\n\n part_keys = ['year', 'month', 'day', 'hour', 'minute']\n ts_col = src_table['_timestamp'].cast('timestamp')\n new_cols = {}\n for part_key in part_keys:\n part_col = getattr(ts_col, part_key)()\n new_cols[part_key] = part_col\n src_table = src_table.mutate(**new_cols)\n return src_table[\n [\n '_timestamp',\n 'dim1',\n 'dim2',\n 'meas1',\n 'meas2',\n 'year',\n 'month',\n 'day',\n 'hour',\n 'minute',\n ]\n ]\n\n\nclass Suite:\n def setup(self):\n self.t = t = make_t()\n self.base = make_base(t)\n self.expr = self.large_expr\n\n @property\n def large_expr(self):\n t = make_t()\n return make_large_expr(t, make_base(t))\n\n\nclass Construction(Suite):\n def time_large_expr_construction(self):\n self.large_expr\n\n\nclass Hashing(Suite):\n def time_hash_small_expr(self):\n hash(make_t())\n\n def time_hash_medium_expr(self):\n hash(make_base(make_t()))\n\n def time_hash_large_expr(self):\n hash(self.large_expr)\n\n\nclass Formatting(Suite):\n def 
time_base_expr_formatting(self):\n str(self.base)\n\n def time_large_expr_formatting(self):\n str(self.expr)\n\n\nclass Compilation(Suite):\n def time_impala_base_compile(self):\n ibis.impala.compile(self.base)\n\n def time_impala_large_expr_compile(self):\n ibis.impala.compile(self.expr)\n\n\nclass PandasBackend:\n def setup(self):\n n = 30 * int(2e5)\n self.data = pd.DataFrame(\n {\n 'key': np.random.choice(16000, size=n),\n 'low_card_key': np.random.choice(30, size=n),\n 'value': np.random.rand(n),\n 'timestamps': pd.date_range(\n start='now', periods=n, freq='s'\n ).values,\n 'timestamp_strings': pd.date_range(\n start='now', periods=n, freq='s'\n ).values.astype(str),\n 'repeated_timestamps': pd.date_range(\n start='2018-09-01', periods=30\n ).repeat(int(n / 30)),\n }\n )\n\n t = ibis.pandas.connect({'df': self.data}).table('df')\n\n self.high_card_group_by = t.groupby(t.key).aggregate(\n avg_value=t.value.mean()\n )\n\n self.cast_to_dates = t.timestamps.cast(dt.date)\n self.cast_to_dates_from_strings = t.timestamp_strings.cast(dt.date)\n\n self.multikey_group_by_with_mutate = (\n t.mutate(dates=t.timestamps.cast('date'))\n .groupby(['low_card_key', 'dates'])\n .aggregate(avg_value=lambda t: t.value.mean())\n )\n\n self.simple_sort = t.sort_by([t.key])\n\n self.simple_sort_projection = t[['key', 'value']].sort_by(['key'])\n\n self.multikey_sort = t.sort_by(['low_card_key', 'key'])\n\n self.multikey_sort_projection = t[\n ['low_card_key', 'key', 'value']\n ].sort_by(['low_card_key', 'key'])\n\n low_card_rolling_window = ibis.trailing_range_window(\n ibis.interval(days=2),\n order_by=t.repeated_timestamps,\n group_by=t.low_card_key,\n )\n self.low_card_grouped_rolling = t.value.mean().over(\n low_card_rolling_window\n )\n\n high_card_rolling_window = ibis.trailing_range_window(\n ibis.interval(days=2),\n order_by=t.repeated_timestamps,\n group_by=t.key,\n )\n self.high_card_grouped_rolling = t.value.mean().over(\n high_card_rolling_window\n )\n\n @udf.reduction(['double'], 'double')\n def my_mean(series):\n return series.mean()\n\n self.low_card_grouped_rolling_udf_mean = my_mean(t.value).over(\n low_card_rolling_window\n )\n self.high_card_grouped_rolling_udf_mean = my_mean(t.value).over(\n high_card_rolling_window\n )\n\n @udf.analytic(['double'], 'double')\n def my_zscore(series):\n return (series - series.mean()) / series.std()\n\n low_card_window = ibis.window(group_by=t.low_card_key)\n\n high_card_window = ibis.window(group_by=t.key)\n\n self.low_card_window_analytics_udf = my_zscore(t.value).over(\n low_card_window\n )\n self.high_card_window_analytics_udf = my_zscore(t.value).over(\n high_card_window\n )\n\n @udf.reduction(['double', 'double'], 'double')\n def my_wm(v, w):\n return np.average(v, weights=w)\n\n self.low_card_grouped_rolling_udf_wm = my_wm(t.value, t.value).over(\n low_card_rolling_window\n )\n\n self.high_card_grouped_rolling_udf_wm = my_wm(t.value, t.value).over(\n low_card_rolling_window\n )\n\n def time_high_cardinality_group_by(self):\n self.high_card_group_by.execute()\n\n def time_cast_to_date(self):\n self.cast_to_dates.execute()\n\n def time_cast_to_date_from_string(self):\n self.cast_to_dates_from_strings.execute()\n\n def time_multikey_group_by_with_mutate(self):\n self.multikey_group_by_with_mutate.execute()\n\n def time_simple_sort(self):\n self.simple_sort.execute()\n\n def time_multikey_sort(self):\n self.multikey_sort.execute()\n\n def time_simple_sort_projection(self):\n self.simple_sort_projection.execute()\n\n def 
time_multikey_sort_projection(self):\n self.multikey_sort_projection.execute()\n\n def time_low_card_grouped_rolling(self):\n self.low_card_grouped_rolling.execute()\n\n def time_high_card_grouped_rolling(self):\n self.high_card_grouped_rolling.execute()\n\n def time_low_card_grouped_rolling_udf(self):\n self.low_card_grouped_rolling_udf.execute()\n\n def time_high_card_grouped_rolling_udf(self):\n self.high_card_grouped_rolling_udf.execute()\n\n def time_low_card_window_analytics_udf(self):\n self.low_card_window_analytics_udf.execute()\n\n def time_high_card_grouped_rolling_udf_wm(self):\n self.high_card_grouped_rolling_udf_wm.execute()\n\n def time_low_card_grouped_rolling_udf_wm(self):\n self.low_card_grouped_rolling_udf_wm.execute()\n", "path": "benchmarks/benchmarks.py"}], "after_files": [{"content": "import numpy as np\nimport pandas as pd\n\nimport ibis\nimport ibis.expr.datatypes as dt\nfrom ibis.pandas.udf import udf\n\n\ndef make_t(name='t'):\n return ibis.table(\n (\n ('_timestamp', 'int32'),\n ('dim1', 'int32'),\n ('dim2', 'int32'),\n ('valid_seconds', 'int32'),\n ('meas1', 'int32'),\n ('meas2', 'int32'),\n ('year', 'int32'),\n ('month', 'int32'),\n ('day', 'int32'),\n ('hour', 'int32'),\n ('minute', 'int32'),\n ),\n name=name,\n )\n\n\ndef make_base(t):\n return (\n (t.year > 2016)\n | ((t.year == 2016) & (t.month > 6))\n | ((t.year == 2016) & (t.month == 6) & (t.day > 6))\n | ((t.year == 2016) & (t.month == 6) & (t.day == 6) & (t.hour > 6))\n | (\n (t.year == 2016)\n & (t.month == 6)\n & (t.day == 6)\n & (t.hour == 6)\n & (t.minute >= 5)\n )\n ) & (\n (t.year < 2016)\n | ((t.year == 2016) & (t.month < 6))\n | ((t.year == 2016) & (t.month == 6) & (t.day < 6))\n | ((t.year == 2016) & (t.month == 6) & (t.day == 6) & (t.hour < 6))\n | (\n (t.year == 2016)\n & (t.month == 6)\n & (t.day == 6)\n & (t.hour == 6)\n & (t.minute <= 5)\n )\n )\n\n\ndef make_large_expr(t, base):\n src_table = t[base]\n src_table = src_table.mutate(\n _timestamp=(src_table['_timestamp'] - src_table['_timestamp'] % 3600)\n .cast('int32')\n .name('_timestamp'),\n valid_seconds=300,\n )\n\n aggs = []\n for meas in ['meas1', 'meas2']:\n aggs.append(src_table[meas].sum().cast('float').name(meas))\n src_table = src_table.aggregate(\n aggs, by=['_timestamp', 'dim1', 'dim2', 'valid_seconds']\n )\n\n part_keys = ['year', 'month', 'day', 'hour', 'minute']\n ts_col = src_table['_timestamp'].cast('timestamp')\n new_cols = {}\n for part_key in part_keys:\n part_col = getattr(ts_col, part_key)()\n new_cols[part_key] = part_col\n src_table = src_table.mutate(**new_cols)\n return src_table[\n [\n '_timestamp',\n 'dim1',\n 'dim2',\n 'meas1',\n 'meas2',\n 'year',\n 'month',\n 'day',\n 'hour',\n 'minute',\n ]\n ]\n\n\nclass Suite:\n def setup(self):\n self.t = t = make_t()\n self.base = make_base(t)\n self.expr = self.large_expr\n\n @property\n def large_expr(self):\n t = make_t()\n return make_large_expr(t, make_base(t))\n\n\nclass Construction(Suite):\n def time_large_expr_construction(self):\n self.large_expr\n\n\nclass Hashing(Suite):\n def time_hash_small_expr(self):\n hash(make_t())\n\n def time_hash_medium_expr(self):\n hash(make_base(make_t()))\n\n def time_hash_large_expr(self):\n hash(self.large_expr)\n\n\nclass Formatting(Suite):\n def time_base_expr_formatting(self):\n str(self.base)\n\n def time_large_expr_formatting(self):\n str(self.expr)\n\n\nclass Compilation(Suite):\n def time_impala_base_compile(self):\n ibis.impala.compile(self.base)\n\n def time_impala_large_expr_compile(self):\n 
ibis.impala.compile(self.expr)\n\n\nclass PandasBackend:\n def setup(self):\n n = 30 * int(2e4)\n self.data = pd.DataFrame(\n {\n 'key': np.random.choice(16000, size=n),\n 'low_card_key': np.random.choice(30, size=n),\n 'value': np.random.rand(n),\n 'timestamps': pd.date_range(\n start='now', periods=n, freq='s'\n ).values,\n 'timestamp_strings': pd.date_range(\n start='now', periods=n, freq='s'\n ).values.astype(str),\n 'repeated_timestamps': pd.date_range(\n start='2018-09-01', periods=30\n ).repeat(int(n / 30)),\n }\n )\n\n t = ibis.pandas.connect({'df': self.data}).table('df')\n\n self.high_card_group_by = t.groupby(t.key).aggregate(\n avg_value=t.value.mean()\n )\n\n self.cast_to_dates = t.timestamps.cast(dt.date)\n self.cast_to_dates_from_strings = t.timestamp_strings.cast(dt.date)\n\n self.multikey_group_by_with_mutate = (\n t.mutate(dates=t.timestamps.cast('date'))\n .groupby(['low_card_key', 'dates'])\n .aggregate(avg_value=lambda t: t.value.mean())\n )\n\n self.simple_sort = t.sort_by([t.key])\n\n self.simple_sort_projection = t[['key', 'value']].sort_by(['key'])\n\n self.multikey_sort = t.sort_by(['low_card_key', 'key'])\n\n self.multikey_sort_projection = t[\n ['low_card_key', 'key', 'value']\n ].sort_by(['low_card_key', 'key'])\n\n low_card_rolling_window = ibis.trailing_range_window(\n ibis.interval(days=2),\n order_by=t.repeated_timestamps,\n group_by=t.low_card_key,\n )\n self.low_card_grouped_rolling = t.value.mean().over(\n low_card_rolling_window\n )\n\n high_card_rolling_window = ibis.trailing_range_window(\n ibis.interval(days=2),\n order_by=t.repeated_timestamps,\n group_by=t.key,\n )\n self.high_card_grouped_rolling = t.value.mean().over(\n high_card_rolling_window\n )\n\n @udf.reduction(['double'], 'double')\n def my_mean(series):\n return series.mean()\n\n self.low_card_grouped_rolling_udf_mean = my_mean(t.value).over(\n low_card_rolling_window\n )\n self.high_card_grouped_rolling_udf_mean = my_mean(t.value).over(\n high_card_rolling_window\n )\n\n @udf.analytic(['double'], 'double')\n def my_zscore(series):\n return (series - series.mean()) / series.std()\n\n low_card_window = ibis.window(group_by=t.low_card_key)\n\n high_card_window = ibis.window(group_by=t.key)\n\n self.low_card_window_analytics_udf = my_zscore(t.value).over(\n low_card_window\n )\n self.high_card_window_analytics_udf = my_zscore(t.value).over(\n high_card_window\n )\n\n @udf.reduction(['double', 'double'], 'double')\n def my_wm(v, w):\n return np.average(v, weights=w)\n\n self.low_card_grouped_rolling_udf_wm = my_wm(t.value, t.value).over(\n low_card_rolling_window\n )\n\n self.high_card_grouped_rolling_udf_wm = my_wm(t.value, t.value).over(\n low_card_rolling_window\n )\n\n def time_high_cardinality_group_by(self):\n self.high_card_group_by.execute()\n\n def time_cast_to_date(self):\n self.cast_to_dates.execute()\n\n def time_cast_to_date_from_string(self):\n self.cast_to_dates_from_strings.execute()\n\n def time_multikey_group_by_with_mutate(self):\n self.multikey_group_by_with_mutate.execute()\n\n def time_simple_sort(self):\n self.simple_sort.execute()\n\n def time_multikey_sort(self):\n self.multikey_sort.execute()\n\n def time_simple_sort_projection(self):\n self.simple_sort_projection.execute()\n\n def time_multikey_sort_projection(self):\n self.multikey_sort_projection.execute()\n\n def time_low_card_grouped_rolling(self):\n self.low_card_grouped_rolling.execute()\n\n def time_high_card_grouped_rolling(self):\n self.high_card_grouped_rolling.execute()\n\n def 
time_low_card_grouped_rolling_udf(self):\n self.low_card_grouped_rolling_udf_mean.execute()\n\n def time_high_card_grouped_rolling_udf(self):\n self.high_card_grouped_rolling_udf_mean.execute()\n\n def time_low_card_window_analytics_udf(self):\n self.low_card_window_analytics_udf.execute()\n\n def time_high_card_grouped_rolling_udf_wm(self):\n self.high_card_grouped_rolling_udf_wm.execute()\n\n def time_low_card_grouped_rolling_udf_wm(self):\n self.low_card_grouped_rolling_udf_wm.execute()\n", "path": "benchmarks/benchmarks.py"}]} | 3,174 | 251 |
gh_patches_debug_12201 | rasdani/github-patches | git_diff | PyGithub__PyGithub-1433 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
List pull requests based on commit
There is functionality in the GitHub API that is apparently not supported in PyGithub -- https://developer.github.com/v3/repos/commits/#list-pull-requests-associated-with-commit
## Motivation
I'm doing an auto-versioning type of thing on my repo, and I need to walk through all the pull requests searching for labels. I can get all the commits since the last release using `<repo>.get_commits(since=date_from_last_release)`. It would be incredibly useful if those commits came with their associated pull requests - that way I could simply check the labels on each of them.
I couldn't find this functionality in the documentation or by reading the code, and I couldn't find another decent way to do this with the current PyGithub implementation.
## Caveats
This feature seems to be in a preview period and the API might change, so it's probably not worth doing this right now (if ever).
--- END ISSUE ---
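For illustration, the workflow described in the motivation could look like the snippet below if `Commit` grew a `get_pulls()` method. Both the usage and the method body are hypothetical sketches: the method is modeled on the existing `get_comments()` shown further down, and the preview media type is an assumption based on the linked API docs, since the endpoint is still in preview.

```python
# Hypothetical usage: gather labels of pull requests associated with each commit
# made since the last release ("repo" and "date_from_last_release" come from the
# motivation above and are not defined here).
for commit in repo.get_commits(since=date_from_last_release):
    for pull in commit.get_pulls():
        labels = {label.name for label in pull.labels}

# Hypothetical method sketch for the Commit class, following the get_comments()
# pattern; the module would also need "import github.PullRequest" at the top.
def get_pulls(self):
    """
    :calls: `GET /repos/:owner/:repo/commits/:sha/pulls <https://developer.github.com/v3/repos/commits/>`_
    :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.PullRequest.PullRequest`
    """
    return github.PaginatedList.PaginatedList(
        github.PullRequest.PullRequest,
        self._requester,
        self.url + "/pulls",
        None,
        headers={"Accept": "application/vnd.github.groot-preview+json"},
    )
```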
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `github/Commit.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 ############################ Copyrights and license ############################
4 # #
5 # Copyright 2012 Vincent Jacques <[email protected]> #
6 # Copyright 2012 Zearin <[email protected]> #
7 # Copyright 2013 AKFish <[email protected]> #
8 # Copyright 2013 Vincent Jacques <[email protected]> #
9 # Copyright 2013 martinqt <[email protected]> #
10 # Copyright 2014 Andy Casey <[email protected]> #
11 # Copyright 2014 Vincent Jacques <[email protected]> #
12 # Copyright 2016 Jannis Gebauer <[email protected]> #
13 # Copyright 2016 John Eskew <[email protected]> #
14 # Copyright 2016 Peter Buckley <[email protected]> #
15 # Copyright 2018 sfdye <[email protected]> #
16 # #
17 # This file is part of PyGithub. #
18 # http://pygithub.readthedocs.io/ #
19 # #
20 # PyGithub is free software: you can redistribute it and/or modify it under #
21 # the terms of the GNU Lesser General Public License as published by the Free #
22 # Software Foundation, either version 3 of the License, or (at your option) #
23 # any later version. #
24 # #
25 # PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #
26 # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #
27 # FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #
28 # details. #
29 # #
30 # You should have received a copy of the GNU Lesser General Public License #
31 # along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #
32 # #
33 ################################################################################
34
35 import github.CommitCombinedStatus
36 import github.CommitComment
37 import github.CommitStats
38 import github.CommitStatus
39 import github.File
40 import github.GitCommit
41 import github.GithubObject
42 import github.NamedUser
43 import github.PaginatedList
44
45
46 class Commit(github.GithubObject.CompletableGithubObject):
47 """
48 This class represents Commits. The reference can be found here http://developer.github.com/v3/git/commits/
49 """
50
51 def __repr__(self):
52 return self.get__repr__({"sha": self._sha.value})
53
54 @property
55 def author(self):
56 """
57 :type: :class:`github.NamedUser.NamedUser`
58 """
59 self._completeIfNotSet(self._author)
60 return self._author.value
61
62 @property
63 def comments_url(self):
64 """
65 :type: string
66 """
67 self._completeIfNotSet(self._comments_url)
68 return self._comments_url.value
69
70 @property
71 def commit(self):
72 """
73 :type: :class:`github.GitCommit.GitCommit`
74 """
75 self._completeIfNotSet(self._commit)
76 return self._commit.value
77
78 @property
79 def committer(self):
80 """
81 :type: :class:`github.NamedUser.NamedUser`
82 """
83 self._completeIfNotSet(self._committer)
84 return self._committer.value
85
86 @property
87 def files(self):
88 """
89 :type: list of :class:`github.File.File`
90 """
91 self._completeIfNotSet(self._files)
92 return self._files.value
93
94 @property
95 def html_url(self):
96 """
97 :type: string
98 """
99 self._completeIfNotSet(self._html_url)
100 return self._html_url.value
101
102 @property
103 def parents(self):
104 """
105 :type: list of :class:`github.Commit.Commit`
106 """
107 self._completeIfNotSet(self._parents)
108 return self._parents.value
109
110 @property
111 def sha(self):
112 """
113 :type: string
114 """
115 self._completeIfNotSet(self._sha)
116 return self._sha.value
117
118 @property
119 def stats(self):
120 """
121 :type: :class:`github.CommitStats.CommitStats`
122 """
123 self._completeIfNotSet(self._stats)
124 return self._stats.value
125
126 @property
127 def url(self):
128 """
129 :type: string
130 """
131 self._completeIfNotSet(self._url)
132 return self._url.value
133
134 def create_comment(
135 self,
136 body,
137 line=github.GithubObject.NotSet,
138 path=github.GithubObject.NotSet,
139 position=github.GithubObject.NotSet,
140 ):
141 """
142 :calls: `POST /repos/:owner/:repo/commits/:sha/comments <http://developer.github.com/v3/repos/comments>`_
143 :param body: string
144 :param line: integer
145 :param path: string
146 :param position: integer
147 :rtype: :class:`github.CommitComment.CommitComment`
148 """
149 assert isinstance(body, str), body
150 assert line is github.GithubObject.NotSet or isinstance(line, int), line
151 assert path is github.GithubObject.NotSet or isinstance(path, str), path
152 assert position is github.GithubObject.NotSet or isinstance(
153 position, int
154 ), position
155 post_parameters = {
156 "body": body,
157 }
158 if line is not github.GithubObject.NotSet:
159 post_parameters["line"] = line
160 if path is not github.GithubObject.NotSet:
161 post_parameters["path"] = path
162 if position is not github.GithubObject.NotSet:
163 post_parameters["position"] = position
164 headers, data = self._requester.requestJsonAndCheck(
165 "POST", self.url + "/comments", input=post_parameters
166 )
167 return github.CommitComment.CommitComment(
168 self._requester, headers, data, completed=True
169 )
170
171 def create_status(
172 self,
173 state,
174 target_url=github.GithubObject.NotSet,
175 description=github.GithubObject.NotSet,
176 context=github.GithubObject.NotSet,
177 ):
178 """
179 :calls: `POST /repos/:owner/:repo/statuses/:sha <http://developer.github.com/v3/repos/statuses>`_
180 :param state: string
181 :param target_url: string
182 :param description: string
183 :param context: string
184 :rtype: :class:`github.CommitStatus.CommitStatus`
185 """
186 assert isinstance(state, str), state
187 assert target_url is github.GithubObject.NotSet or isinstance(
188 target_url, str
189 ), target_url
190 assert description is github.GithubObject.NotSet or isinstance(
191 description, str
192 ), description
193 assert context is github.GithubObject.NotSet or isinstance(
194 context, str
195 ), context
196 post_parameters = {
197 "state": state,
198 }
199 if target_url is not github.GithubObject.NotSet:
200 post_parameters["target_url"] = target_url
201 if description is not github.GithubObject.NotSet:
202 post_parameters["description"] = description
203 if context is not github.GithubObject.NotSet:
204 post_parameters["context"] = context
205 headers, data = self._requester.requestJsonAndCheck(
206 "POST",
207 self._parentUrl(self._parentUrl(self.url)) + "/statuses/" + self.sha,
208 input=post_parameters,
209 )
210 return github.CommitStatus.CommitStatus(
211 self._requester, headers, data, completed=True
212 )
213
214 def get_comments(self):
215 """
216 :calls: `GET /repos/:owner/:repo/commits/:sha/comments <http://developer.github.com/v3/repos/comments>`_
217 :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.CommitComment.CommitComment`
218 """
219 return github.PaginatedList.PaginatedList(
220 github.CommitComment.CommitComment,
221 self._requester,
222 self.url + "/comments",
223 None,
224 )
225
226 def get_statuses(self):
227 """
228 :calls: `GET /repos/:owner/:repo/statuses/:ref <http://developer.github.com/v3/repos/statuses>`_
229 :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.CommitStatus.CommitStatus`
230 """
231 return github.PaginatedList.PaginatedList(
232 github.CommitStatus.CommitStatus,
233 self._requester,
234 self._parentUrl(self._parentUrl(self.url)) + "/statuses/" + self.sha,
235 None,
236 )
237
238 def get_combined_status(self):
239 """
240 :calls: `GET /repos/:owner/:repo/commits/:ref/status/ <http://developer.github.com/v3/repos/statuses>`_
241 :rtype: :class:`github.CommitCombinedStatus.CommitCombinedStatus`
242 """
243 headers, data = self._requester.requestJsonAndCheck("GET", self.url + "/status")
244 return github.CommitCombinedStatus.CommitCombinedStatus(
245 self._requester, headers, data, completed=True
246 )
247
248 @property
249 def _identity(self):
250 return self.sha
251
252 def _initAttributes(self):
253 self._author = github.GithubObject.NotSet
254 self._comments_url = github.GithubObject.NotSet
255 self._commit = github.GithubObject.NotSet
256 self._committer = github.GithubObject.NotSet
257 self._files = github.GithubObject.NotSet
258 self._html_url = github.GithubObject.NotSet
259 self._parents = github.GithubObject.NotSet
260 self._sha = github.GithubObject.NotSet
261 self._stats = github.GithubObject.NotSet
262 self._url = github.GithubObject.NotSet
263
264 def _useAttributes(self, attributes):
265 if "author" in attributes: # pragma no branch
266 self._author = self._makeClassAttribute(
267 github.NamedUser.NamedUser, attributes["author"]
268 )
269 if "comments_url" in attributes: # pragma no branch
270 self._comments_url = self._makeStringAttribute(attributes["comments_url"])
271 if "commit" in attributes: # pragma no branch
272 self._commit = self._makeClassAttribute(
273 github.GitCommit.GitCommit, attributes["commit"]
274 )
275 if "committer" in attributes: # pragma no branch
276 self._committer = self._makeClassAttribute(
277 github.NamedUser.NamedUser, attributes["committer"]
278 )
279 if "files" in attributes: # pragma no branch
280 self._files = self._makeListOfClassesAttribute(
281 github.File.File, attributes["files"]
282 )
283 if "html_url" in attributes: # pragma no branch
284 self._html_url = self._makeStringAttribute(attributes["html_url"])
285 if "parents" in attributes: # pragma no branch
286 self._parents = self._makeListOfClassesAttribute(
287 Commit, attributes["parents"]
288 )
289 if "sha" in attributes: # pragma no branch
290 self._sha = self._makeStringAttribute(attributes["sha"])
291 if "stats" in attributes: # pragma no branch
292 self._stats = self._makeClassAttribute(
293 github.CommitStats.CommitStats, attributes["stats"]
294 )
295 if "url" in attributes: # pragma no branch
296 self._url = self._makeStringAttribute(attributes["url"])
297
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/github/Commit.py b/github/Commit.py
--- a/github/Commit.py
+++ b/github/Commit.py
@@ -245,6 +245,19 @@
self._requester, headers, data, completed=True
)
+ def get_pulls(self):
+ """
+ :calls: `GET /repos/:owner/:repo/commits/:sha/pulls <https://developer.github.com/v3/repos/commits/#list-pull-requests-associated-with-commit>`_
+ :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.PullRequest.PullRequest`
+ """
+ return github.PaginatedList.PaginatedList(
+ github.PullRequest.PullRequest,
+ self._requester,
+ self.url + "/pulls",
+ None,
+ headers={"Accept": "application/vnd.github.groot-preview+json"},
+ )
+
@property
def _identity(self):
return self.sha
| {"golden_diff": "diff --git a/github/Commit.py b/github/Commit.py\n--- a/github/Commit.py\n+++ b/github/Commit.py\n@@ -245,6 +245,19 @@\n self._requester, headers, data, completed=True\n )\n \n+ def get_pulls(self):\n+ \"\"\"\n+ :calls: `GET /repos/:owner/:repo/commits/:sha/pulls <https://developer.github.com/v3/repos/commits/#list-pull-requests-associated-with-commit>`_\n+ :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.PullRequest.PullRequest`\n+ \"\"\"\n+ return github.PaginatedList.PaginatedList(\n+ github.PullRequest.PullRequest,\n+ self._requester,\n+ self.url + \"/pulls\",\n+ None,\n+ headers={\"Accept\": \"application/vnd.github.groot-preview+json\"},\n+ )\n+\n @property\n def _identity(self):\n return self.sha\n", "issue": "List pull requests based on commit\nThere's a functionality on the Github API that is apparently not supported on PyGithub -- https://developer.github.com/v3/repos/commits/#list-pull-requests-associated-with-commit\r\n\r\n## Motivation\r\n\r\nI'm doing an auto-versioning type of thing on my repo, and I need to walk through all the pull requests searching for labels. I can get all the commits since the last release using `<repo>.get_commits(since=date_from_last_release)`. It would be incredibly useful if those commits came with their associated pull requests - that way I could simply check for each label on those.\r\n\r\nI couldn't find this functionality on the documentation or reading the code, and I couldn't find another decent way to do this on the current PyGithub implementation.\r\n\r\n## Caveats\r\n\r\nThis feature seems to be in preview period and the API might change, so it's probably not worth doing this right now (if ever).\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n############################ Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 AKFish <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2013 martinqt <[email protected]> #\n# Copyright 2014 Andy Casey <[email protected]> #\n# Copyright 2014 Vincent Jacques <[email protected]> #\n# Copyright 2016 Jannis Gebauer <[email protected]> #\n# Copyright 2016 John Eskew <[email protected]> #\n# Copyright 2016 Peter Buckley <[email protected]> #\n# Copyright 2018 sfdye <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.readthedocs.io/ #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. 
#\n# #\n################################################################################\n\nimport github.CommitCombinedStatus\nimport github.CommitComment\nimport github.CommitStats\nimport github.CommitStatus\nimport github.File\nimport github.GitCommit\nimport github.GithubObject\nimport github.NamedUser\nimport github.PaginatedList\n\n\nclass Commit(github.GithubObject.CompletableGithubObject):\n \"\"\"\n This class represents Commits. The reference can be found here http://developer.github.com/v3/git/commits/\n \"\"\"\n\n def __repr__(self):\n return self.get__repr__({\"sha\": self._sha.value})\n\n @property\n def author(self):\n \"\"\"\n :type: :class:`github.NamedUser.NamedUser`\n \"\"\"\n self._completeIfNotSet(self._author)\n return self._author.value\n\n @property\n def comments_url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._comments_url)\n return self._comments_url.value\n\n @property\n def commit(self):\n \"\"\"\n :type: :class:`github.GitCommit.GitCommit`\n \"\"\"\n self._completeIfNotSet(self._commit)\n return self._commit.value\n\n @property\n def committer(self):\n \"\"\"\n :type: :class:`github.NamedUser.NamedUser`\n \"\"\"\n self._completeIfNotSet(self._committer)\n return self._committer.value\n\n @property\n def files(self):\n \"\"\"\n :type: list of :class:`github.File.File`\n \"\"\"\n self._completeIfNotSet(self._files)\n return self._files.value\n\n @property\n def html_url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._html_url)\n return self._html_url.value\n\n @property\n def parents(self):\n \"\"\"\n :type: list of :class:`github.Commit.Commit`\n \"\"\"\n self._completeIfNotSet(self._parents)\n return self._parents.value\n\n @property\n def sha(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._sha)\n return self._sha.value\n\n @property\n def stats(self):\n \"\"\"\n :type: :class:`github.CommitStats.CommitStats`\n \"\"\"\n self._completeIfNotSet(self._stats)\n return self._stats.value\n\n @property\n def url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._url)\n return self._url.value\n\n def create_comment(\n self,\n body,\n line=github.GithubObject.NotSet,\n path=github.GithubObject.NotSet,\n position=github.GithubObject.NotSet,\n ):\n \"\"\"\n :calls: `POST /repos/:owner/:repo/commits/:sha/comments <http://developer.github.com/v3/repos/comments>`_\n :param body: string\n :param line: integer\n :param path: string\n :param position: integer\n :rtype: :class:`github.CommitComment.CommitComment`\n \"\"\"\n assert isinstance(body, str), body\n assert line is github.GithubObject.NotSet or isinstance(line, int), line\n assert path is github.GithubObject.NotSet or isinstance(path, str), path\n assert position is github.GithubObject.NotSet or isinstance(\n position, int\n ), position\n post_parameters = {\n \"body\": body,\n }\n if line is not github.GithubObject.NotSet:\n post_parameters[\"line\"] = line\n if path is not github.GithubObject.NotSet:\n post_parameters[\"path\"] = path\n if position is not github.GithubObject.NotSet:\n post_parameters[\"position\"] = position\n headers, data = self._requester.requestJsonAndCheck(\n \"POST\", self.url + \"/comments\", input=post_parameters\n )\n return github.CommitComment.CommitComment(\n self._requester, headers, data, completed=True\n )\n\n def create_status(\n self,\n state,\n target_url=github.GithubObject.NotSet,\n description=github.GithubObject.NotSet,\n context=github.GithubObject.NotSet,\n ):\n \"\"\"\n :calls: 
`POST /repos/:owner/:repo/statuses/:sha <http://developer.github.com/v3/repos/statuses>`_\n :param state: string\n :param target_url: string\n :param description: string\n :param context: string\n :rtype: :class:`github.CommitStatus.CommitStatus`\n \"\"\"\n assert isinstance(state, str), state\n assert target_url is github.GithubObject.NotSet or isinstance(\n target_url, str\n ), target_url\n assert description is github.GithubObject.NotSet or isinstance(\n description, str\n ), description\n assert context is github.GithubObject.NotSet or isinstance(\n context, str\n ), context\n post_parameters = {\n \"state\": state,\n }\n if target_url is not github.GithubObject.NotSet:\n post_parameters[\"target_url\"] = target_url\n if description is not github.GithubObject.NotSet:\n post_parameters[\"description\"] = description\n if context is not github.GithubObject.NotSet:\n post_parameters[\"context\"] = context\n headers, data = self._requester.requestJsonAndCheck(\n \"POST\",\n self._parentUrl(self._parentUrl(self.url)) + \"/statuses/\" + self.sha,\n input=post_parameters,\n )\n return github.CommitStatus.CommitStatus(\n self._requester, headers, data, completed=True\n )\n\n def get_comments(self):\n \"\"\"\n :calls: `GET /repos/:owner/:repo/commits/:sha/comments <http://developer.github.com/v3/repos/comments>`_\n :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.CommitComment.CommitComment`\n \"\"\"\n return github.PaginatedList.PaginatedList(\n github.CommitComment.CommitComment,\n self._requester,\n self.url + \"/comments\",\n None,\n )\n\n def get_statuses(self):\n \"\"\"\n :calls: `GET /repos/:owner/:repo/statuses/:ref <http://developer.github.com/v3/repos/statuses>`_\n :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.CommitStatus.CommitStatus`\n \"\"\"\n return github.PaginatedList.PaginatedList(\n github.CommitStatus.CommitStatus,\n self._requester,\n self._parentUrl(self._parentUrl(self.url)) + \"/statuses/\" + self.sha,\n None,\n )\n\n def get_combined_status(self):\n \"\"\"\n :calls: `GET /repos/:owner/:repo/commits/:ref/status/ <http://developer.github.com/v3/repos/statuses>`_\n :rtype: :class:`github.CommitCombinedStatus.CommitCombinedStatus`\n \"\"\"\n headers, data = self._requester.requestJsonAndCheck(\"GET\", self.url + \"/status\")\n return github.CommitCombinedStatus.CommitCombinedStatus(\n self._requester, headers, data, completed=True\n )\n\n @property\n def _identity(self):\n return self.sha\n\n def _initAttributes(self):\n self._author = github.GithubObject.NotSet\n self._comments_url = github.GithubObject.NotSet\n self._commit = github.GithubObject.NotSet\n self._committer = github.GithubObject.NotSet\n self._files = github.GithubObject.NotSet\n self._html_url = github.GithubObject.NotSet\n self._parents = github.GithubObject.NotSet\n self._sha = github.GithubObject.NotSet\n self._stats = github.GithubObject.NotSet\n self._url = github.GithubObject.NotSet\n\n def _useAttributes(self, attributes):\n if \"author\" in attributes: # pragma no branch\n self._author = self._makeClassAttribute(\n github.NamedUser.NamedUser, attributes[\"author\"]\n )\n if \"comments_url\" in attributes: # pragma no branch\n self._comments_url = self._makeStringAttribute(attributes[\"comments_url\"])\n if \"commit\" in attributes: # pragma no branch\n self._commit = self._makeClassAttribute(\n github.GitCommit.GitCommit, attributes[\"commit\"]\n )\n if \"committer\" in attributes: # pragma no branch\n self._committer = self._makeClassAttribute(\n 
github.NamedUser.NamedUser, attributes[\"committer\"]\n )\n if \"files\" in attributes: # pragma no branch\n self._files = self._makeListOfClassesAttribute(\n github.File.File, attributes[\"files\"]\n )\n if \"html_url\" in attributes: # pragma no branch\n self._html_url = self._makeStringAttribute(attributes[\"html_url\"])\n if \"parents\" in attributes: # pragma no branch\n self._parents = self._makeListOfClassesAttribute(\n Commit, attributes[\"parents\"]\n )\n if \"sha\" in attributes: # pragma no branch\n self._sha = self._makeStringAttribute(attributes[\"sha\"])\n if \"stats\" in attributes: # pragma no branch\n self._stats = self._makeClassAttribute(\n github.CommitStats.CommitStats, attributes[\"stats\"]\n )\n if \"url\" in attributes: # pragma no branch\n self._url = self._makeStringAttribute(attributes[\"url\"])\n", "path": "github/Commit.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n############################ Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 AKFish <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2013 martinqt <[email protected]> #\n# Copyright 2014 Andy Casey <[email protected]> #\n# Copyright 2014 Vincent Jacques <[email protected]> #\n# Copyright 2016 Jannis Gebauer <[email protected]> #\n# Copyright 2016 John Eskew <[email protected]> #\n# Copyright 2016 Peter Buckley <[email protected]> #\n# Copyright 2018 sfdye <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.readthedocs.io/ #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #\n# #\n################################################################################\n\nimport github.CommitCombinedStatus\nimport github.CommitComment\nimport github.CommitStats\nimport github.CommitStatus\nimport github.File\nimport github.GitCommit\nimport github.GithubObject\nimport github.NamedUser\nimport github.PaginatedList\n\n\nclass Commit(github.GithubObject.CompletableGithubObject):\n \"\"\"\n This class represents Commits. 
The reference can be found here http://developer.github.com/v3/git/commits/\n \"\"\"\n\n def __repr__(self):\n return self.get__repr__({\"sha\": self._sha.value})\n\n @property\n def author(self):\n \"\"\"\n :type: :class:`github.NamedUser.NamedUser`\n \"\"\"\n self._completeIfNotSet(self._author)\n return self._author.value\n\n @property\n def comments_url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._comments_url)\n return self._comments_url.value\n\n @property\n def commit(self):\n \"\"\"\n :type: :class:`github.GitCommit.GitCommit`\n \"\"\"\n self._completeIfNotSet(self._commit)\n return self._commit.value\n\n @property\n def committer(self):\n \"\"\"\n :type: :class:`github.NamedUser.NamedUser`\n \"\"\"\n self._completeIfNotSet(self._committer)\n return self._committer.value\n\n @property\n def files(self):\n \"\"\"\n :type: list of :class:`github.File.File`\n \"\"\"\n self._completeIfNotSet(self._files)\n return self._files.value\n\n @property\n def html_url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._html_url)\n return self._html_url.value\n\n @property\n def parents(self):\n \"\"\"\n :type: list of :class:`github.Commit.Commit`\n \"\"\"\n self._completeIfNotSet(self._parents)\n return self._parents.value\n\n @property\n def sha(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._sha)\n return self._sha.value\n\n @property\n def stats(self):\n \"\"\"\n :type: :class:`github.CommitStats.CommitStats`\n \"\"\"\n self._completeIfNotSet(self._stats)\n return self._stats.value\n\n @property\n def url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._url)\n return self._url.value\n\n def create_comment(\n self,\n body,\n line=github.GithubObject.NotSet,\n path=github.GithubObject.NotSet,\n position=github.GithubObject.NotSet,\n ):\n \"\"\"\n :calls: `POST /repos/:owner/:repo/commits/:sha/comments <http://developer.github.com/v3/repos/comments>`_\n :param body: string\n :param line: integer\n :param path: string\n :param position: integer\n :rtype: :class:`github.CommitComment.CommitComment`\n \"\"\"\n assert isinstance(body, str), body\n assert line is github.GithubObject.NotSet or isinstance(line, int), line\n assert path is github.GithubObject.NotSet or isinstance(path, str), path\n assert position is github.GithubObject.NotSet or isinstance(\n position, int\n ), position\n post_parameters = {\n \"body\": body,\n }\n if line is not github.GithubObject.NotSet:\n post_parameters[\"line\"] = line\n if path is not github.GithubObject.NotSet:\n post_parameters[\"path\"] = path\n if position is not github.GithubObject.NotSet:\n post_parameters[\"position\"] = position\n headers, data = self._requester.requestJsonAndCheck(\n \"POST\", self.url + \"/comments\", input=post_parameters\n )\n return github.CommitComment.CommitComment(\n self._requester, headers, data, completed=True\n )\n\n def create_status(\n self,\n state,\n target_url=github.GithubObject.NotSet,\n description=github.GithubObject.NotSet,\n context=github.GithubObject.NotSet,\n ):\n \"\"\"\n :calls: `POST /repos/:owner/:repo/statuses/:sha <http://developer.github.com/v3/repos/statuses>`_\n :param state: string\n :param target_url: string\n :param description: string\n :param context: string\n :rtype: :class:`github.CommitStatus.CommitStatus`\n \"\"\"\n assert isinstance(state, str), state\n assert target_url is github.GithubObject.NotSet or isinstance(\n target_url, str\n ), target_url\n assert description is github.GithubObject.NotSet 
or isinstance(\n description, str\n ), description\n assert context is github.GithubObject.NotSet or isinstance(\n context, str\n ), context\n post_parameters = {\n \"state\": state,\n }\n if target_url is not github.GithubObject.NotSet:\n post_parameters[\"target_url\"] = target_url\n if description is not github.GithubObject.NotSet:\n post_parameters[\"description\"] = description\n if context is not github.GithubObject.NotSet:\n post_parameters[\"context\"] = context\n headers, data = self._requester.requestJsonAndCheck(\n \"POST\",\n self._parentUrl(self._parentUrl(self.url)) + \"/statuses/\" + self.sha,\n input=post_parameters,\n )\n return github.CommitStatus.CommitStatus(\n self._requester, headers, data, completed=True\n )\n\n def get_comments(self):\n \"\"\"\n :calls: `GET /repos/:owner/:repo/commits/:sha/comments <http://developer.github.com/v3/repos/comments>`_\n :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.CommitComment.CommitComment`\n \"\"\"\n return github.PaginatedList.PaginatedList(\n github.CommitComment.CommitComment,\n self._requester,\n self.url + \"/comments\",\n None,\n )\n\n def get_statuses(self):\n \"\"\"\n :calls: `GET /repos/:owner/:repo/statuses/:ref <http://developer.github.com/v3/repos/statuses>`_\n :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.CommitStatus.CommitStatus`\n \"\"\"\n return github.PaginatedList.PaginatedList(\n github.CommitStatus.CommitStatus,\n self._requester,\n self._parentUrl(self._parentUrl(self.url)) + \"/statuses/\" + self.sha,\n None,\n )\n\n def get_combined_status(self):\n \"\"\"\n :calls: `GET /repos/:owner/:repo/commits/:ref/status/ <http://developer.github.com/v3/repos/statuses>`_\n :rtype: :class:`github.CommitCombinedStatus.CommitCombinedStatus`\n \"\"\"\n headers, data = self._requester.requestJsonAndCheck(\"GET\", self.url + \"/status\")\n return github.CommitCombinedStatus.CommitCombinedStatus(\n self._requester, headers, data, completed=True\n )\n\n def get_pulls(self):\n \"\"\"\n :calls: `GET /repos/:owner/:repo/commits/:sha/pulls <https://developer.github.com/v3/repos/commits/#list-pull-requests-associated-with-commit>`_\n :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.PullRequest.PullRequest`\n \"\"\"\n return github.PaginatedList.PaginatedList(\n github.PullRequest.PullRequest,\n self._requester,\n self.url + \"/pulls\",\n None,\n headers={\"Accept\": \"application/vnd.github.groot-preview+json\"},\n )\n\n @property\n def _identity(self):\n return self.sha\n\n def _initAttributes(self):\n self._author = github.GithubObject.NotSet\n self._comments_url = github.GithubObject.NotSet\n self._commit = github.GithubObject.NotSet\n self._committer = github.GithubObject.NotSet\n self._files = github.GithubObject.NotSet\n self._html_url = github.GithubObject.NotSet\n self._parents = github.GithubObject.NotSet\n self._sha = github.GithubObject.NotSet\n self._stats = github.GithubObject.NotSet\n self._url = github.GithubObject.NotSet\n\n def _useAttributes(self, attributes):\n if \"author\" in attributes: # pragma no branch\n self._author = self._makeClassAttribute(\n github.NamedUser.NamedUser, attributes[\"author\"]\n )\n if \"comments_url\" in attributes: # pragma no branch\n self._comments_url = self._makeStringAttribute(attributes[\"comments_url\"])\n if \"commit\" in attributes: # pragma no branch\n self._commit = self._makeClassAttribute(\n github.GitCommit.GitCommit, attributes[\"commit\"]\n )\n if \"committer\" in attributes: # pragma no branch\n 
self._committer = self._makeClassAttribute(\n github.NamedUser.NamedUser, attributes[\"committer\"]\n )\n if \"files\" in attributes: # pragma no branch\n self._files = self._makeListOfClassesAttribute(\n github.File.File, attributes[\"files\"]\n )\n if \"html_url\" in attributes: # pragma no branch\n self._html_url = self._makeStringAttribute(attributes[\"html_url\"])\n if \"parents\" in attributes: # pragma no branch\n self._parents = self._makeListOfClassesAttribute(\n Commit, attributes[\"parents\"]\n )\n if \"sha\" in attributes: # pragma no branch\n self._sha = self._makeStringAttribute(attributes[\"sha\"])\n if \"stats\" in attributes: # pragma no branch\n self._stats = self._makeClassAttribute(\n github.CommitStats.CommitStats, attributes[\"stats\"]\n )\n if \"url\" in attributes: # pragma no branch\n self._url = self._makeStringAttribute(attributes[\"url\"])\n", "path": "github/Commit.py"}]} | 3,722 | 220 |
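For context, here is a minimal usage sketch of the `get_pulls()` method added in the golden diff above, matching the auto-versioning workflow described in the issue. The access token, repository name, and cut-off date are placeholders, not values from the source.

```python
# Minimal usage sketch for the get_pulls() method added in the diff above.
# The access token, repository name, and cut-off date are placeholders.
from datetime import datetime, timedelta

from github import Github

gh = Github("<personal-access-token>")
repo = gh.get_repo("octocat/Hello-World")

since_last_release = datetime.utcnow() - timedelta(days=30)

for commit in repo.get_commits(since=since_last_release):
    # get_pulls() pages through the pull requests associated with this commit,
    # sending the "groot-preview" Accept header required by the preview API.
    for pull in commit.get_pulls():
        print(commit.sha[:7], pull.number, pull.title)
```

From here the caller could inspect each returned pull request (e.g. its labels) to drive the versioning decision described in the issue.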
gh_patches_debug_17758 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1857 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: "Ortsteil" missing for AbfallPlus in Weißenburg/Gunzenhausen
### I Have A Problem With:
A specific source
### What's Your Problem
My part of the village (the "Ortsteil") cannot be selected. 
You can check it here: https://www.landkreis-wug.de/abfall/abfuhrkalender/ , I have to select
Stadt/Gemeinde:Haundorf
Straße/Ortsteil: Obererlbach
Straße: Alle Straßen
I tried the app_abfallplus_de.py script, but I can't select my "Ortsteil", only the main community/city "Haundorf".
waste_collection_schedule:
sources:
- name: app_abfallplus_de
args:
app_id: de.k4systems.abfallappwug
city: Haundorf
strasse: alle Straßen
### Source (if relevant)
app_abfallplus_de
### Logs
_No response_
### Relevant Configuration
_No response_
### Checklist Source Error
- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [X] Checked that the website of your service provider is still working
- [X] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py`
Content:
```
1 #!/usr/bin/env python3
2 import site
3 from pathlib import Path
4 from typing import Tuple
5
6 import inquirer
7
8 package_dir = Path(__file__).resolve().parents[2]
9 site.addsitedir(str(package_dir))
10 import waste_collection_schedule.service.AppAbfallplusDe as AppAbfallplusDe # noqa: E402
11
12 YAML = {
13 "base": """
14 waste_collection_schedule:
15 sources:
16 - name: app_abfallplus_de
17 args:
18 app_id: {app_id}
19 city: {city}""",
20 "bezirk": """
21 bezirk: {bezirk}""",
22 "street": """
23 strasse: {strasse}""",
24 "hnr": """
25 hnr: {hnr}""",
26 "bundesland": """
27 bundesland: {bundesland}""",
28 "landkreis": """
29 landkreis: {landkreis}""",
30 }
31
32
33 def select_bundesland(app: AppAbfallplusDe.AppAbfallplusDe):
34 bundeslaender = app.get_bundeslaender()
35 questions = [
36 inquirer.List(
37 "bundesland",
38 choices=sorted([(s["name"], s["name"]) for s in bundeslaender]),
39 message="Select your Bundesland",
40 )
41 ]
42 bundesland = inquirer.prompt(questions)["bundesland"]
43 app.select_bundesland(bundesland)
44 return bundesland
45
46
47 def select_landkreis(app: AppAbfallplusDe.AppAbfallplusDe):
48 landkreise = app.get_landkreise()
49 questions = [
50 inquirer.List(
51 "landkreis",
52 choices=sorted(
53 [(s["name"], s["name"]) for s in landkreise] + [("BACK", "BACK")]
54 ),
55 message="Select your Landkreis",
56 )
57 ]
58 landkreis = inquirer.prompt(questions)["landkreis"]
59 if landkreis == "BACK":
60 app.clear(0)
61 select_bundesland(app)
62 return select_landkreis(app)
63 app.select_landkreis(landkreis)
64 return landkreis
65
66
67 def select_city(app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool):
68 cities = app.get_kommunen()
69 questions = [
70 inquirer.List(
71 "city",
72 choices=sorted([(s["name"], s["name"]) for s in cities])
73 + ([("BACK", "BACK")] if bund_select else []),
74 message="Select your Kommune",
75 )
76 ]
77 city = inquirer.prompt(questions)["city"]
78 if city == "BACK":
79 app.clear(1)
80 select_landkreis(app)
81 return select_city(app, bund_select)
82
83 app.select_kommune(city)
84 return city
85
86
87 def select_bezirk(
88 app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool
89 ) -> Tuple[str, bool]:
90 bezirke = app.get_bezirke()
91 questions = [
92 inquirer.List(
93 "bezirk",
94 choices=sorted([(s["name"], s["name"]) for s in bezirke])
95 + [("BACK", "BACK")],
96 message="Select your Bezirk",
97 )
98 ]
99 bezirk = inquirer.prompt(questions)["bezirk"]
100 if bezirk == "BACK":
101 app.clear(2)
102 select_city(app, bund_select)
103 return select_bezirk(app, bund_select)
104
105 return bezirk, app.select_bezirk(bezirk)
106
107
108 def select_street(app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool):
109 street = None
110 street_search = ""
111 while street is None:
112 questions = [
113 inquirer.Text(
114 "street_search",
115 message="Search your street you will be given some options to choose from",
116 default=street_search,
117 )
118 ]
119 streets = app.get_streets(inquirer.prompt(questions)["street_search"])
120 questions = [
121 inquirer.List(
122 "street",
123 choices=sorted([(s["name"], s["name"]) for s in streets])
124 + [("BACK", "BACK")],
125 message="Select your Street",
126 )
127 ]
128 street = inquirer.prompt(questions)["street"]
129 if street == "BACK":
130 street = None
131
132 if street == "BACK":
133 app.clear(2)
134 select_city(app, bund_select)
135 return select_street(app, bund_select)
136
137 app.select_street(street)
138 return street
139
140
141 def select_house_number(app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool):
142 house_numbers = app.get_hnrs()
143 questions = [
144 inquirer.List(
145 "house_number",
146 choices=[(s["name"], s["name"]) for s in house_numbers]
147 + [("BACK", "BACK")],
148 message="Select your House Number",
149 )
150 ]
151 house_number = inquirer.prompt(questions)["house_number"]
152 if house_number == "BACK":
153 app.clear(3)
154 select_street(app, bund_select)
155 return select_house_number(app, bund_select)
156 app.select_hnr(house_number)
157 return house_number
158
159
160 def main():
161 questions = [
162 inquirer.List(
163 "app-id",
164 choices=[(s, s) for s in sorted(AppAbfallplusDe.SUPPORTED_APPS)],
165 message="Select your App",
166 )
167 ]
168 app_id = inquirer.prompt(questions)["app-id"]
169
170 app = AppAbfallplusDe.AppAbfallplusDe(app_id, "", "", "")
171 bezirk_needed = "bezirk" in app.init_connection() and app.get_bezirke() != []
172 cities = app.get_kommunen()
173 bund_select = cities == []
174
175 bundesland = landkreis = None
176 if bund_select:
177 bundesland = select_bundesland(app)
178 landkreis = select_landkreis(app)
179 # cities = app.get_kommunen()
180
181 city = select_city(app, bund_select)
182 finished = False
183 house_number = ""
184 street = None
185 if bezirk_needed:
186 bezirk, finished = select_bezirk(app, bund_select)
187 if not finished:
188 street = select_street(app, bund_select)
189 if app.get_hrn_needed():
190 house_number = select_house_number(app, bund_select)
191
192 yaml = YAML["base"].format(
193 app_id=app_id,
194 city=city,
195 )
196 if bezirk_needed:
197 yaml += YAML["bezirk"].format(bezirk=bezirk)
198 if street:
199 yaml += YAML["street"].format(strasse=street)
200 if house_number:
201 yaml += YAML["hnr"].format(hnr=house_number)
202 if bundesland:
203 yaml += YAML["bundesland"].format(bundesland=bundesland)
204 if landkreis:
205 yaml += YAML["landkreis"].format(landkreis=landkreis)
206
207 print(yaml)
208
209
210 if __name__ == "__main__":
211 main()
212
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py
@@ -168,7 +168,7 @@
app_id = inquirer.prompt(questions)["app-id"]
app = AppAbfallplusDe.AppAbfallplusDe(app_id, "", "", "")
- bezirk_needed = "bezirk" in app.init_connection() and app.get_bezirke() != []
+ bezirk_needed = "bezirk" in app.init_connection()
cities = app.get_kommunen()
bund_select = cities == []
@@ -182,6 +182,7 @@
finished = False
house_number = ""
street = None
+ bezirk_needed = bezirk_needed and app.get_bezirke() != []
if bezirk_needed:
bezirk, finished = select_bezirk(app, bund_select)
if not finished:
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py\n@@ -168,7 +168,7 @@\n app_id = inquirer.prompt(questions)[\"app-id\"]\n \n app = AppAbfallplusDe.AppAbfallplusDe(app_id, \"\", \"\", \"\")\n- bezirk_needed = \"bezirk\" in app.init_connection() and app.get_bezirke() != []\n+ bezirk_needed = \"bezirk\" in app.init_connection()\n cities = app.get_kommunen()\n bund_select = cities == []\n \n@@ -182,6 +182,7 @@\n finished = False\n house_number = \"\"\n street = None\n+ bezirk_needed = bezirk_needed and app.get_bezirke() != []\n if bezirk_needed:\n bezirk, finished = select_bezirk(app, bund_select)\n if not finished:\n", "issue": "[Bug]: \"Ortsteil\" missing for AbfallPlus in Wei\u00dfenburg/Gunzenhausen\n### I Have A Problem With:\n\nA specific source\n\n### What's Your Problem\n\nFor my part of the village (aka \"Ortsteil\") it is not choosable. \r\nYou can check it here: https://www.landkreis-wug.de/abfall/abfuhrkalender/ , I have to select\r\nStadt/Gemeinde:Haundorf\r\nStra\u00dfe/Ortsteil: Obererlbach\r\nStra\u00dfe: Alle Stra\u00dfen\r\n\r\nI tried the app_abfallplus_de.py script, but I can't select my \"Ortsteil\", just the Maincommunity/-city \"Haundorf\".\r\nwaste_collection_schedule:\r\n sources:\r\n - name: app_abfallplus_de\r\n args:\r\n app_id: de.k4systems.abfallappwug\r\n city: Haundorf\r\n strasse: alle Stra\u00dfen\n\n### Source (if relevant)\n\napp_abfallplus_de\n\n### Logs\n\n_No response_\n\n### Relevant Configuration\n\n_No response_\n\n### Checklist Source Error\n\n- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\n- [X] Checked that the website of your service provider is still working\n- [X] Tested my attributes on the service provider website (if possible)\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\n\n### Checklist Sensor Error\n\n- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\n\n### Required\n\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "#!/usr/bin/env python3\nimport site\nfrom pathlib import Path\nfrom typing import Tuple\n\nimport inquirer\n\npackage_dir = Path(__file__).resolve().parents[2]\nsite.addsitedir(str(package_dir))\nimport waste_collection_schedule.service.AppAbfallplusDe as AppAbfallplusDe # noqa: E402\n\nYAML = {\n \"base\": \"\"\"\nwaste_collection_schedule:\n sources:\n - name: app_abfallplus_de\n args:\n app_id: {app_id}\n city: {city}\"\"\",\n \"bezirk\": \"\"\"\n bezirk: {bezirk}\"\"\",\n \"street\": \"\"\"\n strasse: {strasse}\"\"\",\n \"hnr\": \"\"\"\n hnr: {hnr}\"\"\",\n \"bundesland\": \"\"\"\n bundesland: {bundesland}\"\"\",\n \"landkreis\": \"\"\"\n landkreis: {landkreis}\"\"\",\n}\n\n\ndef 
select_bundesland(app: AppAbfallplusDe.AppAbfallplusDe):\n bundeslaender = app.get_bundeslaender()\n questions = [\n inquirer.List(\n \"bundesland\",\n choices=sorted([(s[\"name\"], s[\"name\"]) for s in bundeslaender]),\n message=\"Select your Bundesland\",\n )\n ]\n bundesland = inquirer.prompt(questions)[\"bundesland\"]\n app.select_bundesland(bundesland)\n return bundesland\n\n\ndef select_landkreis(app: AppAbfallplusDe.AppAbfallplusDe):\n landkreise = app.get_landkreise()\n questions = [\n inquirer.List(\n \"landkreis\",\n choices=sorted(\n [(s[\"name\"], s[\"name\"]) for s in landkreise] + [(\"BACK\", \"BACK\")]\n ),\n message=\"Select your Landkreis\",\n )\n ]\n landkreis = inquirer.prompt(questions)[\"landkreis\"]\n if landkreis == \"BACK\":\n app.clear(0)\n select_bundesland(app)\n return select_landkreis(app)\n app.select_landkreis(landkreis)\n return landkreis\n\n\ndef select_city(app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool):\n cities = app.get_kommunen()\n questions = [\n inquirer.List(\n \"city\",\n choices=sorted([(s[\"name\"], s[\"name\"]) for s in cities])\n + ([(\"BACK\", \"BACK\")] if bund_select else []),\n message=\"Select your Kommune\",\n )\n ]\n city = inquirer.prompt(questions)[\"city\"]\n if city == \"BACK\":\n app.clear(1)\n select_landkreis(app)\n return select_city(app, bund_select)\n\n app.select_kommune(city)\n return city\n\n\ndef select_bezirk(\n app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool\n) -> Tuple[str, bool]:\n bezirke = app.get_bezirke()\n questions = [\n inquirer.List(\n \"bezirk\",\n choices=sorted([(s[\"name\"], s[\"name\"]) for s in bezirke])\n + [(\"BACK\", \"BACK\")],\n message=\"Select your Bezirk\",\n )\n ]\n bezirk = inquirer.prompt(questions)[\"bezirk\"]\n if bezirk == \"BACK\":\n app.clear(2)\n select_city(app, bund_select)\n return select_bezirk(app, bund_select)\n\n return bezirk, app.select_bezirk(bezirk)\n\n\ndef select_street(app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool):\n street = None\n street_search = \"\"\n while street is None:\n questions = [\n inquirer.Text(\n \"street_search\",\n message=\"Search your street you will be given some options to choose from\",\n default=street_search,\n )\n ]\n streets = app.get_streets(inquirer.prompt(questions)[\"street_search\"])\n questions = [\n inquirer.List(\n \"street\",\n choices=sorted([(s[\"name\"], s[\"name\"]) for s in streets])\n + [(\"BACK\", \"BACK\")],\n message=\"Select your Street\",\n )\n ]\n street = inquirer.prompt(questions)[\"street\"]\n if street == \"BACK\":\n street = None\n\n if street == \"BACK\":\n app.clear(2)\n select_city(app, bund_select)\n return select_street(app, bund_select)\n\n app.select_street(street)\n return street\n\n\ndef select_house_number(app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool):\n house_numbers = app.get_hnrs()\n questions = [\n inquirer.List(\n \"house_number\",\n choices=[(s[\"name\"], s[\"name\"]) for s in house_numbers]\n + [(\"BACK\", \"BACK\")],\n message=\"Select your House Number\",\n )\n ]\n house_number = inquirer.prompt(questions)[\"house_number\"]\n if house_number == \"BACK\":\n app.clear(3)\n select_street(app, bund_select)\n return select_house_number(app, bund_select)\n app.select_hnr(house_number)\n return house_number\n\n\ndef main():\n questions = [\n inquirer.List(\n \"app-id\",\n choices=[(s, s) for s in sorted(AppAbfallplusDe.SUPPORTED_APPS)],\n message=\"Select your App\",\n )\n ]\n app_id = inquirer.prompt(questions)[\"app-id\"]\n\n app = AppAbfallplusDe.AppAbfallplusDe(app_id, 
\"\", \"\", \"\")\n bezirk_needed = \"bezirk\" in app.init_connection() and app.get_bezirke() != []\n cities = app.get_kommunen()\n bund_select = cities == []\n\n bundesland = landkreis = None\n if bund_select:\n bundesland = select_bundesland(app)\n landkreis = select_landkreis(app)\n # cities = app.get_kommunen()\n\n city = select_city(app, bund_select)\n finished = False\n house_number = \"\"\n street = None\n if bezirk_needed:\n bezirk, finished = select_bezirk(app, bund_select)\n if not finished:\n street = select_street(app, bund_select)\n if app.get_hrn_needed():\n house_number = select_house_number(app, bund_select)\n\n yaml = YAML[\"base\"].format(\n app_id=app_id,\n city=city,\n )\n if bezirk_needed:\n yaml += YAML[\"bezirk\"].format(bezirk=bezirk)\n if street:\n yaml += YAML[\"street\"].format(strasse=street)\n if house_number:\n yaml += YAML[\"hnr\"].format(hnr=house_number)\n if bundesland:\n yaml += YAML[\"bundesland\"].format(bundesland=bundesland)\n if landkreis:\n yaml += YAML[\"landkreis\"].format(landkreis=landkreis)\n\n print(yaml)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py"}], "after_files": [{"content": "#!/usr/bin/env python3\nimport site\nfrom pathlib import Path\nfrom typing import Tuple\n\nimport inquirer\n\npackage_dir = Path(__file__).resolve().parents[2]\nsite.addsitedir(str(package_dir))\nimport waste_collection_schedule.service.AppAbfallplusDe as AppAbfallplusDe # noqa: E402\n\nYAML = {\n \"base\": \"\"\"\nwaste_collection_schedule:\n sources:\n - name: app_abfallplus_de\n args:\n app_id: {app_id}\n city: {city}\"\"\",\n \"bezirk\": \"\"\"\n bezirk: {bezirk}\"\"\",\n \"street\": \"\"\"\n strasse: {strasse}\"\"\",\n \"hnr\": \"\"\"\n hnr: {hnr}\"\"\",\n \"bundesland\": \"\"\"\n bundesland: {bundesland}\"\"\",\n \"landkreis\": \"\"\"\n landkreis: {landkreis}\"\"\",\n}\n\n\ndef select_bundesland(app: AppAbfallplusDe.AppAbfallplusDe):\n bundeslaender = app.get_bundeslaender()\n questions = [\n inquirer.List(\n \"bundesland\",\n choices=sorted([(s[\"name\"], s[\"name\"]) for s in bundeslaender]),\n message=\"Select your Bundesland\",\n )\n ]\n bundesland = inquirer.prompt(questions)[\"bundesland\"]\n app.select_bundesland(bundesland)\n return bundesland\n\n\ndef select_landkreis(app: AppAbfallplusDe.AppAbfallplusDe):\n landkreise = app.get_landkreise()\n questions = [\n inquirer.List(\n \"landkreis\",\n choices=sorted(\n [(s[\"name\"], s[\"name\"]) for s in landkreise] + [(\"BACK\", \"BACK\")]\n ),\n message=\"Select your Landkreis\",\n )\n ]\n landkreis = inquirer.prompt(questions)[\"landkreis\"]\n if landkreis == \"BACK\":\n app.clear(0)\n select_bundesland(app)\n return select_landkreis(app)\n app.select_landkreis(landkreis)\n return landkreis\n\n\ndef select_city(app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool):\n cities = app.get_kommunen()\n questions = [\n inquirer.List(\n \"city\",\n choices=sorted([(s[\"name\"], s[\"name\"]) for s in cities])\n + ([(\"BACK\", \"BACK\")] if bund_select else []),\n message=\"Select your Kommune\",\n )\n ]\n city = inquirer.prompt(questions)[\"city\"]\n if city == \"BACK\":\n app.clear(1)\n select_landkreis(app)\n return select_city(app, bund_select)\n\n app.select_kommune(city)\n return city\n\n\ndef select_bezirk(\n app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool\n) -> Tuple[str, bool]:\n bezirke = app.get_bezirke()\n questions = [\n inquirer.List(\n \"bezirk\",\n choices=sorted([(s[\"name\"], 
s[\"name\"]) for s in bezirke])\n + [(\"BACK\", \"BACK\")],\n message=\"Select your Bezirk\",\n )\n ]\n bezirk = inquirer.prompt(questions)[\"bezirk\"]\n if bezirk == \"BACK\":\n app.clear(2)\n select_city(app, bund_select)\n return select_bezirk(app, bund_select)\n\n return bezirk, app.select_bezirk(bezirk)\n\n\ndef select_street(app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool):\n street = None\n street_search = \"\"\n while street is None:\n questions = [\n inquirer.Text(\n \"street_search\",\n message=\"Search your street you will be given some options to choose from\",\n default=street_search,\n )\n ]\n streets = app.get_streets(inquirer.prompt(questions)[\"street_search\"])\n questions = [\n inquirer.List(\n \"street\",\n choices=sorted([(s[\"name\"], s[\"name\"]) for s in streets])\n + [(\"BACK\", \"BACK\")],\n message=\"Select your Street\",\n )\n ]\n street = inquirer.prompt(questions)[\"street\"]\n if street == \"BACK\":\n street = None\n\n if street == \"BACK\":\n app.clear(2)\n select_city(app, bund_select)\n return select_street(app, bund_select)\n\n app.select_street(street)\n return street\n\n\ndef select_house_number(app: AppAbfallplusDe.AppAbfallplusDe, bund_select: bool):\n house_numbers = app.get_hnrs()\n questions = [\n inquirer.List(\n \"house_number\",\n choices=[(s[\"name\"], s[\"name\"]) for s in house_numbers]\n + [(\"BACK\", \"BACK\")],\n message=\"Select your House Number\",\n )\n ]\n house_number = inquirer.prompt(questions)[\"house_number\"]\n if house_number == \"BACK\":\n app.clear(3)\n select_street(app, bund_select)\n return select_house_number(app, bund_select)\n app.select_hnr(house_number)\n return house_number\n\n\ndef main():\n questions = [\n inquirer.List(\n \"app-id\",\n choices=[(s, s) for s in sorted(AppAbfallplusDe.SUPPORTED_APPS)],\n message=\"Select your App\",\n )\n ]\n app_id = inquirer.prompt(questions)[\"app-id\"]\n\n app = AppAbfallplusDe.AppAbfallplusDe(app_id, \"\", \"\", \"\")\n bezirk_needed = \"bezirk\" in app.init_connection()\n cities = app.get_kommunen()\n bund_select = cities == []\n\n bundesland = landkreis = None\n if bund_select:\n bundesland = select_bundesland(app)\n landkreis = select_landkreis(app)\n # cities = app.get_kommunen()\n\n city = select_city(app, bund_select)\n finished = False\n house_number = \"\"\n street = None\n bezirk_needed = bezirk_needed and app.get_bezirke() != []\n if bezirk_needed:\n bezirk, finished = select_bezirk(app, bund_select)\n if not finished:\n street = select_street(app, bund_select)\n if app.get_hrn_needed():\n house_number = select_house_number(app, bund_select)\n\n yaml = YAML[\"base\"].format(\n app_id=app_id,\n city=city,\n )\n if bezirk_needed:\n yaml += YAML[\"bezirk\"].format(bezirk=bezirk)\n if street:\n yaml += YAML[\"street\"].format(strasse=street)\n if house_number:\n yaml += YAML[\"hnr\"].format(hnr=house_number)\n if bundesland:\n yaml += YAML[\"bundesland\"].format(bundesland=bundesland)\n if landkreis:\n yaml += YAML[\"landkreis\"].format(landkreis=landkreis)\n\n print(yaml)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/wizard/app_abfallplus_de.py"}]} | 2,786 | 263 |
gh_patches_debug_38367 | rasdani/github-patches | git_diff | napari__napari-5997 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Setting active layer from a plugin on PublicOnlyProxy works partially, but not in GUI
## 🐛 Bug
We’re writing a [classifier plugin](https://github.com/fractal-napari-plugins-collection/napari-feature-classifier/tree/classifier_refactor) that allows user to annotate a label layer and train classifiers based on those annotations. The way we show user annotations is by adding a label layer and coloring the objects based on user annotation.
When we add an annotation layer from our plugin, we don’t want to make it the selected layer (because we use layer selection to decide which label layer is currently being annotated).
Changing the active layer by setting `viewer.layers.selection.active` to the layer I want selected does not seem to work from a plugin (for layers that were not created by the plugin). I suspect it has something to do with the plugin not getting the actual viewer object, but a napari.utils._proxies.PublicOnlyProxy, so the active state is only set on that proxy.
It’s confusing though, because checking the active layer (via viewer.layers.selection.active) returns what I’d expect, but it’s not shown in the GUI.
## To Reproduce
Here's some sample code to reproduce this behavior:
```python
from pathlib import Path
import imageio
import napari
import napari.layers
import napari.viewer
import numpy as np
import pandas as pd
from magicgui.widgets import Container, Label
from napari.utils.notifications import show_info
def main():
lbls = imageio.v2.imread(Path("sample_data/test_labels.tif"))
lbls2 = np.zeros_like(lbls)
lbls2[:, 3:, 2:] = lbls[:, :-3, :-2]
lbls2 = lbls2 * 20
labels = np.unique(lbls)[1:]
labels_2 = np.unique(lbls2)[1:]
viewer = napari.Viewer()
lbls_layer = viewer.add_labels(lbls)
lbls_layer2 = viewer.add_labels(lbls2)
# Add the widget directly via code:
label_selector_widget = LabelSelector(viewer) # Comment out to reproduce issue
viewer.window.add_dock_widget(label_selector_widget) # Comment out to reproduce issue
viewer.show(block=True)
class LabelSelector(Container):
def __init__(
self,
viewer: napari.viewer.Viewer,
):
self._viewer = viewer
self.label = Label(label='Test')
super().__init__(
widgets=[
self.label
]
)
self._last_selected_label_layer = self._viewer.layers[1]
annotation_layer = self._viewer.add_labels(
self._last_selected_label_layer.data,
scale=self._last_selected_label_layer.scale,
name="Annotations",
)
self._viewer.layers.selection.active = self._viewer.layers[0]
print(f'Selected Layer at the end: {self._viewer.layers.selection.active}')
print(f"Type of annotation layer: {type(annotation_layer)}")
print(f"Type of first label layer: {type(self._viewer.layers[0])}")
if __name__ == "__main__":
main()
```
If I run it as above (i.e. adding the dockwidget from Python), I get the expected behavior and the correct layer (the first one) is selected after the new Annotations layer was added:
<img width="1198" alt="Screenshot 2023-04-25 at 09 16 44" src="https://user-images.githubusercontent.com/18033446/234206865-8bd2fe29-a7c9-4a0b-aa73-659c51acdcbe.png">
The printing output is:
```
Selected Layer at the end: lbls
Type of annotation layer: <class 'napari.layers.labels.labels.Labels'>
Type of first label layer: <class 'napari.layers.labels.labels.Labels'>
```
If the two lines that add the widget manually are commented out:
```python
# label_selector_widget = LabelSelector(viewer)
# viewer.window.add_dock_widget(label_selector_widget)
```
and instead the dockwidget is added as a plugin, which is started from the GUI, we get this behavior:

According to viewer.layers.selection.active, the first layer was selected. But the GUI does not show any layer selected.
The GUI still reacts to changes in the layer controls (e.g. changing opacity) and applies them to the layer that is selected behind the scenes. The user just isn't shown which layer they apply to.
This is the print output:
```
Selected Layer at the end: lbls
Type of annotation layer: <class 'napari.layers.labels.labels.Labels'>
Type of first label layer: <class 'napari.utils._proxies.PublicOnlyProxy'>
```
## Expected behavior
I would expect the plugin flow to behave the same as when I manually add a widget: The GUI shows the selected layer.
Especially given that some parts of the GUI react to the layer selection (e.g. the layer controls), the actively selected layer should be shown.
## Environment
- Please copy and paste the information at napari info option in help menubar here:
```
napari: 0.5.0a2.dev42+g9e911040
Platform: macOS-13.2.1-arm64-arm-64bit
System: MacOS 13.2.1
Python: 3.10.9 | packaged by conda-forge | (main, Feb 2 2023, 20:26:08) [Clang 14.0.6 ]
Qt: 5.15.6
PyQt5: 5.15.7
NumPy: 1.24.2
SciPy: 1.10.1
Dask: 2023.3.0
VisPy: 0.12.1
magicgui: 0.7.2
superqt: unknown
in-n-out: 0.1.7
app-model: 0.1.2
npe2: 0.6.2
OpenGL:
- GL version: 2.1 Metal - 83
- MAX_TEXTURE_SIZE: 16384
Screens:
- screen 1: resolution 2560x1440, scale 2.0
- screen 2: resolution 1512x982, scale 2.0
Settings path:
- /Users/joel/Library/Application Support/napari/classifier-dev-napari-main_2751af53b3d3e49c82e2e47937e51f1f537130c2/settings.yaml
```
- Any other relevant information:
I also tested it in napari 0.4.17 and in more recent napari nightly builds (0.5.0a2.dev71+g66df74d5) and always get the same behavior
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `napari/utils/_proxies.py`
Content:
```
1 import os
2 import re
3 import sys
4 import warnings
5 from typing import Any, Callable, Generic, TypeVar, Union
6
7 import wrapt
8
9 from napari.utils import misc
10 from napari.utils.translations import trans
11
12 _T = TypeVar("_T")
13
14
15 class ReadOnlyWrapper(wrapt.ObjectProxy):
16 """
17 Disable item and attribute setting with the exception of ``__wrapped__``.
18 """
19
20 def __init__(self, wrapped, exceptions=()):
21 super().__init__(wrapped)
22 self._self_exceptions = exceptions
23
24 def __setattr__(self, name, val):
25 if (
26 name not in ('__wrapped__', '_self_exceptions')
27 and name not in self._self_exceptions
28 ):
29 raise TypeError(
30 trans._(
31 'cannot set attribute {name}',
32 deferred=True,
33 name=name,
34 )
35 )
36
37 super().__setattr__(name, val)
38
39 def __setitem__(self, name, val):
40 if name not in self._self_exceptions:
41 raise TypeError(
42 trans._('cannot set item {name}', deferred=True, name=name)
43 )
44 super().__setitem__(name, val)
45
46
47 _SUNDER = re.compile('^_[^_]')
48
49
50 class PublicOnlyProxy(wrapt.ObjectProxy, Generic[_T]):
51 """Proxy to prevent private attribute and item access, recursively."""
52
53 __wrapped__: _T
54
55 @staticmethod
56 def _is_private_attr(name: str) -> bool:
57 return name.startswith("_") and not (
58 name.startswith('__') and name.endswith('__')
59 )
60
61 @staticmethod
62 def _private_attr_warning(name: str, typ: str):
63 warnings.warn(
64 trans._(
65 "Private attribute access ('{typ}.{name}') in this context (e.g. inside a plugin widget or dock widget) is deprecated and will be unavailable in version 0.5.0",
66 deferred=True,
67 name=name,
68 typ=typ,
69 ),
70 category=FutureWarning,
71 stacklevel=3,
72 )
73
74 # This is code prepared for a moment where we want to block access to private attributes
75 # raise AttributeError(
76 # trans._(
77 # "Private attribute set/access ('{typ}.{name}') not allowed in this context.",
78 # deferred=True,
79 # name=name,
80 # typ=typ,
81 # )
82 # )
83
84 @staticmethod
85 def _is_called_from_napari():
86 """
87 Check if the getter or setter is called from inner napari.
88 """
89 if hasattr(sys, "_getframe"):
90 frame = sys._getframe(2)
91 return frame.f_code.co_filename.startswith(misc.ROOT_DIR)
92 return False
93
94 def __getattr__(self, name: str):
95 if self._is_private_attr(name):
96 # allow napari to access private attributes and get an non-proxy
97 if self._is_called_from_napari():
98 return super().__getattr__(name)
99
100 typ = type(self.__wrapped__).__name__
101
102 self._private_attr_warning(name, typ)
103
104 return self.create(super().__getattr__(name))
105
106 def __setattr__(self, name: str, value: Any):
107 if (
108 os.environ.get("NAPARI_ENSURE_PLUGIN_MAIN_THREAD", "0")
109 not in ("0", "False")
110 ) and not in_main_thread():
111 raise RuntimeError(
112 "Setting attributes on a napari object is only allowed from the main Qt thread."
113 )
114
115 if self._is_private_attr(name):
116 if self._is_called_from_napari():
117 return super().__setattr__(name, value)
118
119 typ = type(self.__wrapped__).__name__
120 self._private_attr_warning(name, typ)
121
122 setattr(self.__wrapped__, name, value)
123 return None
124
125 def __getitem__(self, key):
126 return self.create(super().__getitem__(key))
127
128 def __repr__(self):
129 return repr(self.__wrapped__)
130
131 def __dir__(self):
132 return [x for x in dir(self.__wrapped__) if not _SUNDER.match(x)]
133
134 @classmethod
135 def create(cls, obj: Any) -> Union['PublicOnlyProxy', Any]:
136 # restrict the scope of this proxy to napari objects
137 mod = getattr(type(obj), '__module__', None) or ''
138 if not mod.startswith('napari'):
139 return obj
140 if isinstance(obj, PublicOnlyProxy):
141 return obj # don't double-wrap
142 if callable(obj):
143 return CallablePublicOnlyProxy(obj)
144 return PublicOnlyProxy(obj)
145
146
147 class CallablePublicOnlyProxy(PublicOnlyProxy[Callable]):
148 def __call__(self, *args, **kwargs):
149 return self.__wrapped__(*args, **kwargs)
150
151
152 def in_main_thread_py() -> bool:
153 """
154 Check if caller is in main python thread.
155
156 Returns
157 -------
158 thread_flag : bool
159 True if we are in the main thread, False otherwise.
160 """
161 import threading
162
163 return threading.current_thread() == threading.main_thread()
164
165
166 def _in_main_thread() -> bool:
167 """
168 General implementation of checking if we are in a proper thread.
169 If Qt is available and Application is created then assign :py:func:`in_qt_main_thread` to `in_main_thread`.
170 If Qt liba are not available then assign :py:func:`in_main_thread_py` to in_main_thread.
171 IF Qt libs are available but there is no Application ti wil emmit warning and return result of in_main_thread_py.
172
173 Returns
174 -------
175 thread_flag : bool
176 True if we are in the main thread, False otherwise.
177 """
178
179 global in_main_thread
180 try:
181 from napari._qt.utils import in_qt_main_thread
182
183 res = in_qt_main_thread()
184 in_main_thread = in_qt_main_thread
185 except ImportError:
186 in_main_thread = in_main_thread_py
187 return in_main_thread_py()
188 except AttributeError:
189 warnings.warn(
190 "Qt libs are available but no QtApplication instance is created"
191 )
192 return in_main_thread_py()
193 return res
194
195
196 in_main_thread = _in_main_thread
197
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/napari/utils/_proxies.py b/napari/utils/_proxies.py
--- a/napari/utils/_proxies.py
+++ b/napari/utils/_proxies.py
@@ -119,6 +119,24 @@
typ = type(self.__wrapped__).__name__
self._private_attr_warning(name, typ)
+ if isinstance(value, PublicOnlyProxy):
+ # if we want to set an attribute on a PublicOnlyProxy *and* the
+ # value that we want to set is itself a PublicOnlyProxy, we unwrap
+ # the value. This has two benefits:
+ #
+ # 1. Checking the attribute later will incur a significant
+ # performance cost, because _is_called_from_napari() will be
+ # checked on each attribute access and it involves inspecting the
+ # calling frame, which is expensive.
+ # 2. Certain equality checks fail when objects are
+ # PublicOnlyProxies. Notably, equality checks fail when such
+ # objects are included in a Qt data model. For example, plugins can
+ # grab a layer from the viewer; this layer will be wrapped by the
+ # PublicOnlyProxy, and then using this object to set the current
+ # layer selection will not propagate the selection to the Viewer.
+ # See https://github.com/napari/napari/issues/5767
+ value = value.__wrapped__
+
setattr(self.__wrapped__, name, value)
return None
@@ -134,7 +152,15 @@
@classmethod
def create(cls, obj: Any) -> Union['PublicOnlyProxy', Any]:
# restrict the scope of this proxy to napari objects
- mod = getattr(type(obj), '__module__', None) or ''
+ if type(obj).__name__ == 'method':
+ # If the given object is a method, we check the module *of the
+ # object to which that method is bound*. Otherwise, the module of a
+ # method is just builtins!
+ mod = getattr(type(obj.__self__), '__module__', None) or ''
+ else:
+ # Otherwise, the module is of an object just given by the
+ # __module__ attribute.
+ mod = getattr(type(obj), '__module__', None) or ''
if not mod.startswith('napari'):
return obj
if isinstance(obj, PublicOnlyProxy):
@@ -146,7 +172,20 @@
class CallablePublicOnlyProxy(PublicOnlyProxy[Callable]):
def __call__(self, *args, **kwargs):
- return self.__wrapped__(*args, **kwargs)
+ # if a PublicOnlyProxy is callable, then when we call it we:
+ # - unwrap the arguments, to avoid performance issues detailed in
+ # PublicOnlyProxy.__setattr__,
+ # - call the unwrapped callable on the unwrapped arguments
+ # - wrap the result in a PublicOnlyProxy
+ args = [
+ arg.__wrapped__ if isinstance(arg, PublicOnlyProxy) else arg
+ for arg in args
+ ]
+ kwargs = {
+ k: v.__wrapped__ if isinstance(v, PublicOnlyProxy) else v
+ for k, v in kwargs.items()
+ }
+ return self.create(self.__wrapped__(*args, **kwargs))
def in_main_thread_py() -> bool:
| {"golden_diff": "diff --git a/napari/utils/_proxies.py b/napari/utils/_proxies.py\n--- a/napari/utils/_proxies.py\n+++ b/napari/utils/_proxies.py\n@@ -119,6 +119,24 @@\n typ = type(self.__wrapped__).__name__\n self._private_attr_warning(name, typ)\n \n+ if isinstance(value, PublicOnlyProxy):\n+ # if we want to set an attribute on a PublicOnlyProxy *and* the\n+ # value that we want to set is itself a PublicOnlyProxy, we unwrap\n+ # the value. This has two benefits:\n+ #\n+ # 1. Checking the attribute later will incur a significant\n+ # performance cost, because _is_called_from_napari() will be\n+ # checked on each attribute access and it involves inspecting the\n+ # calling frame, which is expensive.\n+ # 2. Certain equality checks fail when objects are\n+ # PublicOnlyProxies. Notably, equality checks fail when such\n+ # objects are included in a Qt data model. For example, plugins can\n+ # grab a layer from the viewer; this layer will be wrapped by the\n+ # PublicOnlyProxy, and then using this object to set the current\n+ # layer selection will not propagate the selection to the Viewer.\n+ # See https://github.com/napari/napari/issues/5767\n+ value = value.__wrapped__\n+\n setattr(self.__wrapped__, name, value)\n return None\n \n@@ -134,7 +152,15 @@\n @classmethod\n def create(cls, obj: Any) -> Union['PublicOnlyProxy', Any]:\n # restrict the scope of this proxy to napari objects\n- mod = getattr(type(obj), '__module__', None) or ''\n+ if type(obj).__name__ == 'method':\n+ # If the given object is a method, we check the module *of the\n+ # object to which that method is bound*. Otherwise, the module of a\n+ # method is just builtins!\n+ mod = getattr(type(obj.__self__), '__module__', None) or ''\n+ else:\n+ # Otherwise, the module is of an object just given by the\n+ # __module__ attribute.\n+ mod = getattr(type(obj), '__module__', None) or ''\n if not mod.startswith('napari'):\n return obj\n if isinstance(obj, PublicOnlyProxy):\n@@ -146,7 +172,20 @@\n \n class CallablePublicOnlyProxy(PublicOnlyProxy[Callable]):\n def __call__(self, *args, **kwargs):\n- return self.__wrapped__(*args, **kwargs)\n+ # if a PublicOnlyProxy is callable, then when we call it we:\n+ # - unwrap the arguments, to avoid performance issues detailed in\n+ # PublicOnlyProxy.__setattr__,\n+ # - call the unwrapped callable on the unwrapped arguments\n+ # - wrap the result in a PublicOnlyProxy\n+ args = [\n+ arg.__wrapped__ if isinstance(arg, PublicOnlyProxy) else arg\n+ for arg in args\n+ ]\n+ kwargs = {\n+ k: v.__wrapped__ if isinstance(v, PublicOnlyProxy) else v\n+ for k, v in kwargs.items()\n+ }\n+ return self.create(self.__wrapped__(*args, **kwargs))\n \n \n def in_main_thread_py() -> bool:\n", "issue": "Setting active layer from a plugin on PublicOnlyProxy works partially, but not in GUI\n## \ud83d\udc1b Bug\r\n\r\nWe\u2019re writing a [classifier plugin](https://github.com/fractal-napari-plugins-collection/napari-feature-classifier/tree/classifier_refactor) that allows user to annotate a label layer and train classifiers based on those annotations. 
The way we show user annotations is by adding a label layer and coloring the objects based on user annotation.\r\n\r\nWhen we add an annotation layer from our plugin, we don\u2019t want to make it the selected layer (because we use layer selection to decide which label layer is currently being annotated).\r\n\r\nChanging the active layer by setting `viewer.layers.selection.active` to the layer I want selected does not seem to work from a plugin (for layers that where not created by the plugin). I suspect it has something to do with the plugin not getting the actual viewer object, but a napari.utils._proxies.PublicOnlyProxy \u21d2 only setting active state on that proxy.\r\n\r\nIt\u2019s confusing though, because checking the active layer (via viewer.layers.selection.active) returns what I\u2019d expect, but it\u2019s not shown in the GUI.\r\n\r\n## To Reproduce\r\n\r\nHere's some sample code to reproduce this behavior:\r\n```python\r\nfrom pathlib import Path\r\n\r\nimport imageio\r\nimport napari\r\nimport napari.layers\r\nimport napari.viewer\r\nimport numpy as np\r\nimport pandas as pd\r\nfrom magicgui.widgets import Container, Label\r\nfrom napari.utils.notifications import show_info\r\n\r\ndef main():\r\n lbls = imageio.v2.imread(Path(\"sample_data/test_labels.tif\"))\r\n lbls2 = np.zeros_like(lbls)\r\n lbls2[:, 3:, 2:] = lbls[:, :-3, :-2]\r\n lbls2 = lbls2 * 20\r\n\r\n labels = np.unique(lbls)[1:]\r\n labels_2 = np.unique(lbls2)[1:]\r\n\r\n viewer = napari.Viewer()\r\n lbls_layer = viewer.add_labels(lbls)\r\n lbls_layer2 = viewer.add_labels(lbls2)\r\n\r\n # Add the widget directly via code:\r\n label_selector_widget = LabelSelector(viewer) # Comment out to reproduce issue\r\n viewer.window.add_dock_widget(label_selector_widget) # Comment out to reproduce issue\r\n\r\n viewer.show(block=True)\r\n\r\n\r\nclass LabelSelector(Container):\r\n def __init__(\r\n self,\r\n viewer: napari.viewer.Viewer,\r\n ):\r\n self._viewer = viewer\r\n self.label = Label(label='Test')\r\n super().__init__(\r\n widgets=[\r\n self.label\r\n ]\r\n )\r\n self._last_selected_label_layer = self._viewer.layers[1]\r\n annotation_layer = self._viewer.add_labels(\r\n self._last_selected_label_layer.data,\r\n scale=self._last_selected_label_layer.scale,\r\n name=\"Annotations\",\r\n )\r\n self._viewer.layers.selection.active = self._viewer.layers[0]\r\n print(f'Selected Layer at the end: {self._viewer.layers.selection.active}')\r\n print(f\"Type of annotation layer: {type(annotation_layer)}\")\r\n print(f\"Type of first label layer: {type(self._viewer.layers[0])}\")\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nIf I run it as above (i.e. 
adding the dockwidget from Python), I get the expected behavior and the correct layer (the first one) is selected after the new Annotations layer was added:\r\n<img width=\"1198\" alt=\"Screenshot 2023-04-25 at 09 16 44\" src=\"https://user-images.githubusercontent.com/18033446/234206865-8bd2fe29-a7c9-4a0b-aa73-659c51acdcbe.png\">\r\n\r\nThe printing output is:\r\n```\r\nSelected Layer at the end: lbls\r\nType of annotation layer: <class 'napari.layers.labels.labels.Labels'>\r\nType of first label layer: <class 'napari.layers.labels.labels.Labels'>\r\n```\r\n\r\nIf the two lines that are adding the widget manually are commented out:\r\n```python\r\n # label_selector_widget = LabelSelector(viewer)\r\n # viewer.window.add_dock_widget(label_selector_widget)\r\n```\r\nand instead the dockwidget is added as a plugin, which is started from the GUI, we get this behavior:\r\n\r\nAccording to viewer.layers.selection.active, the first layer was selected. But the GUI does not show any layer selected.\r\nThe GUI still reacts to changes in the layer controls (i.e. changing opacity) and applies it to the layer that is selected behind the scenes. The user just isn't shown that it applies to that layer.\r\n\r\nThis is the print output:\r\n```\r\nSelected Layer at the end: lbls\r\nType of annotation layer: <class 'napari.layers.labels.labels.Labels'>\r\nType of first label layer: <class 'napari.utils._proxies.PublicOnlyProxy'>\r\n```\r\n\r\n## Expected behavior\r\n\r\nI would expect the plugin flow to behave the same as when I manually add a widget: The GUI shows the selected layer.\r\nEspecially given that some parts of the GUI react to the layer selection (e.g. the layer controls), the actively selected layer should be shown.\r\n\r\n## Environment\r\n\r\n - Please copy and paste the information at napari info option in help menubar here:\r\n```\r\nnapari: 0.5.0a2.dev42+g9e911040\r\nPlatform: macOS-13.2.1-arm64-arm-64bit\r\nSystem: MacOS 13.2.1\r\nPython: 3.10.9 | packaged by conda-forge | (main, Feb 2 2023, 20:26:08) [Clang 14.0.6 ]\r\nQt: 5.15.6\r\nPyQt5: 5.15.7\r\nNumPy: 1.24.2\r\nSciPy: 1.10.1\r\nDask: 2023.3.0\r\nVisPy: 0.12.1\r\nmagicgui: 0.7.2\r\nsuperqt: unknown\r\nin-n-out: 0.1.7\r\napp-model: 0.1.2\r\nnpe2: 0.6.2\r\n\r\nOpenGL:\r\n- GL version: 2.1 Metal - 83\r\n- MAX_TEXTURE_SIZE: 16384\r\n\r\nScreens:\r\n- screen 1: resolution 2560x1440, scale 2.0\r\n- screen 2: resolution 1512x982, scale 2.0\r\n\r\nSettings path:\r\n- /Users/joel/Library/Application Support/napari/classifier-dev-napari-main_2751af53b3d3e49c82e2e47937e51f1f537130c2/settings.yaml\r\n```\r\n\r\n - Any other relevant information:\r\n I also tested it in napari 0.4.17 and in more recent napari nightly builds (0.5.0a2.dev71+g66df74d5) and always get the same behavior\r\n\r\n\n", "before_files": [{"content": "import os\nimport re\nimport sys\nimport warnings\nfrom typing import Any, Callable, Generic, TypeVar, Union\n\nimport wrapt\n\nfrom napari.utils import misc\nfrom napari.utils.translations import trans\n\n_T = TypeVar(\"_T\")\n\n\nclass ReadOnlyWrapper(wrapt.ObjectProxy):\n \"\"\"\n Disable item and attribute setting with the exception of ``__wrapped__``.\n \"\"\"\n\n def __init__(self, wrapped, exceptions=()):\n super().__init__(wrapped)\n self._self_exceptions = exceptions\n\n def __setattr__(self, name, val):\n if (\n name not in ('__wrapped__', '_self_exceptions')\n and name not in self._self_exceptions\n ):\n raise TypeError(\n trans._(\n 'cannot set attribute {name}',\n deferred=True,\n name=name,\n )\n )\n\n 
super().__setattr__(name, val)\n\n def __setitem__(self, name, val):\n if name not in self._self_exceptions:\n raise TypeError(\n trans._('cannot set item {name}', deferred=True, name=name)\n )\n super().__setitem__(name, val)\n\n\n_SUNDER = re.compile('^_[^_]')\n\n\nclass PublicOnlyProxy(wrapt.ObjectProxy, Generic[_T]):\n \"\"\"Proxy to prevent private attribute and item access, recursively.\"\"\"\n\n __wrapped__: _T\n\n @staticmethod\n def _is_private_attr(name: str) -> bool:\n return name.startswith(\"_\") and not (\n name.startswith('__') and name.endswith('__')\n )\n\n @staticmethod\n def _private_attr_warning(name: str, typ: str):\n warnings.warn(\n trans._(\n \"Private attribute access ('{typ}.{name}') in this context (e.g. inside a plugin widget or dock widget) is deprecated and will be unavailable in version 0.5.0\",\n deferred=True,\n name=name,\n typ=typ,\n ),\n category=FutureWarning,\n stacklevel=3,\n )\n\n # This is code prepared for a moment where we want to block access to private attributes\n # raise AttributeError(\n # trans._(\n # \"Private attribute set/access ('{typ}.{name}') not allowed in this context.\",\n # deferred=True,\n # name=name,\n # typ=typ,\n # )\n # )\n\n @staticmethod\n def _is_called_from_napari():\n \"\"\"\n Check if the getter or setter is called from inner napari.\n \"\"\"\n if hasattr(sys, \"_getframe\"):\n frame = sys._getframe(2)\n return frame.f_code.co_filename.startswith(misc.ROOT_DIR)\n return False\n\n def __getattr__(self, name: str):\n if self._is_private_attr(name):\n # allow napari to access private attributes and get an non-proxy\n if self._is_called_from_napari():\n return super().__getattr__(name)\n\n typ = type(self.__wrapped__).__name__\n\n self._private_attr_warning(name, typ)\n\n return self.create(super().__getattr__(name))\n\n def __setattr__(self, name: str, value: Any):\n if (\n os.environ.get(\"NAPARI_ENSURE_PLUGIN_MAIN_THREAD\", \"0\")\n not in (\"0\", \"False\")\n ) and not in_main_thread():\n raise RuntimeError(\n \"Setting attributes on a napari object is only allowed from the main Qt thread.\"\n )\n\n if self._is_private_attr(name):\n if self._is_called_from_napari():\n return super().__setattr__(name, value)\n\n typ = type(self.__wrapped__).__name__\n self._private_attr_warning(name, typ)\n\n setattr(self.__wrapped__, name, value)\n return None\n\n def __getitem__(self, key):\n return self.create(super().__getitem__(key))\n\n def __repr__(self):\n return repr(self.__wrapped__)\n\n def __dir__(self):\n return [x for x in dir(self.__wrapped__) if not _SUNDER.match(x)]\n\n @classmethod\n def create(cls, obj: Any) -> Union['PublicOnlyProxy', Any]:\n # restrict the scope of this proxy to napari objects\n mod = getattr(type(obj), '__module__', None) or ''\n if not mod.startswith('napari'):\n return obj\n if isinstance(obj, PublicOnlyProxy):\n return obj # don't double-wrap\n if callable(obj):\n return CallablePublicOnlyProxy(obj)\n return PublicOnlyProxy(obj)\n\n\nclass CallablePublicOnlyProxy(PublicOnlyProxy[Callable]):\n def __call__(self, *args, **kwargs):\n return self.__wrapped__(*args, **kwargs)\n\n\ndef in_main_thread_py() -> bool:\n \"\"\"\n Check if caller is in main python thread.\n\n Returns\n -------\n thread_flag : bool\n True if we are in the main thread, False otherwise.\n \"\"\"\n import threading\n\n return threading.current_thread() == threading.main_thread()\n\n\ndef _in_main_thread() -> bool:\n \"\"\"\n General implementation of checking if we are in a proper thread.\n If Qt is available and Application is 
created then assign :py:func:`in_qt_main_thread` to `in_main_thread`.\n If Qt liba are not available then assign :py:func:`in_main_thread_py` to in_main_thread.\n IF Qt libs are available but there is no Application ti wil emmit warning and return result of in_main_thread_py.\n\n Returns\n -------\n thread_flag : bool\n True if we are in the main thread, False otherwise.\n \"\"\"\n\n global in_main_thread\n try:\n from napari._qt.utils import in_qt_main_thread\n\n res = in_qt_main_thread()\n in_main_thread = in_qt_main_thread\n except ImportError:\n in_main_thread = in_main_thread_py\n return in_main_thread_py()\n except AttributeError:\n warnings.warn(\n \"Qt libs are available but no QtApplication instance is created\"\n )\n return in_main_thread_py()\n return res\n\n\nin_main_thread = _in_main_thread\n", "path": "napari/utils/_proxies.py"}], "after_files": [{"content": "import os\nimport re\nimport sys\nimport warnings\nfrom typing import Any, Callable, Generic, TypeVar, Union\n\nimport wrapt\n\nfrom napari.utils import misc\nfrom napari.utils.translations import trans\n\n_T = TypeVar(\"_T\")\n\n\nclass ReadOnlyWrapper(wrapt.ObjectProxy):\n \"\"\"\n Disable item and attribute setting with the exception of ``__wrapped__``.\n \"\"\"\n\n def __init__(self, wrapped, exceptions=()):\n super().__init__(wrapped)\n self._self_exceptions = exceptions\n\n def __setattr__(self, name, val):\n if (\n name not in ('__wrapped__', '_self_exceptions')\n and name not in self._self_exceptions\n ):\n raise TypeError(\n trans._(\n 'cannot set attribute {name}',\n deferred=True,\n name=name,\n )\n )\n\n super().__setattr__(name, val)\n\n def __setitem__(self, name, val):\n if name not in self._self_exceptions:\n raise TypeError(\n trans._('cannot set item {name}', deferred=True, name=name)\n )\n super().__setitem__(name, val)\n\n\n_SUNDER = re.compile('^_[^_]')\n\n\nclass PublicOnlyProxy(wrapt.ObjectProxy, Generic[_T]):\n \"\"\"Proxy to prevent private attribute and item access, recursively.\"\"\"\n\n __wrapped__: _T\n\n @staticmethod\n def _is_private_attr(name: str) -> bool:\n return name.startswith(\"_\") and not (\n name.startswith('__') and name.endswith('__')\n )\n\n @staticmethod\n def _private_attr_warning(name: str, typ: str):\n warnings.warn(\n trans._(\n \"Private attribute access ('{typ}.{name}') in this context (e.g. 
inside a plugin widget or dock widget) is deprecated and will be unavailable in version 0.5.0\",\n deferred=True,\n name=name,\n typ=typ,\n ),\n category=FutureWarning,\n stacklevel=3,\n )\n\n # This is code prepared for a moment where we want to block access to private attributes\n # raise AttributeError(\n # trans._(\n # \"Private attribute set/access ('{typ}.{name}') not allowed in this context.\",\n # deferred=True,\n # name=name,\n # typ=typ,\n # )\n # )\n\n @staticmethod\n def _is_called_from_napari():\n \"\"\"\n Check if the getter or setter is called from inner napari.\n \"\"\"\n if hasattr(sys, \"_getframe\"):\n frame = sys._getframe(2)\n return frame.f_code.co_filename.startswith(misc.ROOT_DIR)\n return False\n\n def __getattr__(self, name: str):\n if self._is_private_attr(name):\n # allow napari to access private attributes and get an non-proxy\n if self._is_called_from_napari():\n return super().__getattr__(name)\n\n typ = type(self.__wrapped__).__name__\n\n self._private_attr_warning(name, typ)\n\n return self.create(super().__getattr__(name))\n\n def __setattr__(self, name: str, value: Any):\n if (\n os.environ.get(\"NAPARI_ENSURE_PLUGIN_MAIN_THREAD\", \"0\")\n not in (\"0\", \"False\")\n ) and not in_main_thread():\n raise RuntimeError(\n \"Setting attributes on a napari object is only allowed from the main Qt thread.\"\n )\n\n if self._is_private_attr(name):\n if self._is_called_from_napari():\n return super().__setattr__(name, value)\n\n typ = type(self.__wrapped__).__name__\n self._private_attr_warning(name, typ)\n\n if isinstance(value, PublicOnlyProxy):\n # if we want to set an attribute on a PublicOnlyProxy *and* the\n # value that we want to set is itself a PublicOnlyProxy, we unwrap\n # the value. This has two benefits:\n #\n # 1. Checking the attribute later will incur a significant\n # performance cost, because _is_called_from_napari() will be\n # checked on each attribute access and it involves inspecting the\n # calling frame, which is expensive.\n # 2. Certain equality checks fail when objects are\n # PublicOnlyProxies. Notably, equality checks fail when such\n # objects are included in a Qt data model. For example, plugins can\n # grab a layer from the viewer; this layer will be wrapped by the\n # PublicOnlyProxy, and then using this object to set the current\n # layer selection will not propagate the selection to the Viewer.\n # See https://github.com/napari/napari/issues/5767\n value = value.__wrapped__\n\n setattr(self.__wrapped__, name, value)\n return None\n\n def __getitem__(self, key):\n return self.create(super().__getitem__(key))\n\n def __repr__(self):\n return repr(self.__wrapped__)\n\n def __dir__(self):\n return [x for x in dir(self.__wrapped__) if not _SUNDER.match(x)]\n\n @classmethod\n def create(cls, obj: Any) -> Union['PublicOnlyProxy', Any]:\n # restrict the scope of this proxy to napari objects\n if type(obj).__name__ == 'method':\n # If the given object is a method, we check the module *of the\n # object to which that method is bound*. 
Otherwise, the module of a\n # method is just builtins!\n mod = getattr(type(obj.__self__), '__module__', None) or ''\n else:\n # Otherwise, the module is of an object just given by the\n # __module__ attribute.\n mod = getattr(type(obj), '__module__', None) or ''\n if not mod.startswith('napari'):\n return obj\n if isinstance(obj, PublicOnlyProxy):\n return obj # don't double-wrap\n if callable(obj):\n return CallablePublicOnlyProxy(obj)\n return PublicOnlyProxy(obj)\n\n\nclass CallablePublicOnlyProxy(PublicOnlyProxy[Callable]):\n def __call__(self, *args, **kwargs):\n # if a PublicOnlyProxy is callable, then when we call it we:\n # - unwrap the arguments, to avoid performance issues detailed in\n # PublicOnlyProxy.__setattr__,\n # - call the unwrapped callable on the unwrapped arguments\n # - wrap the result in a PublicOnlyProxy\n args = [\n arg.__wrapped__ if isinstance(arg, PublicOnlyProxy) else arg\n for arg in args\n ]\n kwargs = {\n k: v.__wrapped__ if isinstance(v, PublicOnlyProxy) else v\n for k, v in kwargs.items()\n }\n return self.create(self.__wrapped__(*args, **kwargs))\n\n\ndef in_main_thread_py() -> bool:\n \"\"\"\n Check if caller is in main python thread.\n\n Returns\n -------\n thread_flag : bool\n True if we are in the main thread, False otherwise.\n \"\"\"\n import threading\n\n return threading.current_thread() == threading.main_thread()\n\n\ndef _in_main_thread() -> bool:\n \"\"\"\n General implementation of checking if we are in a proper thread.\n If Qt is available and Application is created then assign :py:func:`in_qt_main_thread` to `in_main_thread`.\n If Qt liba are not available then assign :py:func:`in_main_thread_py` to in_main_thread.\n IF Qt libs are available but there is no Application ti wil emmit warning and return result of in_main_thread_py.\n\n Returns\n -------\n thread_flag : bool\n True if we are in the main thread, False otherwise.\n \"\"\"\n\n global in_main_thread\n try:\n from napari._qt.utils import in_qt_main_thread\n\n res = in_qt_main_thread()\n in_main_thread = in_qt_main_thread\n except ImportError:\n in_main_thread = in_main_thread_py\n return in_main_thread_py()\n except AttributeError:\n warnings.warn(\n \"Qt libs are available but no QtApplication instance is created\"\n )\n return in_main_thread_py()\n return res\n\n\nin_main_thread = _in_main_thread\n", "path": "napari/utils/_proxies.py"}]} | 3,683 | 766 |
gh_patches_debug_47576 | rasdani/github-patches | git_diff | getpelican__pelican-905 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pelican-quickstart: error with accented characters
Hello,
I've got a problem with pelican-quickstart, when I put accented characters in answers.
Here is the output I got:
> Who will be the author of this web site? Guillaume LAMÉ
> Traceback (most recent call last):
> File "/home/lomig/StaticGen/Pelican/bin/pelican-quickstart", line 9, in <module>
> load_entry_point('pelican==3.2.1', 'console_scripts', 'pelican-quickstart')()
> File "/home/lomig/StaticGen/Pelican/local/lib/python2.7/site-packages/pelican/tools/pelican_quickstart.py", line 184, in main
> CONF['author'] = ask('Who will be the author of this web site?', answer=str_compat, default=args.author)
> File "/home/lomig/StaticGen/Pelican/local/lib/python2.7/site-packages/pelican/tools/pelican_quickstart.py", line 57, in wrapper
> return out.decode(sys.stdin.encoding)
> File "/home/lomig/StaticGen/Pelican/lib/python2.7/encodings/utf_8.py", line 16, in decode
> return codecs.utf_8_decode(input, errors, True)
> UnicodeEncodeError: 'ascii' codec can't encode character u'\xc9' in position 13: ordinal not in range(128)
Thanks.
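For reference, a minimal Python 2 sketch of what the traceback boils down to (the author string is taken from the session above): `raw_input(prompt).decode('utf-8')` already returns `unicode`, so the extra `out.decode(sys.stdin.encoding)` in the `decoding_strings` wrapper re-encodes the text with the default ASCII codec first and fails on the accented character:
```python
# Python 2 sketch of the double-decode failure; the author string is assumed.
author_bytes = 'Guillaume LAM\xc3\x89'   # UTF-8 bytes as read from the terminal
text = author_bytes.decode('utf-8')      # u'Guillaume LAM\xc9' (already unicode)
text.decode('utf-8')                     # implicit ASCII encode happens first:
                                         # UnicodeEncodeError: 'ascii' codec can't
                                         # encode character u'\xc9' in position 13
```
Decoding the input only once (either in `_input_compat` or in the wrapper, not in both) avoids the error.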
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pelican/tools/pelican_quickstart.py`
Content:
```
1 #!/usr/bin/env python
2
3 # -*- coding: utf-8 -*-
4 from __future__ import unicode_literals, print_function
5 import six
6
7 import os
8 import string
9 import argparse
10 import sys
11 import codecs
12
13 from pelican import __version__
14
15 _TEMPLATES_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)),
16 "templates")
17
18 CONF = {
19 'pelican': 'pelican',
20 'pelicanopts': '',
21 'basedir': os.curdir,
22 'ftp_host': 'localhost',
23 'ftp_user': 'anonymous',
24 'ftp_target_dir': '/',
25 'ssh_host': 'localhost',
26 'ssh_port': 22,
27 'ssh_user': 'root',
28 'ssh_target_dir': '/var/www',
29 's3_bucket': 'my_s3_bucket',
30 'dropbox_dir': '~/Dropbox/Public/',
31 'default_pagination': 10,
32 'siteurl': '',
33 'lang': 'en'
34 }
35
36 def _input_compat(prompt):
37 if six.PY3:
38 r = input(prompt)
39 else:
40 # FIXME: why use this with @decoding_strings?
41 r = raw_input(prompt).decode('utf-8')
42 return r
43
44 if six.PY3:
45 str_compat = str
46 else:
47 str_compat = unicode
48
49 def decoding_strings(f):
50 def wrapper(*args, **kwargs):
51 out = f(*args, **kwargs)
52 if isinstance(out, six.string_types) and not six.PY3:
53 # todo: make encoding configurable?
54 if six.PY3:
55 return out
56 else:
57 return out.decode(sys.stdin.encoding)
58 return out
59 return wrapper
60
61
62 def get_template(name, as_encoding='utf-8'):
63 template = os.path.join(_TEMPLATES_DIR, "{0}.in".format(name))
64
65 if not os.path.isfile(template):
66 raise RuntimeError("Cannot open {0}".format(template))
67
68 with codecs.open(template, 'r', as_encoding) as fd:
69 line = fd.readline()
70 while line:
71 yield line
72 line = fd.readline()
73 fd.close()
74
75
76 @decoding_strings
77 def ask(question, answer=str_compat, default=None, l=None):
78 if answer == str_compat:
79 r = ''
80 while True:
81 if default:
82 r = _input_compat('> {0} [{1}] '.format(question, default))
83 else:
84 r = _input_compat('> {0} '.format(question, default))
85
86 r = r.strip()
87
88 if len(r) <= 0:
89 if default:
90 r = default
91 break
92 else:
93 print('You must enter something')
94 else:
95 if l and len(r) != l:
96 print('You must enter a {0} letters long string'.format(l))
97 else:
98 break
99
100 return r
101
102 elif answer == bool:
103 r = None
104 while True:
105 if default is True:
106 r = _input_compat('> {0} (Y/n) '.format(question))
107 elif default is False:
108 r = _input_compat('> {0} (y/N) '.format(question))
109 else:
110 r = _input_compat('> {0} (y/n) '.format(question))
111
112 r = r.strip().lower()
113
114 if r in ('y', 'yes'):
115 r = True
116 break
117 elif r in ('n', 'no'):
118 r = False
119 break
120 elif not r:
121 r = default
122 break
123 else:
124 print("You must answer 'yes' or 'no'")
125 return r
126 elif answer == int:
127 r = None
128 while True:
129 if default:
130 r = _input_compat('> {0} [{1}] '.format(question, default))
131 else:
132 r = _input_compat('> {0} '.format(question))
133
134 r = r.strip()
135
136 if not r:
137 r = default
138 break
139
140 try:
141 r = int(r)
142 break
143 except:
144 print('You must enter an integer')
145 return r
146 else:
147 raise NotImplemented('Argument `answer` must be str_compat, bool, or integer')
148
149
150 def main():
151 parser = argparse.ArgumentParser(
152 description="A kickstarter for Pelican",
153 formatter_class=argparse.ArgumentDefaultsHelpFormatter)
154 parser.add_argument('-p', '--path', default=os.curdir,
155 help="The path to generate the blog into")
156 parser.add_argument('-t', '--title', metavar="title",
157 help='Set the title of the website')
158 parser.add_argument('-a', '--author', metavar="author",
159 help='Set the author name of the website')
160 parser.add_argument('-l', '--lang', metavar="lang",
161 help='Set the default web site language')
162
163 args = parser.parse_args()
164
165 print('''Welcome to pelican-quickstart v{v}.
166
167 This script will help you create a new Pelican-based website.
168
169 Please answer the following questions so this script can generate the files
170 needed by Pelican.
171
172 '''.format(v=__version__))
173
174 project = os.path.join(
175 os.environ.get('VIRTUAL_ENV', os.curdir), '.project')
176 if os.path.isfile(project):
177 CONF['basedir'] = open(project, 'r').read().rstrip("\n")
178 print('Using project associated with current virtual environment.'
179 'Will save to:\n%s\n' % CONF['basedir'])
180 else:
181 CONF['basedir'] = os.path.abspath(ask('Where do you want to create your new web site?', answer=str_compat, default=args.path))
182
183 CONF['sitename'] = ask('What will be the title of this web site?', answer=str_compat, default=args.title)
184 CONF['author'] = ask('Who will be the author of this web site?', answer=str_compat, default=args.author)
185 CONF['lang'] = ask('What will be the default language of this web site?', str_compat, args.lang or CONF['lang'], 2)
186
187 if ask('Do you want to specify a URL prefix? e.g., http://example.com ', answer=bool, default=True):
188 CONF['siteurl'] = ask('What is your URL prefix? (see above example; no trailing slash)', str_compat, CONF['siteurl'])
189
190 CONF['with_pagination'] = ask('Do you want to enable article pagination?', bool, bool(CONF['default_pagination']))
191
192 if CONF['with_pagination']:
193 CONF['default_pagination'] = ask('How many articles per page do you want?', int, CONF['default_pagination'])
194 else:
195 CONF['default_pagination'] = False
196
197 mkfile = ask('Do you want to generate a Makefile to easily manage your website?', bool, True)
198 develop = ask('Do you want an auto-reload & simpleHTTP script to assist with theme and site development?', bool, True)
199
200 if mkfile:
201 if ask('Do you want to upload your website using FTP?', answer=bool, default=False):
202 CONF['ftp_host'] = ask('What is the hostname of your FTP server?', str_compat, CONF['ftp_host'])
203 CONF['ftp_user'] = ask('What is your username on that server?', str_compat, CONF['ftp_user'])
204 CONF['ftp_target_dir'] = ask('Where do you want to put your web site on that server?', str_compat, CONF['ftp_target_dir'])
205 if ask('Do you want to upload your website using SSH?', answer=bool, default=False):
206 CONF['ssh_host'] = ask('What is the hostname of your SSH server?', str_compat, CONF['ssh_host'])
207 CONF['ssh_port'] = ask('What is the port of your SSH server?', int, CONF['ssh_port'])
208 CONF['ssh_user'] = ask('What is your username on that server?', str_compat, CONF['ssh_user'])
209 CONF['ssh_target_dir'] = ask('Where do you want to put your web site on that server?', str_compat, CONF['ssh_target_dir'])
210 if ask('Do you want to upload your website using Dropbox?', answer=bool, default=False):
211 CONF['dropbox_dir'] = ask('Where is your Dropbox directory?', str_compat, CONF['dropbox_dir'])
212 if ask('Do you want to upload your website using S3?', answer=bool, default=False):
213 CONF['s3_bucket'] = ask('What is the name of your S3 bucket?', str_compat, CONF['s3_bucket'])
214
215 try:
216 os.makedirs(os.path.join(CONF['basedir'], 'content'))
217 except OSError as e:
218 print('Error: {0}'.format(e))
219
220 try:
221 os.makedirs(os.path.join(CONF['basedir'], 'output'))
222 except OSError as e:
223 print('Error: {0}'.format(e))
224
225 try:
226 with codecs.open(os.path.join(CONF['basedir'], 'pelicanconf.py'), 'w', 'utf-8') as fd:
227 conf_python = dict()
228 for key, value in CONF.items():
229 conf_python[key] = repr(value)
230
231 for line in get_template('pelicanconf.py'):
232 template = string.Template(line)
233 fd.write(template.safe_substitute(conf_python))
234 fd.close()
235 except OSError as e:
236 print('Error: {0}'.format(e))
237
238 try:
239 with codecs.open(os.path.join(CONF['basedir'], 'publishconf.py'), 'w', 'utf-8') as fd:
240 for line in get_template('publishconf.py'):
241 template = string.Template(line)
242 fd.write(template.safe_substitute(CONF))
243 fd.close()
244 except OSError as e:
245 print('Error: {0}'.format(e))
246
247 if mkfile:
248 try:
249 with codecs.open(os.path.join(CONF['basedir'], 'Makefile'), 'w', 'utf-8') as fd:
250 mkfile_template_name = 'Makefile'
251 py_v = 'PY=python'
252 if six.PY3:
253 py_v = 'PY=python3'
254 template = string.Template(py_v)
255 fd.write(template.safe_substitute(CONF))
256 fd.write('\n')
257 for line in get_template(mkfile_template_name):
258 template = string.Template(line)
259 fd.write(template.safe_substitute(CONF))
260 fd.close()
261 except OSError as e:
262 print('Error: {0}'.format(e))
263
264 if develop:
265 conf_shell = dict()
266 for key, value in CONF.items():
267 if isinstance(value, six.string_types) and ' ' in value:
268 value = '"' + value.replace('"', '\\"') + '"'
269 conf_shell[key] = value
270 try:
271 with codecs.open(os.path.join(CONF['basedir'], 'develop_server.sh'), 'w', 'utf-8') as fd:
272 lines = list(get_template('develop_server.sh'))
273 py_v = 'PY=python\n'
274 if six.PY3:
275 py_v = 'PY=python3\n'
276 lines = lines[:4] + [py_v] + lines[4:]
277 for line in lines:
278 template = string.Template(line)
279 fd.write(template.safe_substitute(conf_shell))
280 fd.close()
281 os.chmod((os.path.join(CONF['basedir'], 'develop_server.sh')), 493) # mode 0o755
282 except OSError as e:
283 print('Error: {0}'.format(e))
284
285 print('Done. Your new project is available at %s' % CONF['basedir'])
286
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pelican/tools/pelican_quickstart.py b/pelican/tools/pelican_quickstart.py
--- a/pelican/tools/pelican_quickstart.py
+++ b/pelican/tools/pelican_quickstart.py
@@ -37,8 +37,7 @@
if six.PY3:
r = input(prompt)
else:
- # FIXME: why use this with @decoding_strings?
- r = raw_input(prompt).decode('utf-8')
+ r = raw_input(prompt)
return r
if six.PY3:
| {"golden_diff": "diff --git a/pelican/tools/pelican_quickstart.py b/pelican/tools/pelican_quickstart.py\n--- a/pelican/tools/pelican_quickstart.py\n+++ b/pelican/tools/pelican_quickstart.py\n@@ -37,8 +37,7 @@\n if six.PY3:\n r = input(prompt)\n else:\n- # FIXME: why use this with @decoding_strings?\n- r = raw_input(prompt).decode('utf-8')\n+ r = raw_input(prompt)\n return r\n \n if six.PY3:\n", "issue": "pelican-quickstart: error with accented characters\nHello,\n\nI've got a problem with pelican-quickstart, when I put accented characters in answers.\n\nHere is the output I got : \n\n> Who will be the author of this web site? Guillaume LAM\u00c9\n> Traceback (most recent call last):\n> File \"/home/lomig/StaticGen/Pelican/bin/pelican-quickstart\", line 9, in <module>\n> load_entry_point('pelican==3.2.1', 'console_scripts', 'pelican-quickstart')()\n> File \"/home/lomig/StaticGen/Pelican/local/lib/python2.7/site-packages/pelican/tools/pelican_quickstart.py\", line 184, in main\n> CONF['author'] = ask('Who will be the author of this web site?', answer=str_compat, default=args.author)\n> File \"/home/lomig/StaticGen/Pelican/local/lib/python2.7/site-packages/pelican/tools/pelican_quickstart.py\", line 57, in wrapper\n> return out.decode(sys.stdin.encoding)\n> File \"/home/lomig/StaticGen/Pelican/lib/python2.7/encodings/utf_8.py\", line 16, in decode\n> return codecs.utf_8_decode(input, errors, True)\n> UnicodeEncodeError: 'ascii' codec can't encode character u'\\xc9' in position 13: ordinal not in range(128)\n\nThanks.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals, print_function\nimport six\n\nimport os\nimport string\nimport argparse\nimport sys\nimport codecs\n\nfrom pelican import __version__\n\n_TEMPLATES_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)),\n \"templates\")\n\nCONF = {\n 'pelican': 'pelican',\n 'pelicanopts': '',\n 'basedir': os.curdir,\n 'ftp_host': 'localhost',\n 'ftp_user': 'anonymous',\n 'ftp_target_dir': '/',\n 'ssh_host': 'localhost',\n 'ssh_port': 22,\n 'ssh_user': 'root',\n 'ssh_target_dir': '/var/www',\n 's3_bucket': 'my_s3_bucket',\n 'dropbox_dir': '~/Dropbox/Public/',\n 'default_pagination': 10,\n 'siteurl': '',\n 'lang': 'en'\n}\n\ndef _input_compat(prompt):\n if six.PY3:\n r = input(prompt)\n else:\n # FIXME: why use this with @decoding_strings?\n r = raw_input(prompt).decode('utf-8')\n return r\n\nif six.PY3:\n str_compat = str\nelse:\n str_compat = unicode\n\ndef decoding_strings(f):\n def wrapper(*args, **kwargs):\n out = f(*args, **kwargs)\n if isinstance(out, six.string_types) and not six.PY3:\n # todo: make encoding configurable?\n if six.PY3:\n return out\n else:\n return out.decode(sys.stdin.encoding)\n return out\n return wrapper\n\n\ndef get_template(name, as_encoding='utf-8'):\n template = os.path.join(_TEMPLATES_DIR, \"{0}.in\".format(name))\n\n if not os.path.isfile(template):\n raise RuntimeError(\"Cannot open {0}\".format(template))\n\n with codecs.open(template, 'r', as_encoding) as fd:\n line = fd.readline()\n while line:\n yield line\n line = fd.readline()\n fd.close()\n\n\n@decoding_strings\ndef ask(question, answer=str_compat, default=None, l=None):\n if answer == str_compat:\n r = ''\n while True:\n if default:\n r = _input_compat('> {0} [{1}] '.format(question, default))\n else:\n r = _input_compat('> {0} '.format(question, default))\n\n r = r.strip()\n\n if len(r) <= 0:\n if default:\n r = default\n break\n else:\n print('You must enter 
something')\n else:\n if l and len(r) != l:\n print('You must enter a {0} letters long string'.format(l))\n else:\n break\n\n return r\n\n elif answer == bool:\n r = None\n while True:\n if default is True:\n r = _input_compat('> {0} (Y/n) '.format(question))\n elif default is False:\n r = _input_compat('> {0} (y/N) '.format(question))\n else:\n r = _input_compat('> {0} (y/n) '.format(question))\n\n r = r.strip().lower()\n\n if r in ('y', 'yes'):\n r = True\n break\n elif r in ('n', 'no'):\n r = False\n break\n elif not r:\n r = default\n break\n else:\n print(\"You must answer 'yes' or 'no'\")\n return r\n elif answer == int:\n r = None\n while True:\n if default:\n r = _input_compat('> {0} [{1}] '.format(question, default))\n else:\n r = _input_compat('> {0} '.format(question))\n\n r = r.strip()\n\n if not r:\n r = default\n break\n\n try:\n r = int(r)\n break\n except:\n print('You must enter an integer')\n return r\n else:\n raise NotImplemented('Argument `answer` must be str_compat, bool, or integer')\n\n\ndef main():\n parser = argparse.ArgumentParser(\n description=\"A kickstarter for Pelican\",\n formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n parser.add_argument('-p', '--path', default=os.curdir,\n help=\"The path to generate the blog into\")\n parser.add_argument('-t', '--title', metavar=\"title\",\n help='Set the title of the website')\n parser.add_argument('-a', '--author', metavar=\"author\",\n help='Set the author name of the website')\n parser.add_argument('-l', '--lang', metavar=\"lang\",\n help='Set the default web site language')\n\n args = parser.parse_args()\n\n print('''Welcome to pelican-quickstart v{v}.\n\nThis script will help you create a new Pelican-based website.\n\nPlease answer the following questions so this script can generate the files\nneeded by Pelican.\n\n '''.format(v=__version__))\n\n project = os.path.join(\n os.environ.get('VIRTUAL_ENV', os.curdir), '.project')\n if os.path.isfile(project):\n CONF['basedir'] = open(project, 'r').read().rstrip(\"\\n\")\n print('Using project associated with current virtual environment.'\n 'Will save to:\\n%s\\n' % CONF['basedir'])\n else:\n CONF['basedir'] = os.path.abspath(ask('Where do you want to create your new web site?', answer=str_compat, default=args.path))\n\n CONF['sitename'] = ask('What will be the title of this web site?', answer=str_compat, default=args.title)\n CONF['author'] = ask('Who will be the author of this web site?', answer=str_compat, default=args.author)\n CONF['lang'] = ask('What will be the default language of this web site?', str_compat, args.lang or CONF['lang'], 2)\n\n if ask('Do you want to specify a URL prefix? e.g., http://example.com ', answer=bool, default=True):\n CONF['siteurl'] = ask('What is your URL prefix? 
(see above example; no trailing slash)', str_compat, CONF['siteurl'])\n\n CONF['with_pagination'] = ask('Do you want to enable article pagination?', bool, bool(CONF['default_pagination']))\n\n if CONF['with_pagination']:\n CONF['default_pagination'] = ask('How many articles per page do you want?', int, CONF['default_pagination'])\n else:\n CONF['default_pagination'] = False\n\n mkfile = ask('Do you want to generate a Makefile to easily manage your website?', bool, True)\n develop = ask('Do you want an auto-reload & simpleHTTP script to assist with theme and site development?', bool, True)\n\n if mkfile:\n if ask('Do you want to upload your website using FTP?', answer=bool, default=False):\n CONF['ftp_host'] = ask('What is the hostname of your FTP server?', str_compat, CONF['ftp_host'])\n CONF['ftp_user'] = ask('What is your username on that server?', str_compat, CONF['ftp_user'])\n CONF['ftp_target_dir'] = ask('Where do you want to put your web site on that server?', str_compat, CONF['ftp_target_dir'])\n if ask('Do you want to upload your website using SSH?', answer=bool, default=False):\n CONF['ssh_host'] = ask('What is the hostname of your SSH server?', str_compat, CONF['ssh_host'])\n CONF['ssh_port'] = ask('What is the port of your SSH server?', int, CONF['ssh_port'])\n CONF['ssh_user'] = ask('What is your username on that server?', str_compat, CONF['ssh_user'])\n CONF['ssh_target_dir'] = ask('Where do you want to put your web site on that server?', str_compat, CONF['ssh_target_dir'])\n if ask('Do you want to upload your website using Dropbox?', answer=bool, default=False):\n CONF['dropbox_dir'] = ask('Where is your Dropbox directory?', str_compat, CONF['dropbox_dir'])\n if ask('Do you want to upload your website using S3?', answer=bool, default=False):\n CONF['s3_bucket'] = ask('What is the name of your S3 bucket?', str_compat, CONF['s3_bucket'])\n\n try:\n os.makedirs(os.path.join(CONF['basedir'], 'content'))\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n os.makedirs(os.path.join(CONF['basedir'], 'output'))\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'pelicanconf.py'), 'w', 'utf-8') as fd:\n conf_python = dict()\n for key, value in CONF.items():\n conf_python[key] = repr(value)\n\n for line in get_template('pelicanconf.py'):\n template = string.Template(line)\n fd.write(template.safe_substitute(conf_python))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'publishconf.py'), 'w', 'utf-8') as fd:\n for line in get_template('publishconf.py'):\n template = string.Template(line)\n fd.write(template.safe_substitute(CONF))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n if mkfile:\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'Makefile'), 'w', 'utf-8') as fd:\n mkfile_template_name = 'Makefile'\n py_v = 'PY=python'\n if six.PY3:\n py_v = 'PY=python3'\n template = string.Template(py_v)\n fd.write(template.safe_substitute(CONF))\n fd.write('\\n')\n for line in get_template(mkfile_template_name):\n template = string.Template(line)\n fd.write(template.safe_substitute(CONF))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n if develop:\n conf_shell = dict()\n for key, value in CONF.items():\n if isinstance(value, six.string_types) and ' ' in value:\n value = '\"' + value.replace('\"', '\\\\\"') + '\"'\n conf_shell[key] = value\n try:\n with 
codecs.open(os.path.join(CONF['basedir'], 'develop_server.sh'), 'w', 'utf-8') as fd:\n lines = list(get_template('develop_server.sh'))\n py_v = 'PY=python\\n'\n if six.PY3:\n py_v = 'PY=python3\\n'\n lines = lines[:4] + [py_v] + lines[4:]\n for line in lines:\n template = string.Template(line)\n fd.write(template.safe_substitute(conf_shell))\n fd.close()\n os.chmod((os.path.join(CONF['basedir'], 'develop_server.sh')), 493) # mode 0o755\n except OSError as e:\n print('Error: {0}'.format(e))\n\n print('Done. Your new project is available at %s' % CONF['basedir'])\n", "path": "pelican/tools/pelican_quickstart.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals, print_function\nimport six\n\nimport os\nimport string\nimport argparse\nimport sys\nimport codecs\n\nfrom pelican import __version__\n\n_TEMPLATES_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)),\n \"templates\")\n\nCONF = {\n 'pelican': 'pelican',\n 'pelicanopts': '',\n 'basedir': os.curdir,\n 'ftp_host': 'localhost',\n 'ftp_user': 'anonymous',\n 'ftp_target_dir': '/',\n 'ssh_host': 'localhost',\n 'ssh_port': 22,\n 'ssh_user': 'root',\n 'ssh_target_dir': '/var/www',\n 's3_bucket': 'my_s3_bucket',\n 'dropbox_dir': '~/Dropbox/Public/',\n 'default_pagination': 10,\n 'siteurl': '',\n 'lang': 'en'\n}\n\ndef _input_compat(prompt):\n if six.PY3:\n r = input(prompt)\n else:\n r = raw_input(prompt)\n return r\n\nif six.PY3:\n str_compat = str\nelse:\n str_compat = unicode\n\ndef decoding_strings(f):\n def wrapper(*args, **kwargs):\n out = f(*args, **kwargs)\n if isinstance(out, six.string_types) and not six.PY3:\n # todo: make encoding configurable?\n if six.PY3:\n return out\n else:\n return out.decode(sys.stdin.encoding)\n return out\n return wrapper\n\n\ndef get_template(name, as_encoding='utf-8'):\n template = os.path.join(_TEMPLATES_DIR, \"{0}.in\".format(name))\n\n if not os.path.isfile(template):\n raise RuntimeError(\"Cannot open {0}\".format(template))\n\n with codecs.open(template, 'r', as_encoding) as fd:\n line = fd.readline()\n while line:\n yield line\n line = fd.readline()\n fd.close()\n\n\n@decoding_strings\ndef ask(question, answer=str_compat, default=None, l=None):\n if answer == str_compat:\n r = ''\n while True:\n if default:\n r = _input_compat('> {0} [{1}] '.format(question, default))\n else:\n r = _input_compat('> {0} '.format(question, default))\n\n r = r.strip()\n\n if len(r) <= 0:\n if default:\n r = default\n break\n else:\n print('You must enter something')\n else:\n if l and len(r) != l:\n print('You must enter a {0} letters long string'.format(l))\n else:\n break\n\n return r\n\n elif answer == bool:\n r = None\n while True:\n if default is True:\n r = _input_compat('> {0} (Y/n) '.format(question))\n elif default is False:\n r = _input_compat('> {0} (y/N) '.format(question))\n else:\n r = _input_compat('> {0} (y/n) '.format(question))\n\n r = r.strip().lower()\n\n if r in ('y', 'yes'):\n r = True\n break\n elif r in ('n', 'no'):\n r = False\n break\n elif not r:\n r = default\n break\n else:\n print(\"You must answer 'yes' or 'no'\")\n return r\n elif answer == int:\n r = None\n while True:\n if default:\n r = _input_compat('> {0} [{1}] '.format(question, default))\n else:\n r = _input_compat('> {0} '.format(question))\n\n r = r.strip()\n\n if not r:\n r = default\n break\n\n try:\n r = int(r)\n break\n except:\n print('You must enter an integer')\n return r\n else:\n raise NotImplemented('Argument `answer` must be str_compat, 
bool, or integer')\n\n\ndef main():\n parser = argparse.ArgumentParser(\n description=\"A kickstarter for Pelican\",\n formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n parser.add_argument('-p', '--path', default=os.curdir,\n help=\"The path to generate the blog into\")\n parser.add_argument('-t', '--title', metavar=\"title\",\n help='Set the title of the website')\n parser.add_argument('-a', '--author', metavar=\"author\",\n help='Set the author name of the website')\n parser.add_argument('-l', '--lang', metavar=\"lang\",\n help='Set the default web site language')\n\n args = parser.parse_args()\n\n print('''Welcome to pelican-quickstart v{v}.\n\nThis script will help you create a new Pelican-based website.\n\nPlease answer the following questions so this script can generate the files\nneeded by Pelican.\n\n '''.format(v=__version__))\n\n project = os.path.join(\n os.environ.get('VIRTUAL_ENV', os.curdir), '.project')\n if os.path.isfile(project):\n CONF['basedir'] = open(project, 'r').read().rstrip(\"\\n\")\n print('Using project associated with current virtual environment.'\n 'Will save to:\\n%s\\n' % CONF['basedir'])\n else:\n CONF['basedir'] = os.path.abspath(ask('Where do you want to create your new web site?', answer=str_compat, default=args.path))\n\n CONF['sitename'] = ask('What will be the title of this web site?', answer=str_compat, default=args.title)\n CONF['author'] = ask('Who will be the author of this web site?', answer=str_compat, default=args.author)\n CONF['lang'] = ask('What will be the default language of this web site?', str_compat, args.lang or CONF['lang'], 2)\n\n if ask('Do you want to specify a URL prefix? e.g., http://example.com ', answer=bool, default=True):\n CONF['siteurl'] = ask('What is your URL prefix? (see above example; no trailing slash)', str_compat, CONF['siteurl'])\n\n CONF['with_pagination'] = ask('Do you want to enable article pagination?', bool, bool(CONF['default_pagination']))\n\n if CONF['with_pagination']:\n CONF['default_pagination'] = ask('How many articles per page do you want?', int, CONF['default_pagination'])\n else:\n CONF['default_pagination'] = False\n\n mkfile = ask('Do you want to generate a Makefile to easily manage your website?', bool, True)\n develop = ask('Do you want an auto-reload & simpleHTTP script to assist with theme and site development?', bool, True)\n\n if mkfile:\n if ask('Do you want to upload your website using FTP?', answer=bool, default=False):\n CONF['ftp_host'] = ask('What is the hostname of your FTP server?', str_compat, CONF['ftp_host'])\n CONF['ftp_user'] = ask('What is your username on that server?', str_compat, CONF['ftp_user'])\n CONF['ftp_target_dir'] = ask('Where do you want to put your web site on that server?', str_compat, CONF['ftp_target_dir'])\n if ask('Do you want to upload your website using SSH?', answer=bool, default=False):\n CONF['ssh_host'] = ask('What is the hostname of your SSH server?', str_compat, CONF['ssh_host'])\n CONF['ssh_port'] = ask('What is the port of your SSH server?', int, CONF['ssh_port'])\n CONF['ssh_user'] = ask('What is your username on that server?', str_compat, CONF['ssh_user'])\n CONF['ssh_target_dir'] = ask('Where do you want to put your web site on that server?', str_compat, CONF['ssh_target_dir'])\n if ask('Do you want to upload your website using Dropbox?', answer=bool, default=False):\n CONF['dropbox_dir'] = ask('Where is your Dropbox directory?', str_compat, CONF['dropbox_dir'])\n if ask('Do you want to upload your website using S3?', answer=bool, 
default=False):\n CONF['s3_bucket'] = ask('What is the name of your S3 bucket?', str_compat, CONF['s3_bucket'])\n\n try:\n os.makedirs(os.path.join(CONF['basedir'], 'content'))\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n os.makedirs(os.path.join(CONF['basedir'], 'output'))\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'pelicanconf.py'), 'w', 'utf-8') as fd:\n conf_python = dict()\n for key, value in CONF.items():\n conf_python[key] = repr(value)\n\n for line in get_template('pelicanconf.py'):\n template = string.Template(line)\n fd.write(template.safe_substitute(conf_python))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'publishconf.py'), 'w', 'utf-8') as fd:\n for line in get_template('publishconf.py'):\n template = string.Template(line)\n fd.write(template.safe_substitute(CONF))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n if mkfile:\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'Makefile'), 'w', 'utf-8') as fd:\n mkfile_template_name = 'Makefile'\n py_v = 'PY=python'\n if six.PY3:\n py_v = 'PY=python3'\n template = string.Template(py_v)\n fd.write(template.safe_substitute(CONF))\n fd.write('\\n')\n for line in get_template(mkfile_template_name):\n template = string.Template(line)\n fd.write(template.safe_substitute(CONF))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n if develop:\n conf_shell = dict()\n for key, value in CONF.items():\n if isinstance(value, six.string_types) and ' ' in value:\n value = '\"' + value.replace('\"', '\\\\\"') + '\"'\n conf_shell[key] = value\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'develop_server.sh'), 'w', 'utf-8') as fd:\n lines = list(get_template('develop_server.sh'))\n py_v = 'PY=python\\n'\n if six.PY3:\n py_v = 'PY=python3\\n'\n lines = lines[:4] + [py_v] + lines[4:]\n for line in lines:\n template = string.Template(line)\n fd.write(template.safe_substitute(conf_shell))\n fd.close()\n os.chmod((os.path.join(CONF['basedir'], 'develop_server.sh')), 493) # mode 0o755\n except OSError as e:\n print('Error: {0}'.format(e))\n\n print('Done. Your new project is available at %s' % CONF['basedir'])\n", "path": "pelican/tools/pelican_quickstart.py"}]} | 3,814 | 126 |
gh_patches_debug_21314 | rasdani/github-patches | git_diff | matrix-org__synapse-4330 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
500s calling /messages on matrix.org
as seen by @schiessle on riot-android at https://github.com/vector-im/riot-android/issues/2802
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `synapse/handlers/pagination.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright 2014 - 2016 OpenMarket Ltd
3 # Copyright 2017 - 2018 New Vector Ltd
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 import logging
17
18 from twisted.internet import defer
19 from twisted.python.failure import Failure
20
21 from synapse.api.constants import EventTypes, Membership
22 from synapse.api.errors import SynapseError
23 from synapse.events.utils import serialize_event
24 from synapse.storage.state import StateFilter
25 from synapse.types import RoomStreamToken
26 from synapse.util.async_helpers import ReadWriteLock
27 from synapse.util.logcontext import run_in_background
28 from synapse.util.stringutils import random_string
29 from synapse.visibility import filter_events_for_client
30
31 logger = logging.getLogger(__name__)
32
33
34 class PurgeStatus(object):
35 """Object tracking the status of a purge request
36
37 This class contains information on the progress of a purge request, for
38 return by get_purge_status.
39
40 Attributes:
41 status (int): Tracks whether this request has completed. One of
42 STATUS_{ACTIVE,COMPLETE,FAILED}
43 """
44
45 STATUS_ACTIVE = 0
46 STATUS_COMPLETE = 1
47 STATUS_FAILED = 2
48
49 STATUS_TEXT = {
50 STATUS_ACTIVE: "active",
51 STATUS_COMPLETE: "complete",
52 STATUS_FAILED: "failed",
53 }
54
55 def __init__(self):
56 self.status = PurgeStatus.STATUS_ACTIVE
57
58 def asdict(self):
59 return {
60 "status": PurgeStatus.STATUS_TEXT[self.status]
61 }
62
63
64 class PaginationHandler(object):
65 """Handles pagination and purge history requests.
66
67 These are in the same handler due to the fact we need to block clients
68 paginating during a purge.
69 """
70
71 def __init__(self, hs):
72 self.hs = hs
73 self.auth = hs.get_auth()
74 self.store = hs.get_datastore()
75 self.clock = hs.get_clock()
76
77 self.pagination_lock = ReadWriteLock()
78 self._purges_in_progress_by_room = set()
79 # map from purge id to PurgeStatus
80 self._purges_by_id = {}
81
82 def start_purge_history(self, room_id, token,
83 delete_local_events=False):
84 """Start off a history purge on a room.
85
86 Args:
87 room_id (str): The room to purge from
88
89 token (str): topological token to delete events before
90 delete_local_events (bool): True to delete local events as well as
91 remote ones
92
93 Returns:
94 str: unique ID for this purge transaction.
95 """
96 if room_id in self._purges_in_progress_by_room:
97 raise SynapseError(
98 400,
99 "History purge already in progress for %s" % (room_id, ),
100 )
101
102 purge_id = random_string(16)
103
104 # we log the purge_id here so that it can be tied back to the
105 # request id in the log lines.
106 logger.info("[purge] starting purge_id %s", purge_id)
107
108 self._purges_by_id[purge_id] = PurgeStatus()
109 run_in_background(
110 self._purge_history,
111 purge_id, room_id, token, delete_local_events,
112 )
113 return purge_id
114
115 @defer.inlineCallbacks
116 def _purge_history(self, purge_id, room_id, token,
117 delete_local_events):
118 """Carry out a history purge on a room.
119
120 Args:
121 purge_id (str): The id for this purge
122 room_id (str): The room to purge from
123 token (str): topological token to delete events before
124 delete_local_events (bool): True to delete local events as well as
125 remote ones
126
127 Returns:
128 Deferred
129 """
130 self._purges_in_progress_by_room.add(room_id)
131 try:
132 with (yield self.pagination_lock.write(room_id)):
133 yield self.store.purge_history(
134 room_id, token, delete_local_events,
135 )
136 logger.info("[purge] complete")
137 self._purges_by_id[purge_id].status = PurgeStatus.STATUS_COMPLETE
138 except Exception:
139 logger.error("[purge] failed: %s", Failure().getTraceback().rstrip())
140 self._purges_by_id[purge_id].status = PurgeStatus.STATUS_FAILED
141 finally:
142 self._purges_in_progress_by_room.discard(room_id)
143
144 # remove the purge from the list 24 hours after it completes
145 def clear_purge():
146 del self._purges_by_id[purge_id]
147 self.hs.get_reactor().callLater(24 * 3600, clear_purge)
148
149 def get_purge_status(self, purge_id):
150 """Get the current status of an active purge
151
152 Args:
153 purge_id (str): purge_id returned by start_purge_history
154
155 Returns:
156 PurgeStatus|None
157 """
158 return self._purges_by_id.get(purge_id)
159
160 @defer.inlineCallbacks
161 def get_messages(self, requester, room_id=None, pagin_config=None,
162 as_client_event=True, event_filter=None):
163 """Get messages in a room.
164
165 Args:
166 requester (Requester): The user requesting messages.
167 room_id (str): The room they want messages from.
168 pagin_config (synapse.api.streams.PaginationConfig): The pagination
169 config rules to apply, if any.
170 as_client_event (bool): True to get events in client-server format.
171 event_filter (Filter): Filter to apply to results or None
172 Returns:
173 dict: Pagination API results
174 """
175 user_id = requester.user.to_string()
176
177 if pagin_config.from_token:
178 room_token = pagin_config.from_token.room_key
179 else:
180 pagin_config.from_token = (
181 yield self.hs.get_event_sources().get_current_token_for_room(
182 room_id=room_id
183 )
184 )
185 room_token = pagin_config.from_token.room_key
186
187 room_token = RoomStreamToken.parse(room_token)
188
189 pagin_config.from_token = pagin_config.from_token.copy_and_replace(
190 "room_key", str(room_token)
191 )
192
193 source_config = pagin_config.get_source_config("room")
194
195 with (yield self.pagination_lock.read(room_id)):
196 membership, member_event_id = yield self.auth.check_in_room_or_world_readable(
197 room_id, user_id
198 )
199
200 if source_config.direction == 'b':
201 # if we're going backwards, we might need to backfill. This
202 # requires that we have a topo token.
203 if room_token.topological:
204 max_topo = room_token.topological
205 else:
206 max_topo = yield self.store.get_max_topological_token(
207 room_id, room_token.stream
208 )
209
210 if membership == Membership.LEAVE:
211 # If they have left the room then clamp the token to be before
212 # they left the room, to save the effort of loading from the
213 # database.
214 leave_token = yield self.store.get_topological_token_for_event(
215 member_event_id
216 )
217 leave_token = RoomStreamToken.parse(leave_token)
218 if leave_token.topological < max_topo:
219 source_config.from_key = str(leave_token)
220
221 yield self.hs.get_handlers().federation_handler.maybe_backfill(
222 room_id, max_topo
223 )
224
225 events, next_key = yield self.store.paginate_room_events(
226 room_id=room_id,
227 from_key=source_config.from_key,
228 to_key=source_config.to_key,
229 direction=source_config.direction,
230 limit=source_config.limit,
231 event_filter=event_filter,
232 )
233
234 next_token = pagin_config.from_token.copy_and_replace(
235 "room_key", next_key
236 )
237
238 if not events:
239 defer.returnValue({
240 "chunk": [],
241 "start": pagin_config.from_token.to_string(),
242 "end": next_token.to_string(),
243 })
244
245 if event_filter:
246 events = event_filter.filter(events)
247
248 events = yield filter_events_for_client(
249 self.store,
250 user_id,
251 events,
252 is_peeking=(member_event_id is None),
253 )
254
255 state = None
256 if event_filter and event_filter.lazy_load_members():
257 # TODO: remove redundant members
258
259 # FIXME: we also care about invite targets etc.
260 state_filter = StateFilter.from_types(
261 (EventTypes.Member, event.sender)
262 for event in events
263 )
264
265 state_ids = yield self.store.get_state_ids_for_event(
266 events[0].event_id, state_filter=state_filter,
267 )
268
269 if state_ids:
270 state = yield self.store.get_events(list(state_ids.values()))
271 state = state.values()
272
273 time_now = self.clock.time_msec()
274
275 chunk = {
276 "chunk": [
277 serialize_event(e, time_now, as_client_event)
278 for e in events
279 ],
280 "start": pagin_config.from_token.to_string(),
281 "end": next_token.to_string(),
282 }
283
284 if state:
285 chunk["state"] = [
286 serialize_event(e, time_now, as_client_event)
287 for e in state
288 ]
289
290 defer.returnValue(chunk)
291
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/synapse/handlers/pagination.py b/synapse/handlers/pagination.py
--- a/synapse/handlers/pagination.py
+++ b/synapse/handlers/pagination.py
@@ -235,6 +235,17 @@
"room_key", next_key
)
+ if events:
+ if event_filter:
+ events = event_filter.filter(events)
+
+ events = yield filter_events_for_client(
+ self.store,
+ user_id,
+ events,
+ is_peeking=(member_event_id is None),
+ )
+
if not events:
defer.returnValue({
"chunk": [],
@@ -242,16 +253,6 @@
"end": next_token.to_string(),
})
- if event_filter:
- events = event_filter.filter(events)
-
- events = yield filter_events_for_client(
- self.store,
- user_id,
- events,
- is_peeking=(member_event_id is None),
- )
-
state = None
if event_filter and event_filter.lazy_load_members():
# TODO: remove redundant members
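The effect of this patch is to run both the event filter and the per-user visibility filter before the empty-result check, so a page whose events are all filtered out returns an empty chunk instead of reaching code that assumes at least one event survived (the lazy-loaded-members branch reads `events[0]`). Below is a minimal, self-contained sketch of that ordering, with plain functions standing in for the filters; the names are illustrative only, not Synapse APIs.

```python
def paginate(events, event_filter=None, visible_to_user=lambda e: True):
    """Filter first, then decide whether the page is empty (mirrors the patched order)."""
    if events:
        if event_filter:
            events = [e for e in events if event_filter(e)]
        # Visibility filtering can also drop every remaining event.
        events = [e for e in events if visible_to_user(e)]

    if not events:
        return {"chunk": []}  # nothing visible: never touch events[0] below

    return {"chunk": events, "first_event": events[0]}

print(paginate(["m1", "m2"], visible_to_user=lambda e: False))   # {'chunk': []}
print(paginate(["m1", "m2"], event_filter=lambda e: e == "m1"))  # first_event == 'm1'
```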
| {"golden_diff": "diff --git a/synapse/handlers/pagination.py b/synapse/handlers/pagination.py\n--- a/synapse/handlers/pagination.py\n+++ b/synapse/handlers/pagination.py\n@@ -235,6 +235,17 @@\n \"room_key\", next_key\n )\n \n+ if events:\n+ if event_filter:\n+ events = event_filter.filter(events)\n+\n+ events = yield filter_events_for_client(\n+ self.store,\n+ user_id,\n+ events,\n+ is_peeking=(member_event_id is None),\n+ )\n+\n if not events:\n defer.returnValue({\n \"chunk\": [],\n@@ -242,16 +253,6 @@\n \"end\": next_token.to_string(),\n })\n \n- if event_filter:\n- events = event_filter.filter(events)\n-\n- events = yield filter_events_for_client(\n- self.store,\n- user_id,\n- events,\n- is_peeking=(member_event_id is None),\n- )\n-\n state = None\n if event_filter and event_filter.lazy_load_members():\n # TODO: remove redundant members\n", "issue": "500s calling /messages on matrix.org\nas seen by @schiessle on riot-android at https://github.com/vector-im/riot-android/issues/2802\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2014 - 2016 OpenMarket Ltd\n# Copyright 2017 - 2018 New Vector Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport logging\n\nfrom twisted.internet import defer\nfrom twisted.python.failure import Failure\n\nfrom synapse.api.constants import EventTypes, Membership\nfrom synapse.api.errors import SynapseError\nfrom synapse.events.utils import serialize_event\nfrom synapse.storage.state import StateFilter\nfrom synapse.types import RoomStreamToken\nfrom synapse.util.async_helpers import ReadWriteLock\nfrom synapse.util.logcontext import run_in_background\nfrom synapse.util.stringutils import random_string\nfrom synapse.visibility import filter_events_for_client\n\nlogger = logging.getLogger(__name__)\n\n\nclass PurgeStatus(object):\n \"\"\"Object tracking the status of a purge request\n\n This class contains information on the progress of a purge request, for\n return by get_purge_status.\n\n Attributes:\n status (int): Tracks whether this request has completed. 
One of\n STATUS_{ACTIVE,COMPLETE,FAILED}\n \"\"\"\n\n STATUS_ACTIVE = 0\n STATUS_COMPLETE = 1\n STATUS_FAILED = 2\n\n STATUS_TEXT = {\n STATUS_ACTIVE: \"active\",\n STATUS_COMPLETE: \"complete\",\n STATUS_FAILED: \"failed\",\n }\n\n def __init__(self):\n self.status = PurgeStatus.STATUS_ACTIVE\n\n def asdict(self):\n return {\n \"status\": PurgeStatus.STATUS_TEXT[self.status]\n }\n\n\nclass PaginationHandler(object):\n \"\"\"Handles pagination and purge history requests.\n\n These are in the same handler due to the fact we need to block clients\n paginating during a purge.\n \"\"\"\n\n def __init__(self, hs):\n self.hs = hs\n self.auth = hs.get_auth()\n self.store = hs.get_datastore()\n self.clock = hs.get_clock()\n\n self.pagination_lock = ReadWriteLock()\n self._purges_in_progress_by_room = set()\n # map from purge id to PurgeStatus\n self._purges_by_id = {}\n\n def start_purge_history(self, room_id, token,\n delete_local_events=False):\n \"\"\"Start off a history purge on a room.\n\n Args:\n room_id (str): The room to purge from\n\n token (str): topological token to delete events before\n delete_local_events (bool): True to delete local events as well as\n remote ones\n\n Returns:\n str: unique ID for this purge transaction.\n \"\"\"\n if room_id in self._purges_in_progress_by_room:\n raise SynapseError(\n 400,\n \"History purge already in progress for %s\" % (room_id, ),\n )\n\n purge_id = random_string(16)\n\n # we log the purge_id here so that it can be tied back to the\n # request id in the log lines.\n logger.info(\"[purge] starting purge_id %s\", purge_id)\n\n self._purges_by_id[purge_id] = PurgeStatus()\n run_in_background(\n self._purge_history,\n purge_id, room_id, token, delete_local_events,\n )\n return purge_id\n\n @defer.inlineCallbacks\n def _purge_history(self, purge_id, room_id, token,\n delete_local_events):\n \"\"\"Carry out a history purge on a room.\n\n Args:\n purge_id (str): The id for this purge\n room_id (str): The room to purge from\n token (str): topological token to delete events before\n delete_local_events (bool): True to delete local events as well as\n remote ones\n\n Returns:\n Deferred\n \"\"\"\n self._purges_in_progress_by_room.add(room_id)\n try:\n with (yield self.pagination_lock.write(room_id)):\n yield self.store.purge_history(\n room_id, token, delete_local_events,\n )\n logger.info(\"[purge] complete\")\n self._purges_by_id[purge_id].status = PurgeStatus.STATUS_COMPLETE\n except Exception:\n logger.error(\"[purge] failed: %s\", Failure().getTraceback().rstrip())\n self._purges_by_id[purge_id].status = PurgeStatus.STATUS_FAILED\n finally:\n self._purges_in_progress_by_room.discard(room_id)\n\n # remove the purge from the list 24 hours after it completes\n def clear_purge():\n del self._purges_by_id[purge_id]\n self.hs.get_reactor().callLater(24 * 3600, clear_purge)\n\n def get_purge_status(self, purge_id):\n \"\"\"Get the current status of an active purge\n\n Args:\n purge_id (str): purge_id returned by start_purge_history\n\n Returns:\n PurgeStatus|None\n \"\"\"\n return self._purges_by_id.get(purge_id)\n\n @defer.inlineCallbacks\n def get_messages(self, requester, room_id=None, pagin_config=None,\n as_client_event=True, event_filter=None):\n \"\"\"Get messages in a room.\n\n Args:\n requester (Requester): The user requesting messages.\n room_id (str): The room they want messages from.\n pagin_config (synapse.api.streams.PaginationConfig): The pagination\n config rules to apply, if any.\n as_client_event (bool): True to get events in client-server 
format.\n event_filter (Filter): Filter to apply to results or None\n Returns:\n dict: Pagination API results\n \"\"\"\n user_id = requester.user.to_string()\n\n if pagin_config.from_token:\n room_token = pagin_config.from_token.room_key\n else:\n pagin_config.from_token = (\n yield self.hs.get_event_sources().get_current_token_for_room(\n room_id=room_id\n )\n )\n room_token = pagin_config.from_token.room_key\n\n room_token = RoomStreamToken.parse(room_token)\n\n pagin_config.from_token = pagin_config.from_token.copy_and_replace(\n \"room_key\", str(room_token)\n )\n\n source_config = pagin_config.get_source_config(\"room\")\n\n with (yield self.pagination_lock.read(room_id)):\n membership, member_event_id = yield self.auth.check_in_room_or_world_readable(\n room_id, user_id\n )\n\n if source_config.direction == 'b':\n # if we're going backwards, we might need to backfill. This\n # requires that we have a topo token.\n if room_token.topological:\n max_topo = room_token.topological\n else:\n max_topo = yield self.store.get_max_topological_token(\n room_id, room_token.stream\n )\n\n if membership == Membership.LEAVE:\n # If they have left the room then clamp the token to be before\n # they left the room, to save the effort of loading from the\n # database.\n leave_token = yield self.store.get_topological_token_for_event(\n member_event_id\n )\n leave_token = RoomStreamToken.parse(leave_token)\n if leave_token.topological < max_topo:\n source_config.from_key = str(leave_token)\n\n yield self.hs.get_handlers().federation_handler.maybe_backfill(\n room_id, max_topo\n )\n\n events, next_key = yield self.store.paginate_room_events(\n room_id=room_id,\n from_key=source_config.from_key,\n to_key=source_config.to_key,\n direction=source_config.direction,\n limit=source_config.limit,\n event_filter=event_filter,\n )\n\n next_token = pagin_config.from_token.copy_and_replace(\n \"room_key\", next_key\n )\n\n if not events:\n defer.returnValue({\n \"chunk\": [],\n \"start\": pagin_config.from_token.to_string(),\n \"end\": next_token.to_string(),\n })\n\n if event_filter:\n events = event_filter.filter(events)\n\n events = yield filter_events_for_client(\n self.store,\n user_id,\n events,\n is_peeking=(member_event_id is None),\n )\n\n state = None\n if event_filter and event_filter.lazy_load_members():\n # TODO: remove redundant members\n\n # FIXME: we also care about invite targets etc.\n state_filter = StateFilter.from_types(\n (EventTypes.Member, event.sender)\n for event in events\n )\n\n state_ids = yield self.store.get_state_ids_for_event(\n events[0].event_id, state_filter=state_filter,\n )\n\n if state_ids:\n state = yield self.store.get_events(list(state_ids.values()))\n state = state.values()\n\n time_now = self.clock.time_msec()\n\n chunk = {\n \"chunk\": [\n serialize_event(e, time_now, as_client_event)\n for e in events\n ],\n \"start\": pagin_config.from_token.to_string(),\n \"end\": next_token.to_string(),\n }\n\n if state:\n chunk[\"state\"] = [\n serialize_event(e, time_now, as_client_event)\n for e in state\n ]\n\n defer.returnValue(chunk)\n", "path": "synapse/handlers/pagination.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2014 - 2016 OpenMarket Ltd\n# Copyright 2017 - 2018 New Vector Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law 
or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport logging\n\nfrom twisted.internet import defer\nfrom twisted.python.failure import Failure\n\nfrom synapse.api.constants import EventTypes, Membership\nfrom synapse.api.errors import SynapseError\nfrom synapse.events.utils import serialize_event\nfrom synapse.storage.state import StateFilter\nfrom synapse.types import RoomStreamToken\nfrom synapse.util.async_helpers import ReadWriteLock\nfrom synapse.util.logcontext import run_in_background\nfrom synapse.util.stringutils import random_string\nfrom synapse.visibility import filter_events_for_client\n\nlogger = logging.getLogger(__name__)\n\n\nclass PurgeStatus(object):\n \"\"\"Object tracking the status of a purge request\n\n This class contains information on the progress of a purge request, for\n return by get_purge_status.\n\n Attributes:\n status (int): Tracks whether this request has completed. One of\n STATUS_{ACTIVE,COMPLETE,FAILED}\n \"\"\"\n\n STATUS_ACTIVE = 0\n STATUS_COMPLETE = 1\n STATUS_FAILED = 2\n\n STATUS_TEXT = {\n STATUS_ACTIVE: \"active\",\n STATUS_COMPLETE: \"complete\",\n STATUS_FAILED: \"failed\",\n }\n\n def __init__(self):\n self.status = PurgeStatus.STATUS_ACTIVE\n\n def asdict(self):\n return {\n \"status\": PurgeStatus.STATUS_TEXT[self.status]\n }\n\n\nclass PaginationHandler(object):\n \"\"\"Handles pagination and purge history requests.\n\n These are in the same handler due to the fact we need to block clients\n paginating during a purge.\n \"\"\"\n\n def __init__(self, hs):\n self.hs = hs\n self.auth = hs.get_auth()\n self.store = hs.get_datastore()\n self.clock = hs.get_clock()\n\n self.pagination_lock = ReadWriteLock()\n self._purges_in_progress_by_room = set()\n # map from purge id to PurgeStatus\n self._purges_by_id = {}\n\n def start_purge_history(self, room_id, token,\n delete_local_events=False):\n \"\"\"Start off a history purge on a room.\n\n Args:\n room_id (str): The room to purge from\n\n token (str): topological token to delete events before\n delete_local_events (bool): True to delete local events as well as\n remote ones\n\n Returns:\n str: unique ID for this purge transaction.\n \"\"\"\n if room_id in self._purges_in_progress_by_room:\n raise SynapseError(\n 400,\n \"History purge already in progress for %s\" % (room_id, ),\n )\n\n purge_id = random_string(16)\n\n # we log the purge_id here so that it can be tied back to the\n # request id in the log lines.\n logger.info(\"[purge] starting purge_id %s\", purge_id)\n\n self._purges_by_id[purge_id] = PurgeStatus()\n run_in_background(\n self._purge_history,\n purge_id, room_id, token, delete_local_events,\n )\n return purge_id\n\n @defer.inlineCallbacks\n def _purge_history(self, purge_id, room_id, token,\n delete_local_events):\n \"\"\"Carry out a history purge on a room.\n\n Args:\n purge_id (str): The id for this purge\n room_id (str): The room to purge from\n token (str): topological token to delete events before\n delete_local_events (bool): True to delete local events as well as\n remote ones\n\n Returns:\n Deferred\n \"\"\"\n self._purges_in_progress_by_room.add(room_id)\n try:\n with (yield self.pagination_lock.write(room_id)):\n yield self.store.purge_history(\n room_id, token, delete_local_events,\n )\n logger.info(\"[purge] complete\")\n 
self._purges_by_id[purge_id].status = PurgeStatus.STATUS_COMPLETE\n except Exception:\n logger.error(\"[purge] failed: %s\", Failure().getTraceback().rstrip())\n self._purges_by_id[purge_id].status = PurgeStatus.STATUS_FAILED\n finally:\n self._purges_in_progress_by_room.discard(room_id)\n\n # remove the purge from the list 24 hours after it completes\n def clear_purge():\n del self._purges_by_id[purge_id]\n self.hs.get_reactor().callLater(24 * 3600, clear_purge)\n\n def get_purge_status(self, purge_id):\n \"\"\"Get the current status of an active purge\n\n Args:\n purge_id (str): purge_id returned by start_purge_history\n\n Returns:\n PurgeStatus|None\n \"\"\"\n return self._purges_by_id.get(purge_id)\n\n @defer.inlineCallbacks\n def get_messages(self, requester, room_id=None, pagin_config=None,\n as_client_event=True, event_filter=None):\n \"\"\"Get messages in a room.\n\n Args:\n requester (Requester): The user requesting messages.\n room_id (str): The room they want messages from.\n pagin_config (synapse.api.streams.PaginationConfig): The pagination\n config rules to apply, if any.\n as_client_event (bool): True to get events in client-server format.\n event_filter (Filter): Filter to apply to results or None\n Returns:\n dict: Pagination API results\n \"\"\"\n user_id = requester.user.to_string()\n\n if pagin_config.from_token:\n room_token = pagin_config.from_token.room_key\n else:\n pagin_config.from_token = (\n yield self.hs.get_event_sources().get_current_token_for_room(\n room_id=room_id\n )\n )\n room_token = pagin_config.from_token.room_key\n\n room_token = RoomStreamToken.parse(room_token)\n\n pagin_config.from_token = pagin_config.from_token.copy_and_replace(\n \"room_key\", str(room_token)\n )\n\n source_config = pagin_config.get_source_config(\"room\")\n\n with (yield self.pagination_lock.read(room_id)):\n membership, member_event_id = yield self.auth.check_in_room_or_world_readable(\n room_id, user_id\n )\n\n if source_config.direction == 'b':\n # if we're going backwards, we might need to backfill. 
This\n # requires that we have a topo token.\n if room_token.topological:\n max_topo = room_token.topological\n else:\n max_topo = yield self.store.get_max_topological_token(\n room_id, room_token.stream\n )\n\n if membership == Membership.LEAVE:\n # If they have left the room then clamp the token to be before\n # they left the room, to save the effort of loading from the\n # database.\n leave_token = yield self.store.get_topological_token_for_event(\n member_event_id\n )\n leave_token = RoomStreamToken.parse(leave_token)\n if leave_token.topological < max_topo:\n source_config.from_key = str(leave_token)\n\n yield self.hs.get_handlers().federation_handler.maybe_backfill(\n room_id, max_topo\n )\n\n events, next_key = yield self.store.paginate_room_events(\n room_id=room_id,\n from_key=source_config.from_key,\n to_key=source_config.to_key,\n direction=source_config.direction,\n limit=source_config.limit,\n event_filter=event_filter,\n )\n\n next_token = pagin_config.from_token.copy_and_replace(\n \"room_key\", next_key\n )\n\n if events:\n if event_filter:\n events = event_filter.filter(events)\n\n events = yield filter_events_for_client(\n self.store,\n user_id,\n events,\n is_peeking=(member_event_id is None),\n )\n\n if not events:\n defer.returnValue({\n \"chunk\": [],\n \"start\": pagin_config.from_token.to_string(),\n \"end\": next_token.to_string(),\n })\n\n state = None\n if event_filter and event_filter.lazy_load_members():\n # TODO: remove redundant members\n\n # FIXME: we also care about invite targets etc.\n state_filter = StateFilter.from_types(\n (EventTypes.Member, event.sender)\n for event in events\n )\n\n state_ids = yield self.store.get_state_ids_for_event(\n events[0].event_id, state_filter=state_filter,\n )\n\n if state_ids:\n state = yield self.store.get_events(list(state_ids.values()))\n state = state.values()\n\n time_now = self.clock.time_msec()\n\n chunk = {\n \"chunk\": [\n serialize_event(e, time_now, as_client_event)\n for e in events\n ],\n \"start\": pagin_config.from_token.to_string(),\n \"end\": next_token.to_string(),\n }\n\n if state:\n chunk[\"state\"] = [\n serialize_event(e, time_now, as_client_event)\n for e in state\n ]\n\n defer.returnValue(chunk)\n", "path": "synapse/handlers/pagination.py"}]} | 3,162 | 258 |
gh_patches_debug_23812 | rasdani/github-patches | git_diff | interlegis__sapl-2091 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Automatic numbering of legal norms (Norma Jurídica)
Taken from ticket no. 132457
"Good morning,
I found the automatic numbering of legal norms in SAPL 3.1 very useful.
Another new feature I really liked is that laws with a letter at the end, such as "lei 2133A", are now accepted.
However, when I insert a law with a letter, the auto-numbering of the following laws stops working.
I therefore kindly ask you to review this problem.
Best regards,
Marcos F. Scher."
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sapl/norma/views.py`
Content:
```
1
2 import weasyprint
3 from django.contrib.auth.mixins import PermissionRequiredMixin
4 from django.core.exceptions import ObjectDoesNotExist
5 from django.core.urlresolvers import reverse
6 from django.http import HttpResponse, JsonResponse
7 from django.template import RequestContext, loader
8 from django.utils import timezone
9 from django.utils.translation import ugettext_lazy as _
10 from django.views.generic import TemplateView, UpdateView
11 from django.views.generic.base import RedirectView
12 from django.views.generic.edit import FormView
13 from django_filters.views import FilterView
14
15 from sapl.base.models import AppConfig
16 from sapl.compilacao.views import IntegracaoTaView
17 from sapl.crud.base import (RP_DETAIL, RP_LIST, Crud, CrudAux,
18 MasterDetailCrud, make_pagination)
19 from sapl.utils import show_results_filter_set
20
21 from .forms import (NormaFilterSet, NormaJuridicaForm,
22 NormaPesquisaSimplesForm, NormaRelacionadaForm)
23 from .models import (AssuntoNorma, NormaJuridica, NormaRelacionada,
24 TipoNormaJuridica, TipoVinculoNormaJuridica)
25
26 # LegislacaoCitadaCrud = Crud.build(LegislacaoCitada, '')
27 AssuntoNormaCrud = CrudAux.build(AssuntoNorma, 'assunto_norma_juridica',
28 list_field_names=['assunto', 'descricao'])
29
30
31 TipoNormaCrud = CrudAux.build(
32 TipoNormaJuridica, 'tipo_norma_juridica',
33 list_field_names=['sigla', 'descricao', 'equivalente_lexml'])
34 TipoVinculoNormaJuridicaCrud = CrudAux.build(
35 TipoVinculoNormaJuridica, '',
36 list_field_names=['sigla', 'descricao_ativa', 'descricao_passiva'])
37
38
39 class NormaRelacionadaCrud(MasterDetailCrud):
40 model = NormaRelacionada
41 parent_field = 'norma_principal'
42 help_topic = 'norma_juridica'
43
44 class BaseMixin(MasterDetailCrud.BaseMixin):
45 list_field_names = ['norma_relacionada', 'tipo_vinculo']
46
47 class CreateView(MasterDetailCrud.CreateView):
48 form_class = NormaRelacionadaForm
49
50 class UpdateView(MasterDetailCrud.UpdateView):
51 form_class = NormaRelacionadaForm
52
53 def get_initial(self):
54 initial = super(UpdateView, self).get_initial()
55 initial['tipo'] = self.object.norma_relacionada.tipo.id
56 initial['numero'] = self.object.norma_relacionada.numero
57 initial['ano'] = self.object.norma_relacionada.ano
58 initial['ementa'] = self.object.norma_relacionada.ementa
59 return initial
60
61 class DetailView(MasterDetailCrud.DetailView):
62
63 layout_key = 'NormaRelacionadaDetail'
64
65
66 class NormaPesquisaView(FilterView):
67 model = NormaJuridica
68 filterset_class = NormaFilterSet
69 paginate_by = 10
70
71 def get_queryset(self):
72 qs = super().get_queryset()
73
74 qs.select_related('tipo', 'materia')
75
76 return qs
77
78 def get_context_data(self, **kwargs):
79 context = super(NormaPesquisaView, self).get_context_data(**kwargs)
80
81 context['title'] = _('Pesquisar Norma Jurídica')
82
83 qr = self.request.GET.copy()
84
85 if 'page' in qr:
86 del qr['page']
87
88 paginator = context['paginator']
89 page_obj = context['page_obj']
90
91 context['page_range'] = make_pagination(
92 page_obj.number, paginator.num_pages)
93
94 context['filter_url'] = ('&' + qr.urlencode()) if len(qr) > 0 else ''
95
96 context['show_results'] = show_results_filter_set(qr)
97
98 return context
99
100
101 class NormaTaView(IntegracaoTaView):
102 model = NormaJuridica
103 model_type_foreignkey = TipoNormaJuridica
104 map_fields = {
105 'data': 'data',
106 'ementa': 'ementa',
107 'observacao': 'observacao',
108 'numero': 'numero',
109 'ano': 'ano',
110 }
111
112 map_funcs = {
113 'publicacao_func': True
114 }
115
116 def get(self, request, *args, **kwargs):
117 """
118 Para manter a app compilacao isolada das outras aplicações,
119 este get foi implementado para tratar uma prerrogativa externa
120 de usuário.
121 """
122 if AppConfig.attr('texto_articulado_norma'):
123 return IntegracaoTaView.get(self, request, *args, **kwargs)
124 else:
125 return self.get_redirect_deactivated()
126
127
128 class NormaCrud(Crud):
129 model = NormaJuridica
130 help_topic = 'norma_juridica'
131 public = [RP_LIST, RP_DETAIL]
132
133 class BaseMixin(Crud.BaseMixin):
134 list_field_names = ['tipo', 'numero', 'ano', 'ementa']
135
136 list_url = ''
137
138 @property
139 def search_url(self):
140 namespace = self.model._meta.app_config.name
141 return reverse('%s:%s' % (namespace, 'norma_pesquisa'))
142
143 class DetailView(Crud.DetailView):
144 pass
145
146 class DeleteView(Crud.DeleteView):
147
148 def get_success_url(self):
149 return self.search_url
150
151 class CreateView(Crud.CreateView):
152 form_class = NormaJuridicaForm
153
154 @property
155 def cancel_url(self):
156 return self.search_url
157
158 layout_key = 'NormaJuridicaCreate'
159
160 class ListView(Crud.ListView, RedirectView):
161
162 def get_redirect_url(self, *args, **kwargs):
163 namespace = self.model._meta.app_config.name
164 return reverse('%s:%s' % (namespace, 'norma_pesquisa'))
165
166 def get(self, request, *args, **kwargs):
167 return RedirectView.get(self, request, *args, **kwargs)
168
169 class UpdateView(Crud.UpdateView):
170 form_class = NormaJuridicaForm
171
172 layout_key = 'NormaJuridicaCreate'
173
174 def get_initial(self):
175 initial = super(UpdateView, self).get_initial()
176 norma = NormaJuridica.objects.get(id=self.kwargs['pk'])
177 if norma.materia:
178 initial['tipo_materia'] = norma.materia.tipo
179 initial['ano_materia'] = norma.materia.ano
180 initial['numero_materia'] = norma.materia.numero
181 return initial
182
183
184 def recuperar_norma(request):
185 tipo = TipoNormaJuridica.objects.get(pk=request.GET['tipo'])
186 numero = request.GET['numero']
187 ano = request.GET['ano']
188
189 try:
190 norma = NormaJuridica.objects.get(tipo=tipo,
191 ano=ano,
192 numero=numero)
193 response = JsonResponse({'ementa': norma.ementa,
194 'id': norma.id})
195 except ObjectDoesNotExist:
196 response = JsonResponse({'ementa': '', 'id': 0})
197
198 return response
199
200
201 def recuperar_numero_norma(request):
202 tipo = TipoNormaJuridica.objects.get(pk=request.GET['tipo'])
203 ano = request.GET.get('ano', '')
204
205 param = {'tipo': tipo}
206 param['ano'] = ano if ano else timezone.now().year
207 norma = NormaJuridica.objects.filter(**param).extra(
208 {'numero_id': "CAST(numero as INTEGER)"}).order_by(
209 'tipo', 'ano','numero_id').values_list('numero', 'ano').last()
210 if norma:
211 response = JsonResponse({'numero': int(norma[0]) + 1,
212 'ano': norma[1]})
213 else:
214 response = JsonResponse(
215 {'numero': 1, 'ano': ano})
216
217 return response
218
219
220 class ImpressosView(PermissionRequiredMixin, TemplateView):
221 template_name = 'materia/impressos/impressos.html'
222 permission_required = ('materia.can_access_impressos', )
223
224
225 def gerar_pdf_impressos(request, context, template_name):
226 template = loader.get_template(template_name)
227 html = template.render(RequestContext(request, context))
228 pdf = weasyprint.HTML(string=html, base_url=request.build_absolute_uri()
229 ).write_pdf()
230
231 response = HttpResponse(pdf, content_type='application/pdf')
232 response['Content-Disposition'] = (
233 'inline; filename="relatorio_impressos.pdf"')
234 response['Content-Transfer-Encoding'] = 'binary'
235
236 return response
237
238
239 class NormaPesquisaSimplesView(PermissionRequiredMixin, FormView):
240 form_class = NormaPesquisaSimplesForm
241 template_name = 'materia/impressos/norma.html'
242 permission_required = ('materia.can_access_impressos', )
243
244 def form_valid(self, form):
245 normas = NormaJuridica.objects.all().order_by(
246 'numero')
247 template_norma = 'materia/impressos/normas_pdf.html'
248
249 titulo = form.cleaned_data['titulo']
250
251 if form.cleaned_data['tipo_norma']:
252 normas = normas.filter(tipo=form.cleaned_data['tipo_norma'])
253
254 if form.cleaned_data['data_inicial']:
255 normas = normas.filter(
256 data__gte=form.cleaned_data['data_inicial'],
257 data__lte=form.cleaned_data['data_final'])
258
259 qtd_resultados = len(normas)
260 if qtd_resultados > 2000:
261 normas = normas[:2000]
262
263 context = {'quantidade': qtd_resultados,
264 'titulo': titulo,
265 'normas': normas}
266
267 return gerar_pdf_impressos(self.request, context, template_norma)
268
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sapl/norma/views.py b/sapl/norma/views.py
--- a/sapl/norma/views.py
+++ b/sapl/norma/views.py
@@ -1,4 +1,5 @@
+import re
import weasyprint
from django.contrib.auth.mixins import PermissionRequiredMixin
from django.core.exceptions import ObjectDoesNotExist
@@ -201,14 +202,12 @@
def recuperar_numero_norma(request):
tipo = TipoNormaJuridica.objects.get(pk=request.GET['tipo'])
ano = request.GET.get('ano', '')
-
param = {'tipo': tipo}
param['ano'] = ano if ano else timezone.now().year
- norma = NormaJuridica.objects.filter(**param).extra(
- {'numero_id': "CAST(numero as INTEGER)"}).order_by(
- 'tipo', 'ano','numero_id').values_list('numero', 'ano').last()
+ norma = NormaJuridica.objects.filter(**param).order_by(
+ 'tipo', 'ano').values_list('numero', 'ano').last()
if norma:
- response = JsonResponse({'numero': int(norma[0]) + 1,
+ response = JsonResponse({'numero': int(re.sub("[^0-9].*", '', norma[0])) + 1,
'ano': norma[1]})
else:
response = JsonResponse(
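The original query relied on a SQL `CAST(numero as INTEGER)`, which fails as soon as a norm number such as "2133A" exists; the patched code instead takes the last matching norm for the type and year and strips any non-digit suffix in Python before incrementing. A quick standalone check of the regex used in the patch follows (the helper name is illustrative, not part of the project):

```python
import re

def proximo_numero(ultimo_numero):
    """Drop everything from the first non-digit onwards, then add one."""
    return int(re.sub("[^0-9].*", "", ultimo_numero)) + 1

assert proximo_numero("2133") == 2134
assert proximo_numero("2133A") == 2134  # a letter suffix no longer breaks the conversion
print(proximo_numero("2133A"))
```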
| {"golden_diff": "diff --git a/sapl/norma/views.py b/sapl/norma/views.py\n--- a/sapl/norma/views.py\n+++ b/sapl/norma/views.py\n@@ -1,4 +1,5 @@\n \n+import re\n import weasyprint\n from django.contrib.auth.mixins import PermissionRequiredMixin\n from django.core.exceptions import ObjectDoesNotExist\n@@ -201,14 +202,12 @@\n def recuperar_numero_norma(request):\n tipo = TipoNormaJuridica.objects.get(pk=request.GET['tipo'])\n ano = request.GET.get('ano', '')\n-\n param = {'tipo': tipo}\n param['ano'] = ano if ano else timezone.now().year\n- norma = NormaJuridica.objects.filter(**param).extra(\n- {'numero_id': \"CAST(numero as INTEGER)\"}).order_by(\n- 'tipo', 'ano','numero_id').values_list('numero', 'ano').last()\n+ norma = NormaJuridica.objects.filter(**param).order_by(\n+ 'tipo', 'ano').values_list('numero', 'ano').last()\n if norma:\n- response = JsonResponse({'numero': int(norma[0]) + 1,\n+ response = JsonResponse({'numero': int(re.sub(\"[^0-9].*\", '', norma[0])) + 1,\n 'ano': norma[1]})\n else:\n response = JsonResponse(\n", "issue": "Auto-numera\u00e7\u00e3o de Norma Jur\u00eddica\nRetirado do t\u00edquete n\u00ba 132457\r\n\"Bom dia,\r\nAchei muito \u00fatil a funcionalidade de numera\u00e7\u00e3o autom\u00e1tica das normas jur\u00eddicas no SAPL 3.1\r\nOutra novidade que gostei muito \u00e9 a aceita\u00e7\u00e3o de leis com letra no final, do tipo \"lei 2133A\"\r\nPor\u00e9m, quando insiro alguma lei com letra, a auto-numera\u00e7\u00e3o das leis seguintes deixa de funcionar. \r\nPe\u00e7o ent\u00e3o que, por gentileza, revisem esse problema. \r\nAtenciosamente,\r\nMarcos F. Scher.\"\n", "before_files": [{"content": "\nimport weasyprint\nfrom django.contrib.auth.mixins import PermissionRequiredMixin\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.core.urlresolvers import reverse\nfrom django.http import HttpResponse, JsonResponse\nfrom django.template import RequestContext, loader\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.views.generic import TemplateView, UpdateView\nfrom django.views.generic.base import RedirectView\nfrom django.views.generic.edit import FormView\nfrom django_filters.views import FilterView\n\nfrom sapl.base.models import AppConfig\nfrom sapl.compilacao.views import IntegracaoTaView\nfrom sapl.crud.base import (RP_DETAIL, RP_LIST, Crud, CrudAux,\n MasterDetailCrud, make_pagination)\nfrom sapl.utils import show_results_filter_set\n\nfrom .forms import (NormaFilterSet, NormaJuridicaForm,\n NormaPesquisaSimplesForm, NormaRelacionadaForm)\nfrom .models import (AssuntoNorma, NormaJuridica, NormaRelacionada,\n TipoNormaJuridica, TipoVinculoNormaJuridica)\n\n# LegislacaoCitadaCrud = Crud.build(LegislacaoCitada, '')\nAssuntoNormaCrud = CrudAux.build(AssuntoNorma, 'assunto_norma_juridica',\n list_field_names=['assunto', 'descricao'])\n\n\nTipoNormaCrud = CrudAux.build(\n TipoNormaJuridica, 'tipo_norma_juridica',\n list_field_names=['sigla', 'descricao', 'equivalente_lexml'])\nTipoVinculoNormaJuridicaCrud = CrudAux.build(\n TipoVinculoNormaJuridica, '',\n list_field_names=['sigla', 'descricao_ativa', 'descricao_passiva'])\n\n\nclass NormaRelacionadaCrud(MasterDetailCrud):\n model = NormaRelacionada\n parent_field = 'norma_principal'\n help_topic = 'norma_juridica'\n\n class BaseMixin(MasterDetailCrud.BaseMixin):\n list_field_names = ['norma_relacionada', 'tipo_vinculo']\n\n class CreateView(MasterDetailCrud.CreateView):\n form_class = NormaRelacionadaForm\n\n class 
UpdateView(MasterDetailCrud.UpdateView):\n form_class = NormaRelacionadaForm\n\n def get_initial(self):\n initial = super(UpdateView, self).get_initial()\n initial['tipo'] = self.object.norma_relacionada.tipo.id\n initial['numero'] = self.object.norma_relacionada.numero\n initial['ano'] = self.object.norma_relacionada.ano\n initial['ementa'] = self.object.norma_relacionada.ementa\n return initial\n\n class DetailView(MasterDetailCrud.DetailView):\n\n layout_key = 'NormaRelacionadaDetail'\n\n\nclass NormaPesquisaView(FilterView):\n model = NormaJuridica\n filterset_class = NormaFilterSet\n paginate_by = 10\n\n def get_queryset(self):\n qs = super().get_queryset()\n\n qs.select_related('tipo', 'materia')\n\n return qs\n\n def get_context_data(self, **kwargs):\n context = super(NormaPesquisaView, self).get_context_data(**kwargs)\n\n context['title'] = _('Pesquisar Norma Jur\u00eddica')\n\n qr = self.request.GET.copy()\n\n if 'page' in qr:\n del qr['page']\n\n paginator = context['paginator']\n page_obj = context['page_obj']\n\n context['page_range'] = make_pagination(\n page_obj.number, paginator.num_pages)\n\n context['filter_url'] = ('&' + qr.urlencode()) if len(qr) > 0 else ''\n\n context['show_results'] = show_results_filter_set(qr)\n\n return context\n\n\nclass NormaTaView(IntegracaoTaView):\n model = NormaJuridica\n model_type_foreignkey = TipoNormaJuridica\n map_fields = {\n 'data': 'data',\n 'ementa': 'ementa',\n 'observacao': 'observacao',\n 'numero': 'numero',\n 'ano': 'ano',\n }\n\n map_funcs = {\n 'publicacao_func': True\n }\n\n def get(self, request, *args, **kwargs):\n \"\"\"\n Para manter a app compilacao isolada das outras aplica\u00e7\u00f5es,\n este get foi implementado para tratar uma prerrogativa externa\n de usu\u00e1rio.\n \"\"\"\n if AppConfig.attr('texto_articulado_norma'):\n return IntegracaoTaView.get(self, request, *args, **kwargs)\n else:\n return self.get_redirect_deactivated()\n\n\nclass NormaCrud(Crud):\n model = NormaJuridica\n help_topic = 'norma_juridica'\n public = [RP_LIST, RP_DETAIL]\n\n class BaseMixin(Crud.BaseMixin):\n list_field_names = ['tipo', 'numero', 'ano', 'ementa']\n\n list_url = ''\n\n @property\n def search_url(self):\n namespace = self.model._meta.app_config.name\n return reverse('%s:%s' % (namespace, 'norma_pesquisa'))\n\n class DetailView(Crud.DetailView):\n pass\n\n class DeleteView(Crud.DeleteView):\n\n def get_success_url(self):\n return self.search_url\n\n class CreateView(Crud.CreateView):\n form_class = NormaJuridicaForm\n\n @property\n def cancel_url(self):\n return self.search_url\n\n layout_key = 'NormaJuridicaCreate'\n\n class ListView(Crud.ListView, RedirectView):\n\n def get_redirect_url(self, *args, **kwargs):\n namespace = self.model._meta.app_config.name\n return reverse('%s:%s' % (namespace, 'norma_pesquisa'))\n\n def get(self, request, *args, **kwargs):\n return RedirectView.get(self, request, *args, **kwargs)\n\n class UpdateView(Crud.UpdateView):\n form_class = NormaJuridicaForm\n\n layout_key = 'NormaJuridicaCreate'\n\n def get_initial(self):\n initial = super(UpdateView, self).get_initial()\n norma = NormaJuridica.objects.get(id=self.kwargs['pk'])\n if norma.materia:\n initial['tipo_materia'] = norma.materia.tipo\n initial['ano_materia'] = norma.materia.ano\n initial['numero_materia'] = norma.materia.numero\n return initial\n\n\ndef recuperar_norma(request):\n tipo = TipoNormaJuridica.objects.get(pk=request.GET['tipo'])\n numero = request.GET['numero']\n ano = request.GET['ano']\n\n try:\n norma = 
NormaJuridica.objects.get(tipo=tipo,\n ano=ano,\n numero=numero)\n response = JsonResponse({'ementa': norma.ementa,\n 'id': norma.id})\n except ObjectDoesNotExist:\n response = JsonResponse({'ementa': '', 'id': 0})\n\n return response\n\n\ndef recuperar_numero_norma(request):\n tipo = TipoNormaJuridica.objects.get(pk=request.GET['tipo'])\n ano = request.GET.get('ano', '')\n\n param = {'tipo': tipo}\n param['ano'] = ano if ano else timezone.now().year\n norma = NormaJuridica.objects.filter(**param).extra(\n {'numero_id': \"CAST(numero as INTEGER)\"}).order_by(\n 'tipo', 'ano','numero_id').values_list('numero', 'ano').last()\n if norma:\n response = JsonResponse({'numero': int(norma[0]) + 1,\n 'ano': norma[1]})\n else:\n response = JsonResponse(\n {'numero': 1, 'ano': ano})\n\n return response\n\n\nclass ImpressosView(PermissionRequiredMixin, TemplateView):\n template_name = 'materia/impressos/impressos.html'\n permission_required = ('materia.can_access_impressos', )\n\n\ndef gerar_pdf_impressos(request, context, template_name):\n template = loader.get_template(template_name)\n html = template.render(RequestContext(request, context))\n pdf = weasyprint.HTML(string=html, base_url=request.build_absolute_uri()\n ).write_pdf()\n\n response = HttpResponse(pdf, content_type='application/pdf')\n response['Content-Disposition'] = (\n 'inline; filename=\"relatorio_impressos.pdf\"')\n response['Content-Transfer-Encoding'] = 'binary'\n\n return response\n\n\nclass NormaPesquisaSimplesView(PermissionRequiredMixin, FormView):\n form_class = NormaPesquisaSimplesForm\n template_name = 'materia/impressos/norma.html'\n permission_required = ('materia.can_access_impressos', )\n\n def form_valid(self, form):\n normas = NormaJuridica.objects.all().order_by(\n 'numero')\n template_norma = 'materia/impressos/normas_pdf.html'\n\n titulo = form.cleaned_data['titulo']\n\n if form.cleaned_data['tipo_norma']:\n normas = normas.filter(tipo=form.cleaned_data['tipo_norma'])\n\n if form.cleaned_data['data_inicial']:\n normas = normas.filter(\n data__gte=form.cleaned_data['data_inicial'],\n data__lte=form.cleaned_data['data_final'])\n\n qtd_resultados = len(normas)\n if qtd_resultados > 2000:\n normas = normas[:2000]\n\n context = {'quantidade': qtd_resultados,\n 'titulo': titulo,\n 'normas': normas}\n\n return gerar_pdf_impressos(self.request, context, template_norma)\n", "path": "sapl/norma/views.py"}], "after_files": [{"content": "\nimport re\nimport weasyprint\nfrom django.contrib.auth.mixins import PermissionRequiredMixin\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.core.urlresolvers import reverse\nfrom django.http import HttpResponse, JsonResponse\nfrom django.template import RequestContext, loader\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.views.generic import TemplateView, UpdateView\nfrom django.views.generic.base import RedirectView\nfrom django.views.generic.edit import FormView\nfrom django_filters.views import FilterView\n\nfrom sapl.base.models import AppConfig\nfrom sapl.compilacao.views import IntegracaoTaView\nfrom sapl.crud.base import (RP_DETAIL, RP_LIST, Crud, CrudAux,\n MasterDetailCrud, make_pagination)\nfrom sapl.utils import show_results_filter_set\n\nfrom .forms import (NormaFilterSet, NormaJuridicaForm,\n NormaPesquisaSimplesForm, NormaRelacionadaForm)\nfrom .models import (AssuntoNorma, NormaJuridica, NormaRelacionada,\n TipoNormaJuridica, TipoVinculoNormaJuridica)\n\n# LegislacaoCitadaCrud = 
Crud.build(LegislacaoCitada, '')\nAssuntoNormaCrud = CrudAux.build(AssuntoNorma, 'assunto_norma_juridica',\n list_field_names=['assunto', 'descricao'])\n\n\nTipoNormaCrud = CrudAux.build(\n TipoNormaJuridica, 'tipo_norma_juridica',\n list_field_names=['sigla', 'descricao', 'equivalente_lexml'])\nTipoVinculoNormaJuridicaCrud = CrudAux.build(\n TipoVinculoNormaJuridica, '',\n list_field_names=['sigla', 'descricao_ativa', 'descricao_passiva'])\n\n\nclass NormaRelacionadaCrud(MasterDetailCrud):\n model = NormaRelacionada\n parent_field = 'norma_principal'\n help_topic = 'norma_juridica'\n\n class BaseMixin(MasterDetailCrud.BaseMixin):\n list_field_names = ['norma_relacionada', 'tipo_vinculo']\n\n class CreateView(MasterDetailCrud.CreateView):\n form_class = NormaRelacionadaForm\n\n class UpdateView(MasterDetailCrud.UpdateView):\n form_class = NormaRelacionadaForm\n\n def get_initial(self):\n initial = super(UpdateView, self).get_initial()\n initial['tipo'] = self.object.norma_relacionada.tipo.id\n initial['numero'] = self.object.norma_relacionada.numero\n initial['ano'] = self.object.norma_relacionada.ano\n initial['ementa'] = self.object.norma_relacionada.ementa\n return initial\n\n class DetailView(MasterDetailCrud.DetailView):\n\n layout_key = 'NormaRelacionadaDetail'\n\n\nclass NormaPesquisaView(FilterView):\n model = NormaJuridica\n filterset_class = NormaFilterSet\n paginate_by = 10\n\n def get_queryset(self):\n qs = super().get_queryset()\n\n qs.select_related('tipo', 'materia')\n\n return qs\n\n def get_context_data(self, **kwargs):\n context = super(NormaPesquisaView, self).get_context_data(**kwargs)\n\n context['title'] = _('Pesquisar Norma Jur\u00eddica')\n\n qr = self.request.GET.copy()\n\n if 'page' in qr:\n del qr['page']\n\n paginator = context['paginator']\n page_obj = context['page_obj']\n\n context['page_range'] = make_pagination(\n page_obj.number, paginator.num_pages)\n\n context['filter_url'] = ('&' + qr.urlencode()) if len(qr) > 0 else ''\n\n context['show_results'] = show_results_filter_set(qr)\n\n return context\n\n\nclass NormaTaView(IntegracaoTaView):\n model = NormaJuridica\n model_type_foreignkey = TipoNormaJuridica\n map_fields = {\n 'data': 'data',\n 'ementa': 'ementa',\n 'observacao': 'observacao',\n 'numero': 'numero',\n 'ano': 'ano',\n }\n\n map_funcs = {\n 'publicacao_func': True\n }\n\n def get(self, request, *args, **kwargs):\n \"\"\"\n Para manter a app compilacao isolada das outras aplica\u00e7\u00f5es,\n este get foi implementado para tratar uma prerrogativa externa\n de usu\u00e1rio.\n \"\"\"\n if AppConfig.attr('texto_articulado_norma'):\n return IntegracaoTaView.get(self, request, *args, **kwargs)\n else:\n return self.get_redirect_deactivated()\n\n\nclass NormaCrud(Crud):\n model = NormaJuridica\n help_topic = 'norma_juridica'\n public = [RP_LIST, RP_DETAIL]\n\n class BaseMixin(Crud.BaseMixin):\n list_field_names = ['tipo', 'numero', 'ano', 'ementa']\n\n list_url = ''\n\n @property\n def search_url(self):\n namespace = self.model._meta.app_config.name\n return reverse('%s:%s' % (namespace, 'norma_pesquisa'))\n\n class DetailView(Crud.DetailView):\n pass\n\n class DeleteView(Crud.DeleteView):\n\n def get_success_url(self):\n return self.search_url\n\n class CreateView(Crud.CreateView):\n form_class = NormaJuridicaForm\n\n @property\n def cancel_url(self):\n return self.search_url\n\n layout_key = 'NormaJuridicaCreate'\n\n class ListView(Crud.ListView, RedirectView):\n\n def get_redirect_url(self, *args, **kwargs):\n namespace = 
self.model._meta.app_config.name\n return reverse('%s:%s' % (namespace, 'norma_pesquisa'))\n\n def get(self, request, *args, **kwargs):\n return RedirectView.get(self, request, *args, **kwargs)\n\n class UpdateView(Crud.UpdateView):\n form_class = NormaJuridicaForm\n\n layout_key = 'NormaJuridicaCreate'\n\n def get_initial(self):\n initial = super(UpdateView, self).get_initial()\n norma = NormaJuridica.objects.get(id=self.kwargs['pk'])\n if norma.materia:\n initial['tipo_materia'] = norma.materia.tipo\n initial['ano_materia'] = norma.materia.ano\n initial['numero_materia'] = norma.materia.numero\n return initial\n\n\ndef recuperar_norma(request):\n tipo = TipoNormaJuridica.objects.get(pk=request.GET['tipo'])\n numero = request.GET['numero']\n ano = request.GET['ano']\n\n try:\n norma = NormaJuridica.objects.get(tipo=tipo,\n ano=ano,\n numero=numero)\n response = JsonResponse({'ementa': norma.ementa,\n 'id': norma.id})\n except ObjectDoesNotExist:\n response = JsonResponse({'ementa': '', 'id': 0})\n\n return response\n\n\ndef recuperar_numero_norma(request):\n tipo = TipoNormaJuridica.objects.get(pk=request.GET['tipo'])\n ano = request.GET.get('ano', '')\n param = {'tipo': tipo}\n param['ano'] = ano if ano else timezone.now().year\n norma = NormaJuridica.objects.filter(**param).order_by(\n 'tipo', 'ano').values_list('numero', 'ano').last()\n if norma:\n response = JsonResponse({'numero': int(re.sub(\"[^0-9].*\", '', norma[0])) + 1,\n 'ano': norma[1]})\n else:\n response = JsonResponse(\n {'numero': 1, 'ano': ano})\n\n return response\n\n\nclass ImpressosView(PermissionRequiredMixin, TemplateView):\n template_name = 'materia/impressos/impressos.html'\n permission_required = ('materia.can_access_impressos', )\n\n\ndef gerar_pdf_impressos(request, context, template_name):\n template = loader.get_template(template_name)\n html = template.render(RequestContext(request, context))\n pdf = weasyprint.HTML(string=html, base_url=request.build_absolute_uri()\n ).write_pdf()\n\n response = HttpResponse(pdf, content_type='application/pdf')\n response['Content-Disposition'] = (\n 'inline; filename=\"relatorio_impressos.pdf\"')\n response['Content-Transfer-Encoding'] = 'binary'\n\n return response\n\n\nclass NormaPesquisaSimplesView(PermissionRequiredMixin, FormView):\n form_class = NormaPesquisaSimplesForm\n template_name = 'materia/impressos/norma.html'\n permission_required = ('materia.can_access_impressos', )\n\n def form_valid(self, form):\n normas = NormaJuridica.objects.all().order_by(\n 'numero')\n template_norma = 'materia/impressos/normas_pdf.html'\n\n titulo = form.cleaned_data['titulo']\n\n if form.cleaned_data['tipo_norma']:\n normas = normas.filter(tipo=form.cleaned_data['tipo_norma'])\n\n if form.cleaned_data['data_inicial']:\n normas = normas.filter(\n data__gte=form.cleaned_data['data_inicial'],\n data__lte=form.cleaned_data['data_final'])\n\n qtd_resultados = len(normas)\n if qtd_resultados > 2000:\n normas = normas[:2000]\n\n context = {'quantidade': qtd_resultados,\n 'titulo': titulo,\n 'normas': normas}\n\n return gerar_pdf_impressos(self.request, context, template_norma)\n", "path": "sapl/norma/views.py"}]} | 3,250 | 313 |
gh_patches_debug_14947 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-751 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
RegexHandler error with stickers, voices, images...
RegexHandler does not check if update.effective_message.text exists.
### Steps to reproduce
1. Add a RegexHandler
2. Run the bot
3. Send a sticker
### Expected behaviour
The handler should not capture the sticker
### Actual behaviour
The handler captures the sticker and gives an error
### Configuration
Does not matter
**Version of Python, python-telegram-bot & dependencies:**
``python-telegram-bot 7.0.0``
### Logs
2017-07-26 14:02:47,301 - telegram.ext.dispatcher - ERROR - An uncaught error was raised while processing the update
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/telegram/ext/dispatcher.py", line 269, in process_update
if handler.check_update(update):
File "/usr/local/lib/python3.5/dist-packages/telegram/ext/regexhandler.py", line 150, in check_update
match = re.match(self.pattern, update.effective_message.text)
File "/usr/lib/python3.5/re.py", line 163, in match
return _compile(pattern, flags).match(string)
TypeError: expected string or bytes-like object
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `telegram/ext/regexhandler.py`
Content:
```
1 #!/usr/bin/env python
2 #
3 # A library that provides a Python interface to the Telegram Bot API
4 # Copyright (C) 2015-2017
5 # Leandro Toledo de Souza <[email protected]>
6 #
7 # This program is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU Lesser Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU Lesser Public License for more details.
16 #
17 # You should have received a copy of the GNU Lesser Public License
18 # along with this program. If not, see [http://www.gnu.org/licenses/].
19 # TODO: Remove allow_edited
20 """ This module contains the RegexHandler class """
21
22 import re
23 import warnings
24
25 from future.utils import string_types
26
27 from telegram import Update
28 from .handler import Handler
29
30
31 class RegexHandler(Handler):
32 """
33 Handler class to handle Telegram updates based on a regex. It uses a
34 regular expression to check text messages. Read the documentation of the
35 ``re`` module for more information. The ``re.match`` function is used to
36 determine if an update should be handled by this handler.
37
38 Attributes:
39 pattern (:obj:`str` | :obj:`Pattern`): The regex pattern.
40 callback (:obj:`callable`): The callback function for this handler.
41 pass_groups (:obj:`bool`): Optional. Determines whether ``groups`` will be passed to the
42 callback function.
43 pass_groupdict (:obj:`bool`): Optional. Determines whether ``groupdict``. will be passed to
44 the callback function.
45 pass_update_queue (:obj:`bool`): Optional. Determines whether ``update_queue`` will be
46 passed to the callback function.
47 pass_job_queue (:obj:`bool`): Optional. Determines whether ``job_queue`` will be passed to
48 the callback function.
49 pass_user_data (:obj:`bool`): Optional. Determines whether ``user_data`` will be passed to
50 the callback function.
51 pass_chat_data (:obj:`bool`): Optional. Determines whether ``chat_data`` will be passed to
52 the callback function.
53
54 Note:
55 :attr:`pass_user_data` and :attr:`pass_chat_data` determine whether a ``dict`` you
56 can use to keep any data in will be sent to the :attr:`callback` function.. Related to
57 either the user or the chat that the update was sent in. For each update from the same user
58 or in the same chat, it will be the same ``dict``.
59
60 Args:
61 pattern (:obj:`str` | :obj:`Pattern`): The regex pattern.
62 callback (:obj:`callable`): A function that takes ``bot, update`` as positional arguments.
63 It will be called when the :attr:`check_update` has determined that an update should be
64 processed by this handler.
65 pass_groups (:obj:`bool`, optional): If the callback should be passed the result of
66 ``re.match(pattern, data).groups()`` as a keyword argument called ``groups``.
67 Default is ``False``
68 pass_groupdict (:obj:`bool`, optional): If the callback should be passed the result of
69 ``re.match(pattern, data).groupdict()`` as a keyword argument called ``groupdict``.
70 Default is ``False``
71 pass_update_queue (:obj:`bool`, optional): If set to ``True``, a keyword argument called
72 ``update_queue`` will be passed to the callback function. It will be the ``Queue``
73 instance used by the :class:`telegram.ext.Updater` and :class:`telegram.ext.Dispatcher`
74 that contains new updates which can be used to insert updates. Default is ``False``.
75 pass_job_queue (:obj:`bool`, optional): If set to ``True``, a keyword argument called
76 ``job_queue`` will be passed to the callback function. It will be a
77 :class:`telegram.ext.JobQueue` instance created by the :class:`telegram.ext.Updater`
78 which can be used to schedule new jobs. Default is ``False``.
79 pass_user_data (:obj:`bool`, optional): If set to ``True``, a keyword argument called
80 ``user_data`` will be passed to the callback function. Default is ``False``.
81 pass_chat_data (:obj:`bool`, optional): If set to ``True``, a keyword argument called
82 ``chat_data`` will be passed to the callback function. Default is ``False``.
83 message_updates (:obj:`bool`, optional): Should "normal" message updates be handled?
84 Default is ``True``.
85 channel_post_updates (:obj:`bool`, optional): Should channel posts updates be handled?
86 Default is ``True``.
87 edited_updates (:obj:`bool`, optional): Should "edited" message updates be handled? Default
88 is ``False``.
89 allow_edited (:obj:`bool`, optional): If the handler should also accept edited messages.
90 Default is ``False`` - Deprecated. use edited_updates instead.
91
92 Raises:
93 ValueError
94 """
95
96 def __init__(self,
97 pattern,
98 callback,
99 pass_groups=False,
100 pass_groupdict=False,
101 pass_update_queue=False,
102 pass_job_queue=False,
103 pass_user_data=False,
104 pass_chat_data=False,
105 allow_edited=False,
106 message_updates=True,
107 channel_post_updates=False,
108 edited_updates=False
109 ):
110 if not message_updates and not channel_post_updates and not edited_updates:
111 raise ValueError(
112 'message_updates, channel_post_updates and edited_updates are all False')
113 if allow_edited:
114 warnings.warn('allow_edited is getting deprecated, please use edited_updates instead')
115 edited_updates = allow_edited
116
117 super(RegexHandler, self).__init__(
118 callback,
119 pass_update_queue=pass_update_queue,
120 pass_job_queue=pass_job_queue,
121 pass_user_data=pass_user_data,
122 pass_chat_data=pass_chat_data)
123
124 if isinstance(pattern, string_types):
125 pattern = re.compile(pattern)
126
127 self.pattern = pattern
128 self.pass_groups = pass_groups
129 self.pass_groupdict = pass_groupdict
130 self.allow_edited = allow_edited
131 self.message_updates = message_updates
132 self.channel_post_updates = channel_post_updates
133 self.edited_updates = edited_updates
134
135 def check_update(self, update):
136 """
137 Determines whether an update should be passed to this handlers :attr:`callback`.
138
139 Args:
140 update (:class:`telegram.Update`): Incoming telegram update.
141
142 Returns:
143 :obj:`bool`
144 """
145
146 if any([(self.message_updates and update.message),
147 (self.edited_updates and update.edited_message),
148 (self.channel_post_updates and update.channel_post)]) and (
149 isinstance(update, Update)):
150 match = re.match(self.pattern, update.effective_message.text)
151 return bool(match)
152 return False
153
154 def handle_update(self, update, dispatcher):
155 """
156 Send the update to the :attr:`callback`.
157
158 Args:
159 update (:class:`telegram.Update`): Incoming telegram update.
160 dispatcher (:class:`telegram.ext.Dispatcher`): Dispatcher that originated the Update.
161 """
162
163 optional_args = self.collect_optional_args(dispatcher, update)
164 match = re.match(self.pattern, update.effective_message.text)
165
166 if self.pass_groups:
167 optional_args['groups'] = match.groups()
168 if self.pass_groupdict:
169 optional_args['groupdict'] = match.groupdict()
170
171 return self.callback(dispatcher.bot, update, **optional_args)
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/telegram/ext/regexhandler.py b/telegram/ext/regexhandler.py
--- a/telegram/ext/regexhandler.py
+++ b/telegram/ext/regexhandler.py
@@ -142,11 +142,12 @@
Returns:
:obj:`bool`
"""
-
+ if not isinstance(update, Update) and not update.effective_message:
+ return False
if any([(self.message_updates and update.message),
(self.edited_updates and update.edited_message),
- (self.channel_post_updates and update.channel_post)]) and (
- isinstance(update, Update)):
+ (self.channel_post_updates and update.channel_post)]) and \
+ update.effective_message.text:
match = re.match(self.pattern, update.effective_message.text)
return bool(match)
return False
| {"golden_diff": "diff --git a/telegram/ext/regexhandler.py b/telegram/ext/regexhandler.py\n--- a/telegram/ext/regexhandler.py\n+++ b/telegram/ext/regexhandler.py\n@@ -142,11 +142,12 @@\n Returns:\n :obj:`bool`\n \"\"\"\n-\n+ if not isinstance(update, Update) and not update.effective_message:\n+ return False\n if any([(self.message_updates and update.message),\n (self.edited_updates and update.edited_message),\n- (self.channel_post_updates and update.channel_post)]) and (\n- isinstance(update, Update)):\n+ (self.channel_post_updates and update.channel_post)]) and \\\n+ update.effective_message.text:\n match = re.match(self.pattern, update.effective_message.text)\n return bool(match)\n return False\n", "issue": "RegexHandler error with stickers, voices, images...\nRegexHandler does not check if update.effective_message.text exists.\r\n\r\n### Steps to reproduce\r\n1. Add a RegexHandler\r\n2. Run the bot\r\n3. Send a sticker\r\n\r\n### Expected behaviour\r\nThe handler should not capture the sticker\r\n\r\n### Actual behaviour\r\nThe handler capture the sticker and gives an error\r\n\r\n### Configuration\r\nDoes not matter\r\n\r\n\r\n**Version of Python, python-telegram-bot & dependencies:**\r\n\r\n``python-telegram-bot 7.0.0``\r\n\r\n### Logs\r\n2017-07-26 14:02:47,301 - telegram.ext.dispatcher - ERROR - An uncaught error was raised while processing the update\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.5/dist-packages/telegram/ext/dispatcher.py\", line 269, in process_update\r\n if handler.check_update(update):\r\n File \"/usr/local/lib/python3.5/dist-packages/telegram/ext/regexhandler.py\", line 150, in check_update\r\n match = re.match(self.pattern, update.effective_message.text)\r\n File \"/usr/lib/python3.5/re.py\", line 163, in match\r\n return _compile(pattern, flags).match(string)\r\nTypeError: expected string or bytes-like object\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2017\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n# TODO: Remove allow_edited\n\"\"\" This module contains the RegexHandler class \"\"\"\n\nimport re\nimport warnings\n\nfrom future.utils import string_types\n\nfrom telegram import Update\nfrom .handler import Handler\n\n\nclass RegexHandler(Handler):\n \"\"\"\n Handler class to handle Telegram updates based on a regex. It uses a\n regular expression to check text messages. Read the documentation of the\n ``re`` module for more information. The ``re.match`` function is used to\n determine if an update should be handled by this handler.\n\n Attributes:\n pattern (:obj:`str` | :obj:`Pattern`): The regex pattern.\n callback (:obj:`callable`): The callback function for this handler.\n pass_groups (:obj:`bool`): Optional. 
Determines whether ``groups`` will be passed to the\n callback function.\n pass_groupdict (:obj:`bool`): Optional. Determines whether ``groupdict``. will be passed to\n the callback function.\n pass_update_queue (:obj:`bool`): Optional. Determines whether ``update_queue`` will be\n passed to the callback function.\n pass_job_queue (:obj:`bool`): Optional. Determines whether ``job_queue`` will be passed to\n the callback function.\n pass_user_data (:obj:`bool`): Optional. Determines whether ``user_data`` will be passed to\n the callback function.\n pass_chat_data (:obj:`bool`): Optional. Determines whether ``chat_data`` will be passed to\n the callback function.\n\n Note:\n :attr:`pass_user_data` and :attr:`pass_chat_data` determine whether a ``dict`` you\n can use to keep any data in will be sent to the :attr:`callback` function.. Related to\n either the user or the chat that the update was sent in. For each update from the same user\n or in the same chat, it will be the same ``dict``.\n\n Args:\n pattern (:obj:`str` | :obj:`Pattern`): The regex pattern.\n callback (:obj:`callable`): A function that takes ``bot, update`` as positional arguments.\n It will be called when the :attr:`check_update` has determined that an update should be\n processed by this handler.\n pass_groups (:obj:`bool`, optional): If the callback should be passed the result of\n ``re.match(pattern, data).groups()`` as a keyword argument called ``groups``.\n Default is ``False``\n pass_groupdict (:obj:`bool`, optional): If the callback should be passed the result of\n ``re.match(pattern, data).groupdict()`` as a keyword argument called ``groupdict``.\n Default is ``False``\n pass_update_queue (:obj:`bool`, optional): If set to ``True``, a keyword argument called\n ``update_queue`` will be passed to the callback function. It will be the ``Queue``\n instance used by the :class:`telegram.ext.Updater` and :class:`telegram.ext.Dispatcher`\n that contains new updates which can be used to insert updates. Default is ``False``.\n pass_job_queue (:obj:`bool`, optional): If set to ``True``, a keyword argument called\n ``job_queue`` will be passed to the callback function. It will be a\n :class:`telegram.ext.JobQueue` instance created by the :class:`telegram.ext.Updater`\n which can be used to schedule new jobs. Default is ``False``.\n pass_user_data (:obj:`bool`, optional): If set to ``True``, a keyword argument called\n ``user_data`` will be passed to the callback function. Default is ``False``.\n pass_chat_data (:obj:`bool`, optional): If set to ``True``, a keyword argument called\n ``chat_data`` will be passed to the callback function. Default is ``False``.\n message_updates (:obj:`bool`, optional): Should \"normal\" message updates be handled?\n Default is ``True``.\n channel_post_updates (:obj:`bool`, optional): Should channel posts updates be handled?\n Default is ``True``.\n edited_updates (:obj:`bool`, optional): Should \"edited\" message updates be handled? Default\n is ``False``.\n allow_edited (:obj:`bool`, optional): If the handler should also accept edited messages.\n Default is ``False`` - Deprecated. 
use edited_updates instead.\n\n Raises:\n ValueError\n \"\"\"\n\n def __init__(self,\n pattern,\n callback,\n pass_groups=False,\n pass_groupdict=False,\n pass_update_queue=False,\n pass_job_queue=False,\n pass_user_data=False,\n pass_chat_data=False,\n allow_edited=False,\n message_updates=True,\n channel_post_updates=False,\n edited_updates=False\n ):\n if not message_updates and not channel_post_updates and not edited_updates:\n raise ValueError(\n 'message_updates, channel_post_updates and edited_updates are all False')\n if allow_edited:\n warnings.warn('allow_edited is getting deprecated, please use edited_updates instead')\n edited_updates = allow_edited\n\n super(RegexHandler, self).__init__(\n callback,\n pass_update_queue=pass_update_queue,\n pass_job_queue=pass_job_queue,\n pass_user_data=pass_user_data,\n pass_chat_data=pass_chat_data)\n\n if isinstance(pattern, string_types):\n pattern = re.compile(pattern)\n\n self.pattern = pattern\n self.pass_groups = pass_groups\n self.pass_groupdict = pass_groupdict\n self.allow_edited = allow_edited\n self.message_updates = message_updates\n self.channel_post_updates = channel_post_updates\n self.edited_updates = edited_updates\n\n def check_update(self, update):\n \"\"\"\n Determines whether an update should be passed to this handlers :attr:`callback`.\n\n Args:\n update (:class:`telegram.Update`): Incoming telegram update.\n\n Returns:\n :obj:`bool`\n \"\"\"\n\n if any([(self.message_updates and update.message),\n (self.edited_updates and update.edited_message),\n (self.channel_post_updates and update.channel_post)]) and (\n isinstance(update, Update)):\n match = re.match(self.pattern, update.effective_message.text)\n return bool(match)\n return False\n\n def handle_update(self, update, dispatcher):\n \"\"\"\n Send the update to the :attr:`callback`.\n\n Args:\n update (:class:`telegram.Update`): Incoming telegram update.\n dispatcher (:class:`telegram.ext.Dispatcher`): Dispatcher that originated the Update.\n \"\"\"\n\n optional_args = self.collect_optional_args(dispatcher, update)\n match = re.match(self.pattern, update.effective_message.text)\n\n if self.pass_groups:\n optional_args['groups'] = match.groups()\n if self.pass_groupdict:\n optional_args['groupdict'] = match.groupdict()\n\n return self.callback(dispatcher.bot, update, **optional_args)\n", "path": "telegram/ext/regexhandler.py"}], "after_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2017\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. 
If not, see [http://www.gnu.org/licenses/].\n# TODO: Remove allow_edited\n\"\"\" This module contains the RegexHandler class \"\"\"\n\nimport re\nimport warnings\n\nfrom future.utils import string_types\n\nfrom telegram import Update\nfrom .handler import Handler\n\n\nclass RegexHandler(Handler):\n \"\"\"\n Handler class to handle Telegram updates based on a regex. It uses a\n regular expression to check text messages. Read the documentation of the\n ``re`` module for more information. The ``re.match`` function is used to\n determine if an update should be handled by this handler.\n\n Attributes:\n pattern (:obj:`str` | :obj:`Pattern`): The regex pattern.\n callback (:obj:`callable`): The callback function for this handler.\n pass_groups (:obj:`bool`): Optional. Determines whether ``groups`` will be passed to the\n callback function.\n pass_groupdict (:obj:`bool`): Optional. Determines whether ``groupdict``. will be passed to\n the callback function.\n pass_update_queue (:obj:`bool`): Optional. Determines whether ``update_queue`` will be\n passed to the callback function.\n pass_job_queue (:obj:`bool`): Optional. Determines whether ``job_queue`` will be passed to\n the callback function.\n pass_user_data (:obj:`bool`): Optional. Determines whether ``user_data`` will be passed to\n the callback function.\n pass_chat_data (:obj:`bool`): Optional. Determines whether ``chat_data`` will be passed to\n the callback function.\n\n Note:\n :attr:`pass_user_data` and :attr:`pass_chat_data` determine whether a ``dict`` you\n can use to keep any data in will be sent to the :attr:`callback` function.. Related to\n either the user or the chat that the update was sent in. For each update from the same user\n or in the same chat, it will be the same ``dict``.\n\n Args:\n pattern (:obj:`str` | :obj:`Pattern`): The regex pattern.\n callback (:obj:`callable`): A function that takes ``bot, update`` as positional arguments.\n It will be called when the :attr:`check_update` has determined that an update should be\n processed by this handler.\n pass_groups (:obj:`bool`, optional): If the callback should be passed the result of\n ``re.match(pattern, data).groups()`` as a keyword argument called ``groups``.\n Default is ``False``\n pass_groupdict (:obj:`bool`, optional): If the callback should be passed the result of\n ``re.match(pattern, data).groupdict()`` as a keyword argument called ``groupdict``.\n Default is ``False``\n pass_update_queue (:obj:`bool`, optional): If set to ``True``, a keyword argument called\n ``update_queue`` will be passed to the callback function. It will be the ``Queue``\n instance used by the :class:`telegram.ext.Updater` and :class:`telegram.ext.Dispatcher`\n that contains new updates which can be used to insert updates. Default is ``False``.\n pass_job_queue (:obj:`bool`, optional): If set to ``True``, a keyword argument called\n ``job_queue`` will be passed to the callback function. It will be a\n :class:`telegram.ext.JobQueue` instance created by the :class:`telegram.ext.Updater`\n which can be used to schedule new jobs. Default is ``False``.\n pass_user_data (:obj:`bool`, optional): If set to ``True``, a keyword argument called\n ``user_data`` will be passed to the callback function. Default is ``False``.\n pass_chat_data (:obj:`bool`, optional): If set to ``True``, a keyword argument called\n ``chat_data`` will be passed to the callback function. 
Default is ``False``.\n message_updates (:obj:`bool`, optional): Should \"normal\" message updates be handled?\n Default is ``True``.\n channel_post_updates (:obj:`bool`, optional): Should channel posts updates be handled?\n Default is ``True``.\n edited_updates (:obj:`bool`, optional): Should \"edited\" message updates be handled? Default\n is ``False``.\n allow_edited (:obj:`bool`, optional): If the handler should also accept edited messages.\n Default is ``False`` - Deprecated. use edited_updates instead.\n\n Raises:\n ValueError\n \"\"\"\n\n def __init__(self,\n pattern,\n callback,\n pass_groups=False,\n pass_groupdict=False,\n pass_update_queue=False,\n pass_job_queue=False,\n pass_user_data=False,\n pass_chat_data=False,\n allow_edited=False,\n message_updates=True,\n channel_post_updates=False,\n edited_updates=False\n ):\n if not message_updates and not channel_post_updates and not edited_updates:\n raise ValueError(\n 'message_updates, channel_post_updates and edited_updates are all False')\n if allow_edited:\n warnings.warn('allow_edited is getting deprecated, please use edited_updates instead')\n edited_updates = allow_edited\n\n super(RegexHandler, self).__init__(\n callback,\n pass_update_queue=pass_update_queue,\n pass_job_queue=pass_job_queue,\n pass_user_data=pass_user_data,\n pass_chat_data=pass_chat_data)\n\n if isinstance(pattern, string_types):\n pattern = re.compile(pattern)\n\n self.pattern = pattern\n self.pass_groups = pass_groups\n self.pass_groupdict = pass_groupdict\n self.allow_edited = allow_edited\n self.message_updates = message_updates\n self.channel_post_updates = channel_post_updates\n self.edited_updates = edited_updates\n\n def check_update(self, update):\n \"\"\"\n Determines whether an update should be passed to this handlers :attr:`callback`.\n\n Args:\n update (:class:`telegram.Update`): Incoming telegram update.\n\n Returns:\n :obj:`bool`\n \"\"\"\n if not isinstance(update, Update) and not update.effective_message:\n return False\n if any([(self.message_updates and update.message),\n (self.edited_updates and update.edited_message),\n (self.channel_post_updates and update.channel_post)]) and \\\n update.effective_message.text:\n match = re.match(self.pattern, update.effective_message.text)\n return bool(match)\n return False\n\n def handle_update(self, update, dispatcher):\n \"\"\"\n Send the update to the :attr:`callback`.\n\n Args:\n update (:class:`telegram.Update`): Incoming telegram update.\n dispatcher (:class:`telegram.ext.Dispatcher`): Dispatcher that originated the Update.\n \"\"\"\n\n optional_args = self.collect_optional_args(dispatcher, update)\n match = re.match(self.pattern, update.effective_message.text)\n\n if self.pass_groups:\n optional_args['groups'] = match.groups()\n if self.pass_groupdict:\n optional_args['groupdict'] = match.groupdict()\n\n return self.callback(dispatcher.bot, update, **optional_args)\n", "path": "telegram/ext/regexhandler.py"}]} | 2,607 | 178 |
gh_patches_debug_34803 | rasdani/github-patches | git_diff | learningequality__kolibri-5393 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Auto sign-out from tablets during normal class
**Observed behavior**
Kids are signed out of their tablets automatically while watching videos or doing quizzes
**Expected behavior**
The user should not be signed out automatically.
**User-facing consequences**
While the kids are in the middle of any exercise or video, they are automatically signed out. They then need to log in again to continue the session
**Steps to reproduce**
This happens on some of the tablets during a normal classroom session.
**Context**
Kolibri version : Kolibri 0.12.2
Operating system : ubuntu 14.04
Browser : Chrome
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/core/logger/api.py`
Content:
```
1 from django.core.exceptions import ObjectDoesNotExist
2 from django.db.models.query import F
3 from django.http import Http404
4 from django_filters import ModelChoiceFilter
5 from django_filters.rest_framework import CharFilter
6 from django_filters.rest_framework import DjangoFilterBackend
7 from django_filters.rest_framework import FilterSet
8 from rest_framework import filters
9 from rest_framework import viewsets
10 from rest_framework.response import Response
11
12 from .models import AttemptLog
13 from .models import ContentSessionLog
14 from .models import ContentSummaryLog
15 from .models import ExamAttemptLog
16 from .models import ExamLog
17 from .models import MasteryLog
18 from .models import UserSessionLog
19 from .permissions import ExamActivePermissions
20 from .serializers import AttemptLogSerializer
21 from .serializers import ContentSessionLogSerializer
22 from .serializers import ContentSummaryLogSerializer
23 from .serializers import ExamAttemptLogSerializer
24 from .serializers import ExamLogSerializer
25 from .serializers import MasteryLogSerializer
26 from .serializers import TotalContentProgressSerializer
27 from .serializers import UserSessionLogSerializer
28 from kolibri.core.auth.api import KolibriAuthPermissions
29 from kolibri.core.auth.api import KolibriAuthPermissionsFilter
30 from kolibri.core.auth.filters import HierarchyRelationsFilter
31 from kolibri.core.auth.models import Classroom
32 from kolibri.core.auth.models import Collection
33 from kolibri.core.auth.models import Facility
34 from kolibri.core.auth.models import FacilityUser
35 from kolibri.core.auth.models import LearnerGroup
36 from kolibri.core.content.api import OptionalPageNumberPagination
37 from kolibri.core.exams.models import Exam
38
39
40 class BaseLogFilter(FilterSet):
41 facility = ModelChoiceFilter(method="filter_facility", queryset=Facility.objects.all())
42 classroom = ModelChoiceFilter(method="filter_classroom", queryset=Classroom.objects.all())
43 learner_group = ModelChoiceFilter(method="filter_learner_group", queryset=LearnerGroup.objects.all())
44
45 # Only a superuser can filter by facilities
46 def filter_facility(self, queryset, name, value):
47 return queryset.filter(user__facility=value)
48
49 def filter_classroom(self, queryset, name, value):
50 return HierarchyRelationsFilter(queryset).filter_by_hierarchy(
51 ancestor_collection=value,
52 target_user=F("user"),
53 )
54
55 def filter_learner_group(self, queryset, name, value):
56 return HierarchyRelationsFilter(queryset).filter_by_hierarchy(
57 ancestor_collection=value,
58 target_user=F("user"),
59 )
60
61
62 class LoggerViewSet(viewsets.ModelViewSet):
63 def update(self, request, *args, **kwargs):
64 partial = kwargs.pop('partial', False)
65 model = self.queryset.model
66 lookup_url_kwarg = self.lookup_url_kwarg or self.lookup_field
67 try:
68 instance = model.objects.get(id=self.kwargs[lookup_url_kwarg])
69 self.check_object_permissions(request, instance)
70 except (ValueError, ObjectDoesNotExist):
71 raise Http404
72 serializer = self.get_serializer(instance, data=request.data, partial=partial)
73 serializer.is_valid(raise_exception=True)
74 self.perform_update(serializer)
75
76 if getattr(instance, '_prefetched_objects_cache', None):
77 # If 'prefetch_related' has been applied to a queryset, we need to
78 # forcibly invalidate the prefetch cache on the instance.
79 instance._prefetched_objects_cache = {}
80 default_response = dict(request.data)
81 # First look if the computed fields to be updated are listed:
82 updating_fields = getattr(serializer.root, 'update_fields', None)
83 # If not, fetch all the fields that are computed methods:
84 if updating_fields is None:
85 updating_fields = [field for field in serializer.fields if getattr(serializer.fields[field], 'method_name', None)]
86 for field in updating_fields:
87 method_name = getattr(serializer.fields[field], 'method_name', None)
88 if method_name:
89 method = getattr(serializer.root, method_name)
90 default_response[field] = method(instance)
91 return Response(default_response)
92
93
94 class ContentSessionLogFilter(BaseLogFilter):
95
96 class Meta:
97 model = ContentSessionLog
98 fields = ['user_id', 'content_id']
99
100
101 class ContentSessionLogViewSet(LoggerViewSet):
102 permission_classes = (KolibriAuthPermissions,)
103 filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)
104 queryset = ContentSessionLog.objects.all()
105 serializer_class = ContentSessionLogSerializer
106 pagination_class = OptionalPageNumberPagination
107 filter_class = ContentSessionLogFilter
108
109
110 class ContentSummaryLogFilter(BaseLogFilter):
111
112 class Meta:
113 model = ContentSummaryLog
114 fields = ['user_id', 'content_id']
115
116
117 class ContentSummaryLogViewSet(LoggerViewSet):
118 permission_classes = (KolibriAuthPermissions,)
119 filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)
120 queryset = ContentSummaryLog.objects.all()
121 serializer_class = ContentSummaryLogSerializer
122 pagination_class = OptionalPageNumberPagination
123 filter_class = ContentSummaryLogFilter
124
125
126 class TotalContentProgressViewSet(viewsets.ModelViewSet):
127 permission_classes = (KolibriAuthPermissions,)
128 filter_backends = (KolibriAuthPermissionsFilter,)
129 queryset = FacilityUser.objects.all()
130 serializer_class = TotalContentProgressSerializer
131
132
133 class UserSessionLogFilter(BaseLogFilter):
134
135 class Meta:
136 model = UserSessionLog
137 fields = ['user_id']
138
139
140 class UserSessionLogViewSet(LoggerViewSet):
141 permission_classes = (KolibriAuthPermissions,)
142 filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)
143 queryset = UserSessionLog.objects.all()
144 serializer_class = UserSessionLogSerializer
145 pagination_class = OptionalPageNumberPagination
146 filter_class = UserSessionLogFilter
147
148
149 class MasteryFilter(FilterSet):
150
151 class Meta:
152 model = MasteryLog
153 fields = ['summarylog']
154
155
156 class MasteryLogViewSet(LoggerViewSet):
157 permission_classes = (KolibriAuthPermissions,)
158 filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)
159 queryset = MasteryLog.objects.all()
160 serializer_class = MasteryLogSerializer
161 pagination_class = OptionalPageNumberPagination
162 filter_class = MasteryFilter
163
164
165 class AttemptFilter(BaseLogFilter):
166 content = CharFilter(method="filter_content")
167
168 def filter_content(self, queryset, name, value):
169 return queryset.filter(masterylog__summarylog__content_id=value)
170
171 class Meta:
172 model = AttemptLog
173 fields = ['masterylog', 'complete', 'user', 'content', 'item']
174
175
176 class AttemptLogViewSet(LoggerViewSet):
177 permission_classes = (KolibriAuthPermissions,)
178 filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend, filters.OrderingFilter)
179 queryset = AttemptLog.objects.all()
180 serializer_class = AttemptLogSerializer
181 pagination_class = OptionalPageNumberPagination
182 filter_class = AttemptFilter
183 ordering_fields = ('end_timestamp',)
184 ordering = ('end_timestamp',)
185
186
187 class ExamAttemptFilter(BaseLogFilter):
188 exam = ModelChoiceFilter(method="filter_exam", queryset=Exam.objects.all())
189 user = ModelChoiceFilter(method="filter_user", queryset=FacilityUser.objects.all())
190 content = CharFilter(field_name="content_id")
191
192 def filter_exam(self, queryset, name, value):
193 return queryset.filter(examlog__exam=value)
194
195 def filter_user(self, queryset, name, value):
196 return queryset.filter(examlog__user=value)
197
198 class Meta:
199 model = ExamAttemptLog
200 fields = ['examlog', 'exam', 'user', 'content', 'item']
201
202
203 class ExamAttemptLogViewSet(LoggerViewSet):
204 permission_classes = (ExamActivePermissions, KolibriAuthPermissions, )
205 filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend, filters.OrderingFilter)
206 queryset = ExamAttemptLog.objects.all()
207 serializer_class = ExamAttemptLogSerializer
208 pagination_class = OptionalPageNumberPagination
209 filter_class = ExamAttemptFilter
210
211
212 class ExamLogFilter(BaseLogFilter):
213
214 collection = ModelChoiceFilter(method="filter_collection", queryset=Collection.objects.all())
215
216 def filter_collection(self, queryset, name, collection):
217 return HierarchyRelationsFilter(queryset).filter_by_hierarchy(
218 target_user=F('user'),
219 ancestor_collection=collection,
220 )
221
222 class Meta:
223 model = ExamLog
224 fields = ['user', 'exam']
225
226
227 class ExamLogViewSet(viewsets.ModelViewSet):
228 permission_classes = (KolibriAuthPermissions,)
229 filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)
230 queryset = ExamLog.objects.all()
231 serializer_class = ExamLogSerializer
232 pagination_class = OptionalPageNumberPagination
233 filter_class = ExamLogFilter
234
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kolibri/core/logger/api.py b/kolibri/core/logger/api.py
--- a/kolibri/core/logger/api.py
+++ b/kolibri/core/logger/api.py
@@ -1,12 +1,17 @@
+import logging
+
from django.core.exceptions import ObjectDoesNotExist
from django.db.models.query import F
+from django.db.utils import IntegrityError
from django.http import Http404
from django_filters import ModelChoiceFilter
from django_filters.rest_framework import CharFilter
from django_filters.rest_framework import DjangoFilterBackend
from django_filters.rest_framework import FilterSet
from rest_framework import filters
+from rest_framework import status
from rest_framework import viewsets
+from rest_framework.exceptions import ValidationError
from rest_framework.response import Response
from .models import AttemptLog
@@ -36,6 +41,8 @@
from kolibri.core.content.api import OptionalPageNumberPagination
from kolibri.core.exams.models import Exam
+logger = logging.getLogger(__name__)
+
class BaseLogFilter(FilterSet):
facility = ModelChoiceFilter(method="filter_facility", queryset=Facility.objects.all())
@@ -90,6 +97,21 @@
default_response[field] = method(instance)
return Response(default_response)
+ def create(self, request, *args, **kwargs):
+ try:
+ return super(LoggerViewSet, self).create(request, *args, **kwargs)
+ except IntegrityError:
+ # The object has been created previously: let's calculate its id and return it
+ serializer = self.get_serializer(data=request.data)
+ serializer.is_valid(raise_exception=True)
+ obj = serializer.Meta.model(**serializer.validated_data)
+ obj.id = obj.calculate_uuid()
+ final_obj = self.get_serializer(obj)
+ return Response(final_obj.data)
+ except ValidationError as e:
+ logger.error("Failed to validate data: {}".format(e))
+ return Response(request.data, status.HTTP_400_BAD_REQUEST)
+
class ContentSessionLogFilter(BaseLogFilter):
| {"golden_diff": "diff --git a/kolibri/core/logger/api.py b/kolibri/core/logger/api.py\n--- a/kolibri/core/logger/api.py\n+++ b/kolibri/core/logger/api.py\n@@ -1,12 +1,17 @@\n+import logging\n+\n from django.core.exceptions import ObjectDoesNotExist\n from django.db.models.query import F\n+from django.db.utils import IntegrityError\n from django.http import Http404\n from django_filters import ModelChoiceFilter\n from django_filters.rest_framework import CharFilter\n from django_filters.rest_framework import DjangoFilterBackend\n from django_filters.rest_framework import FilterSet\n from rest_framework import filters\n+from rest_framework import status\n from rest_framework import viewsets\n+from rest_framework.exceptions import ValidationError\n from rest_framework.response import Response\n \n from .models import AttemptLog\n@@ -36,6 +41,8 @@\n from kolibri.core.content.api import OptionalPageNumberPagination\n from kolibri.core.exams.models import Exam\n \n+logger = logging.getLogger(__name__)\n+\n \n class BaseLogFilter(FilterSet):\n facility = ModelChoiceFilter(method=\"filter_facility\", queryset=Facility.objects.all())\n@@ -90,6 +97,21 @@\n default_response[field] = method(instance)\n return Response(default_response)\n \n+ def create(self, request, *args, **kwargs):\n+ try:\n+ return super(LoggerViewSet, self).create(request, *args, **kwargs)\n+ except IntegrityError:\n+ # The object has been created previously: let's calculate its id and return it\n+ serializer = self.get_serializer(data=request.data)\n+ serializer.is_valid(raise_exception=True)\n+ obj = serializer.Meta.model(**serializer.validated_data)\n+ obj.id = obj.calculate_uuid()\n+ final_obj = self.get_serializer(obj)\n+ return Response(final_obj.data)\n+ except ValidationError as e:\n+ logger.error(\"Failed to validate data: {}\".format(e))\n+ return Response(request.data, status.HTTP_400_BAD_REQUEST)\n+\n \n class ContentSessionLogFilter(BaseLogFilter):\n", "issue": "Auto sign-out from tablets during normal class\n**Observed behavior**\r\nKids are signout out of their tabs automatically while watching videos or doing quizzes\r\n\r\n**Expected behavior**\r\nThe user should not be signed out automatically.\r\n\r\n\r\n**User-facing consequences**\r\nWhile the kids are in the middle of any exercise or video, they are being automatically signed out. 
They need to again re-login to continue the session\r\n\r\n**Steps to reproduce**\r\nThis happens in some of the tabs during a normal classroom session.\r\n\r\n\r\n**Context**\r\nKolibri version : Kolibri 0.12.2\r\nOperating system : ubuntu 14.04\r\nBrowser : Chrome\r\n\r\n\r\n\n", "before_files": [{"content": "from django.core.exceptions import ObjectDoesNotExist\nfrom django.db.models.query import F\nfrom django.http import Http404\nfrom django_filters import ModelChoiceFilter\nfrom django_filters.rest_framework import CharFilter\nfrom django_filters.rest_framework import DjangoFilterBackend\nfrom django_filters.rest_framework import FilterSet\nfrom rest_framework import filters\nfrom rest_framework import viewsets\nfrom rest_framework.response import Response\n\nfrom .models import AttemptLog\nfrom .models import ContentSessionLog\nfrom .models import ContentSummaryLog\nfrom .models import ExamAttemptLog\nfrom .models import ExamLog\nfrom .models import MasteryLog\nfrom .models import UserSessionLog\nfrom .permissions import ExamActivePermissions\nfrom .serializers import AttemptLogSerializer\nfrom .serializers import ContentSessionLogSerializer\nfrom .serializers import ContentSummaryLogSerializer\nfrom .serializers import ExamAttemptLogSerializer\nfrom .serializers import ExamLogSerializer\nfrom .serializers import MasteryLogSerializer\nfrom .serializers import TotalContentProgressSerializer\nfrom .serializers import UserSessionLogSerializer\nfrom kolibri.core.auth.api import KolibriAuthPermissions\nfrom kolibri.core.auth.api import KolibriAuthPermissionsFilter\nfrom kolibri.core.auth.filters import HierarchyRelationsFilter\nfrom kolibri.core.auth.models import Classroom\nfrom kolibri.core.auth.models import Collection\nfrom kolibri.core.auth.models import Facility\nfrom kolibri.core.auth.models import FacilityUser\nfrom kolibri.core.auth.models import LearnerGroup\nfrom kolibri.core.content.api import OptionalPageNumberPagination\nfrom kolibri.core.exams.models import Exam\n\n\nclass BaseLogFilter(FilterSet):\n facility = ModelChoiceFilter(method=\"filter_facility\", queryset=Facility.objects.all())\n classroom = ModelChoiceFilter(method=\"filter_classroom\", queryset=Classroom.objects.all())\n learner_group = ModelChoiceFilter(method=\"filter_learner_group\", queryset=LearnerGroup.objects.all())\n\n # Only a superuser can filter by facilities\n def filter_facility(self, queryset, name, value):\n return queryset.filter(user__facility=value)\n\n def filter_classroom(self, queryset, name, value):\n return HierarchyRelationsFilter(queryset).filter_by_hierarchy(\n ancestor_collection=value,\n target_user=F(\"user\"),\n )\n\n def filter_learner_group(self, queryset, name, value):\n return HierarchyRelationsFilter(queryset).filter_by_hierarchy(\n ancestor_collection=value,\n target_user=F(\"user\"),\n )\n\n\nclass LoggerViewSet(viewsets.ModelViewSet):\n def update(self, request, *args, **kwargs):\n partial = kwargs.pop('partial', False)\n model = self.queryset.model\n lookup_url_kwarg = self.lookup_url_kwarg or self.lookup_field\n try:\n instance = model.objects.get(id=self.kwargs[lookup_url_kwarg])\n self.check_object_permissions(request, instance)\n except (ValueError, ObjectDoesNotExist):\n raise Http404\n serializer = self.get_serializer(instance, data=request.data, partial=partial)\n serializer.is_valid(raise_exception=True)\n self.perform_update(serializer)\n\n if getattr(instance, '_prefetched_objects_cache', None):\n # If 'prefetch_related' has been applied to a queryset, we need 
to\n # forcibly invalidate the prefetch cache on the instance.\n instance._prefetched_objects_cache = {}\n default_response = dict(request.data)\n # First look if the computed fields to be updated are listed:\n updating_fields = getattr(serializer.root, 'update_fields', None)\n # If not, fetch all the fields that are computed methods:\n if updating_fields is None:\n updating_fields = [field for field in serializer.fields if getattr(serializer.fields[field], 'method_name', None)]\n for field in updating_fields:\n method_name = getattr(serializer.fields[field], 'method_name', None)\n if method_name:\n method = getattr(serializer.root, method_name)\n default_response[field] = method(instance)\n return Response(default_response)\n\n\nclass ContentSessionLogFilter(BaseLogFilter):\n\n class Meta:\n model = ContentSessionLog\n fields = ['user_id', 'content_id']\n\n\nclass ContentSessionLogViewSet(LoggerViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)\n queryset = ContentSessionLog.objects.all()\n serializer_class = ContentSessionLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = ContentSessionLogFilter\n\n\nclass ContentSummaryLogFilter(BaseLogFilter):\n\n class Meta:\n model = ContentSummaryLog\n fields = ['user_id', 'content_id']\n\n\nclass ContentSummaryLogViewSet(LoggerViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)\n queryset = ContentSummaryLog.objects.all()\n serializer_class = ContentSummaryLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = ContentSummaryLogFilter\n\n\nclass TotalContentProgressViewSet(viewsets.ModelViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter,)\n queryset = FacilityUser.objects.all()\n serializer_class = TotalContentProgressSerializer\n\n\nclass UserSessionLogFilter(BaseLogFilter):\n\n class Meta:\n model = UserSessionLog\n fields = ['user_id']\n\n\nclass UserSessionLogViewSet(LoggerViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)\n queryset = UserSessionLog.objects.all()\n serializer_class = UserSessionLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = UserSessionLogFilter\n\n\nclass MasteryFilter(FilterSet):\n\n class Meta:\n model = MasteryLog\n fields = ['summarylog']\n\n\nclass MasteryLogViewSet(LoggerViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)\n queryset = MasteryLog.objects.all()\n serializer_class = MasteryLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = MasteryFilter\n\n\nclass AttemptFilter(BaseLogFilter):\n content = CharFilter(method=\"filter_content\")\n\n def filter_content(self, queryset, name, value):\n return queryset.filter(masterylog__summarylog__content_id=value)\n\n class Meta:\n model = AttemptLog\n fields = ['masterylog', 'complete', 'user', 'content', 'item']\n\n\nclass AttemptLogViewSet(LoggerViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend, filters.OrderingFilter)\n queryset = AttemptLog.objects.all()\n serializer_class = AttemptLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = AttemptFilter\n ordering_fields = 
('end_timestamp',)\n ordering = ('end_timestamp',)\n\n\nclass ExamAttemptFilter(BaseLogFilter):\n exam = ModelChoiceFilter(method=\"filter_exam\", queryset=Exam.objects.all())\n user = ModelChoiceFilter(method=\"filter_user\", queryset=FacilityUser.objects.all())\n content = CharFilter(field_name=\"content_id\")\n\n def filter_exam(self, queryset, name, value):\n return queryset.filter(examlog__exam=value)\n\n def filter_user(self, queryset, name, value):\n return queryset.filter(examlog__user=value)\n\n class Meta:\n model = ExamAttemptLog\n fields = ['examlog', 'exam', 'user', 'content', 'item']\n\n\nclass ExamAttemptLogViewSet(LoggerViewSet):\n permission_classes = (ExamActivePermissions, KolibriAuthPermissions, )\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend, filters.OrderingFilter)\n queryset = ExamAttemptLog.objects.all()\n serializer_class = ExamAttemptLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = ExamAttemptFilter\n\n\nclass ExamLogFilter(BaseLogFilter):\n\n collection = ModelChoiceFilter(method=\"filter_collection\", queryset=Collection.objects.all())\n\n def filter_collection(self, queryset, name, collection):\n return HierarchyRelationsFilter(queryset).filter_by_hierarchy(\n target_user=F('user'),\n ancestor_collection=collection,\n )\n\n class Meta:\n model = ExamLog\n fields = ['user', 'exam']\n\n\nclass ExamLogViewSet(viewsets.ModelViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)\n queryset = ExamLog.objects.all()\n serializer_class = ExamLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = ExamLogFilter\n", "path": "kolibri/core/logger/api.py"}], "after_files": [{"content": "import logging\n\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.db.models.query import F\nfrom django.db.utils import IntegrityError\nfrom django.http import Http404\nfrom django_filters import ModelChoiceFilter\nfrom django_filters.rest_framework import CharFilter\nfrom django_filters.rest_framework import DjangoFilterBackend\nfrom django_filters.rest_framework import FilterSet\nfrom rest_framework import filters\nfrom rest_framework import status\nfrom rest_framework import viewsets\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.response import Response\n\nfrom .models import AttemptLog\nfrom .models import ContentSessionLog\nfrom .models import ContentSummaryLog\nfrom .models import ExamAttemptLog\nfrom .models import ExamLog\nfrom .models import MasteryLog\nfrom .models import UserSessionLog\nfrom .permissions import ExamActivePermissions\nfrom .serializers import AttemptLogSerializer\nfrom .serializers import ContentSessionLogSerializer\nfrom .serializers import ContentSummaryLogSerializer\nfrom .serializers import ExamAttemptLogSerializer\nfrom .serializers import ExamLogSerializer\nfrom .serializers import MasteryLogSerializer\nfrom .serializers import TotalContentProgressSerializer\nfrom .serializers import UserSessionLogSerializer\nfrom kolibri.core.auth.api import KolibriAuthPermissions\nfrom kolibri.core.auth.api import KolibriAuthPermissionsFilter\nfrom kolibri.core.auth.filters import HierarchyRelationsFilter\nfrom kolibri.core.auth.models import Classroom\nfrom kolibri.core.auth.models import Collection\nfrom kolibri.core.auth.models import Facility\nfrom kolibri.core.auth.models import FacilityUser\nfrom kolibri.core.auth.models import LearnerGroup\nfrom 
kolibri.core.content.api import OptionalPageNumberPagination\nfrom kolibri.core.exams.models import Exam\n\nlogger = logging.getLogger(__name__)\n\n\nclass BaseLogFilter(FilterSet):\n facility = ModelChoiceFilter(method=\"filter_facility\", queryset=Facility.objects.all())\n classroom = ModelChoiceFilter(method=\"filter_classroom\", queryset=Classroom.objects.all())\n learner_group = ModelChoiceFilter(method=\"filter_learner_group\", queryset=LearnerGroup.objects.all())\n\n # Only a superuser can filter by facilities\n def filter_facility(self, queryset, name, value):\n return queryset.filter(user__facility=value)\n\n def filter_classroom(self, queryset, name, value):\n return HierarchyRelationsFilter(queryset).filter_by_hierarchy(\n ancestor_collection=value,\n target_user=F(\"user\"),\n )\n\n def filter_learner_group(self, queryset, name, value):\n return HierarchyRelationsFilter(queryset).filter_by_hierarchy(\n ancestor_collection=value,\n target_user=F(\"user\"),\n )\n\n\nclass LoggerViewSet(viewsets.ModelViewSet):\n def update(self, request, *args, **kwargs):\n partial = kwargs.pop('partial', False)\n model = self.queryset.model\n lookup_url_kwarg = self.lookup_url_kwarg or self.lookup_field\n try:\n instance = model.objects.get(id=self.kwargs[lookup_url_kwarg])\n self.check_object_permissions(request, instance)\n except (ValueError, ObjectDoesNotExist):\n raise Http404\n serializer = self.get_serializer(instance, data=request.data, partial=partial)\n serializer.is_valid(raise_exception=True)\n self.perform_update(serializer)\n\n if getattr(instance, '_prefetched_objects_cache', None):\n # If 'prefetch_related' has been applied to a queryset, we need to\n # forcibly invalidate the prefetch cache on the instance.\n instance._prefetched_objects_cache = {}\n default_response = dict(request.data)\n # First look if the computed fields to be updated are listed:\n updating_fields = getattr(serializer.root, 'update_fields', None)\n # If not, fetch all the fields that are computed methods:\n if updating_fields is None:\n updating_fields = [field for field in serializer.fields if getattr(serializer.fields[field], 'method_name', None)]\n for field in updating_fields:\n method_name = getattr(serializer.fields[field], 'method_name', None)\n if method_name:\n method = getattr(serializer.root, method_name)\n default_response[field] = method(instance)\n return Response(default_response)\n\n def create(self, request, *args, **kwargs):\n try:\n return super(LoggerViewSet, self).create(request, *args, **kwargs)\n except IntegrityError:\n # The object has been created previously: let's calculate its id and return it\n serializer = self.get_serializer(data=request.data)\n serializer.is_valid(raise_exception=True)\n obj = serializer.Meta.model(**serializer.validated_data)\n obj.id = obj.calculate_uuid()\n final_obj = self.get_serializer(obj)\n return Response(final_obj.data)\n except ValidationError as e:\n logger.error(\"Failed to validate data: {}\".format(e))\n return Response(request.data, status.HTTP_400_BAD_REQUEST)\n\n\nclass ContentSessionLogFilter(BaseLogFilter):\n\n class Meta:\n model = ContentSessionLog\n fields = ['user_id', 'content_id']\n\n\nclass ContentSessionLogViewSet(LoggerViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)\n queryset = ContentSessionLog.objects.all()\n serializer_class = ContentSessionLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = 
ContentSessionLogFilter\n\n\nclass ContentSummaryLogFilter(BaseLogFilter):\n\n class Meta:\n model = ContentSummaryLog\n fields = ['user_id', 'content_id']\n\n\nclass ContentSummaryLogViewSet(LoggerViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)\n queryset = ContentSummaryLog.objects.all()\n serializer_class = ContentSummaryLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = ContentSummaryLogFilter\n\n\nclass TotalContentProgressViewSet(viewsets.ModelViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter,)\n queryset = FacilityUser.objects.all()\n serializer_class = TotalContentProgressSerializer\n\n\nclass UserSessionLogFilter(BaseLogFilter):\n\n class Meta:\n model = UserSessionLog\n fields = ['user_id']\n\n\nclass UserSessionLogViewSet(LoggerViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)\n queryset = UserSessionLog.objects.all()\n serializer_class = UserSessionLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = UserSessionLogFilter\n\n\nclass MasteryFilter(FilterSet):\n\n class Meta:\n model = MasteryLog\n fields = ['summarylog']\n\n\nclass MasteryLogViewSet(LoggerViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)\n queryset = MasteryLog.objects.all()\n serializer_class = MasteryLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = MasteryFilter\n\n\nclass AttemptFilter(BaseLogFilter):\n content = CharFilter(method=\"filter_content\")\n\n def filter_content(self, queryset, name, value):\n return queryset.filter(masterylog__summarylog__content_id=value)\n\n class Meta:\n model = AttemptLog\n fields = ['masterylog', 'complete', 'user', 'content', 'item']\n\n\nclass AttemptLogViewSet(LoggerViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend, filters.OrderingFilter)\n queryset = AttemptLog.objects.all()\n serializer_class = AttemptLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = AttemptFilter\n ordering_fields = ('end_timestamp',)\n ordering = ('end_timestamp',)\n\n\nclass ExamAttemptFilter(BaseLogFilter):\n exam = ModelChoiceFilter(method=\"filter_exam\", queryset=Exam.objects.all())\n user = ModelChoiceFilter(method=\"filter_user\", queryset=FacilityUser.objects.all())\n content = CharFilter(field_name=\"content_id\")\n\n def filter_exam(self, queryset, name, value):\n return queryset.filter(examlog__exam=value)\n\n def filter_user(self, queryset, name, value):\n return queryset.filter(examlog__user=value)\n\n class Meta:\n model = ExamAttemptLog\n fields = ['examlog', 'exam', 'user', 'content', 'item']\n\n\nclass ExamAttemptLogViewSet(LoggerViewSet):\n permission_classes = (ExamActivePermissions, KolibriAuthPermissions, )\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend, filters.OrderingFilter)\n queryset = ExamAttemptLog.objects.all()\n serializer_class = ExamAttemptLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = ExamAttemptFilter\n\n\nclass ExamLogFilter(BaseLogFilter):\n\n collection = ModelChoiceFilter(method=\"filter_collection\", queryset=Collection.objects.all())\n\n def filter_collection(self, queryset, name, collection):\n return 
HierarchyRelationsFilter(queryset).filter_by_hierarchy(\n target_user=F('user'),\n ancestor_collection=collection,\n )\n\n class Meta:\n model = ExamLog\n fields = ['user', 'exam']\n\n\nclass ExamLogViewSet(viewsets.ModelViewSet):\n permission_classes = (KolibriAuthPermissions,)\n filter_backends = (KolibriAuthPermissionsFilter, DjangoFilterBackend)\n queryset = ExamLog.objects.all()\n serializer_class = ExamLogSerializer\n pagination_class = OptionalPageNumberPagination\n filter_class = ExamLogFilter\n", "path": "kolibri/core/logger/api.py"}]} | 2,926 | 439 |
gh_patches_debug_33690 | rasdani/github-patches | git_diff | bridgecrewio__checkov-2027 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Delimiter in wrong position when using multi-output with empty error list
**Describe the bug**
The "--- OUTPUT DELIMITER ---" message appears below two outputs when running on a project without any errors to output. This causes problems when trying to use parsers to split the output.
**To Reproduce**
Steps to reproduce the behavior:
1. mkdir -p /tmp/checkov-bug
2. cd /tmp/checkov-bug
3. docker run -it -v $PWD:/app -w /app bridgecrew/checkov:2.0.591 -o cli -o junitxml -d .
Output is:
       _               _
   ___| |__   ___  ___| | _______   __
  / __| '_ \ / _ \/ __| |/ / _ \ \ / /
 | (__| | | | __/ (__| <  (_) \ V /
  \___|_| |_|\___|\___|_|\_\___/ \_/
By bridgecrew.io | version: 2.0.590
Update available 2.0.590 -> 2.0.591
Run pip3 install -U checkov to update
\<?xml version="1.0" ?\>
\<testsuites/\>
--- OUTPUT DELIMITER ---
**Expected behavior**
The expected behaviour would be for the XML snippet to be below "--- OUTPUT DELIMITER ---", since it was the second output option passed to checkov.
**Screenshots**
**Desktop (please complete the following information):**
- OS: Docker image bridgecrew/checkov
- Checkov Version 2.0.591
**Additional context**
--- END ISSUE ---
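For orientation, here is a minimal, self-contained sketch of the control flow that produces this ordering when every report is empty. It is deliberately simplified — the set contents and printed strings are illustrative stand-ins, and the real logic lives in `RunnerRegistry.print_reports` in the file shown below:

```python
# Simplified sketch (not Checkov's actual code) of the delimiter-ordering problem.
output_formats = {"cli", "junitxml"}
reports = []  # a project with no findings contributes no non-empty reports

for report in reports:
    # This body never runs, so "cli" is never discarded from output_formats
    # and the delimiter that should follow the CLI section is never printed.
    print("cli output")
    output_formats.discard("cli")
    if output_formats:
        print("--- OUTPUT DELIMITER ---")

# The junitxml block after the loop still prints an (empty) report ...
print('<?xml version="1.0" ?>\n<testsuites/>')
output_formats.remove("junitxml")
if output_formats:                       # ... and {"cli"} is still left over,
    print("--- OUTPUT DELIMITER ---")    # so the delimiter lands after the XML.
```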
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/common/runners/runner_registry.py`
Content:
```
1 import argparse
2 import itertools
3 from json import dumps, JSONEncoder
4 from lark import Tree
5 import datetime
6 import logging
7 import os
8 from abc import abstractmethod
9 from typing import List, Union, Dict, Any, Tuple, Optional
10
11 from typing_extensions import Literal
12
13 from checkov.common.bridgecrew.integration_features.integration_feature_registry import integration_feature_registry
14 from checkov.common.output.baseline import Baseline
15 from checkov.common.output.report import Report
16 from checkov.common.runners.base_runner import BaseRunner
17 from checkov.common.util import data_structures_utils
18 from checkov.runner_filter import RunnerFilter
19 from checkov.terraform.context_parsers.registry import parser_registry
20 from checkov.terraform.runner import Runner as tf_runner
21 from checkov.terraform.parser import Parser
22 from checkov.common.parallelizer.parallel_runner import parallel_runner
23 from checkov.common.util.ext_cyclonedx_xml import ExtXml
24 from checkov.common.util.banner import tool as tool_name
25
26 CHECK_BLOCK_TYPES = frozenset(["resource", "data", "provider", "module"])
27 OUTPUT_CHOICES = ["cli", "cyclonedx", "json", "junitxml", "github_failed_only", "sarif"]
28 OUTPUT_DELIMITER = "\n--- OUTPUT DELIMITER ---\n"
29
30 class OutputEncoder(JSONEncoder):
31 def default(self, obj):
32 if isinstance(obj, set):
33 return list(obj)
34 elif isinstance(obj, Tree):
35 return str(obj)
36 elif isinstance(obj, datetime.date):
37 return str(obj)
38 return super().default(obj)
39
40 class RunnerRegistry:
41 runners: List[BaseRunner] = []
42 scan_reports: List[Report] = []
43 banner = ""
44
45 def __init__(self, banner: str, runner_filter: RunnerFilter, *runners: BaseRunner) -> None:
46 self.logger = logging.getLogger(__name__)
47 self.runner_filter = runner_filter
48 self.runners = list(runners)
49 self.banner = banner
50 self.scan_reports = []
51 self.filter_runner_framework()
52 self.tool = tool_name
53
54 @abstractmethod
55 def extract_entity_details(self, entity: Dict[str, Any]) -> Tuple[str, str, Dict[str, Any]]:
56 raise NotImplementedError()
57
58 def run(
59 self,
60 root_folder: Optional[str] = None,
61 external_checks_dir: Optional[List[str]] = None,
62 files: Optional[List[str]] = None,
63 guidelines: Optional[Dict[str, str]] = None,
64 collect_skip_comments: bool = True,
65 repo_root_for_plan_enrichment: Optional[List[Union[str, os.PathLike]]] = None,
66 ) -> List[Report]:
67 integration_feature_registry.run_pre_runner()
68 if len(self.runners) == 1:
69 reports = [self.runners[0].run(root_folder, external_checks_dir=external_checks_dir, files=files,
70 runner_filter=self.runner_filter, collect_skip_comments=collect_skip_comments)]
71 else:
72 reports = parallel_runner.run_function(
73 lambda runner: runner.run(root_folder, external_checks_dir=external_checks_dir, files=files,
74 runner_filter=self.runner_filter, collect_skip_comments=collect_skip_comments),
75 self.runners, 1)
76
77 for scan_report in reports:
78 self._handle_report(scan_report, guidelines, repo_root_for_plan_enrichment)
79 return self.scan_reports
80
81 def _handle_report(self, scan_report, guidelines, repo_root_for_plan_enrichment):
82 integration_feature_registry.run_post_runner(scan_report)
83 if guidelines:
84 RunnerRegistry.enrich_report_with_guidelines(scan_report, guidelines)
85 if repo_root_for_plan_enrichment:
86 enriched_resources = RunnerRegistry.get_enriched_resources(repo_root_for_plan_enrichment)
87 scan_report = Report("terraform_plan").enrich_plan_report(scan_report, enriched_resources)
88 scan_report = Report("terraform_plan").handle_skipped_checks(scan_report, enriched_resources)
89 self.scan_reports.append(scan_report)
90
91 def print_reports(
92 self,
93 scan_reports: List[Report],
94 config: argparse.Namespace,
95 url: Optional[str] = None,
96 created_baseline_path: Optional[str] = None,
97 baseline: Optional[Baseline] = None,
98 ) -> Literal[0, 1]:
99 output_formats = set(config.output)
100
101 if "cli" in config.output and not config.quiet:
102 print(f"{self.banner}\n")
103 exit_codes = []
104 report_jsons = []
105 sarif_reports = []
106 junit_reports = []
107 cyclonedx_reports = []
108 for report in scan_reports:
109 if not report.is_empty():
110 if "json" in config.output:
111 report_jsons.append(report.get_dict(is_quiet=config.quiet, url=url))
112 if "junitxml" in config.output:
113 junit_reports.append(report)
114 # report.print_junit_xml()
115 if "github_failed_only" in config.output:
116 report.print_failed_github_md(use_bc_ids=config.output_bc_ids)
117 if "sarif" in config.output:
118 sarif_reports.append(report)
119 if "cli" in config.output:
120 report.print_console(
121 is_quiet=config.quiet,
122 is_compact=config.compact,
123 created_baseline_path=created_baseline_path,
124 baseline=baseline,
125 use_bc_ids=config.output_bc_ids,
126 )
127 if url:
128 print("More details: {}".format(url))
129 output_formats.discard("cli")
130 if output_formats:
131 print(OUTPUT_DELIMITER)
132 if "cyclonedx" in config.output:
133 cyclonedx_reports.append(report)
134 exit_codes.append(report.get_exit_code(config.soft_fail, config.soft_fail_on, config.hard_fail_on))
135
136 if "sarif" in config.output:
137 master_report = Report("merged")
138 print(self.banner)
139 for report in sarif_reports:
140 report.print_console(
141 is_quiet=config.quiet,
142 is_compact=config.compact,
143 created_baseline_path=created_baseline_path,
144 baseline=baseline,
145 use_bc_ids=config.output_bc_ids,
146 )
147 master_report.failed_checks += report.failed_checks
148 master_report.skipped_checks += report.skipped_checks
149 if url:
150 print("More details: {}".format(url))
151 master_report.write_sarif_output(self.tool)
152 output_formats.remove("sarif")
153 if output_formats:
154 print(OUTPUT_DELIMITER)
155 if "json" in config.output:
156 if not report_jsons:
157 print(dumps(Report(None).get_summary(), indent=4))
158 elif len(report_jsons) == 1:
159 print(dumps(report_jsons[0], indent=4, cls=OutputEncoder))
160 else:
161 print(dumps(report_jsons, indent=4, cls=OutputEncoder))
162 output_formats.remove("json")
163 if output_formats:
164 print(OUTPUT_DELIMITER)
165 if "junitxml" in config.output:
166 if len(junit_reports) == 1:
167 junit_reports[0].print_junit_xml(use_bc_ids=config.output_bc_ids)
168 else:
169 master_report = Report(None)
170 for report in junit_reports:
171 master_report.skipped_checks += report.skipped_checks
172 master_report.passed_checks += report.passed_checks
173 master_report.failed_checks += report.failed_checks
174 master_report.print_junit_xml(use_bc_ids=config.output_bc_ids)
175 output_formats.remove("junitxml")
176 if output_formats:
177 print(OUTPUT_DELIMITER)
178
179 if "cyclonedx" in config.output:
180 if cyclonedx_reports:
181 # More than one Report - combine Reports first
182 report = Report(None)
183 for r in cyclonedx_reports:
184 report.passed_checks += r.passed_checks
185 report.skipped_checks += r.skipped_checks
186 report.failed_checks += r.failed_checks
187 else:
188 report = cyclonedx_reports[0]
189 cyclonedx_output = ExtXml(bom=report.get_cyclonedx_bom())
190 print(cyclonedx_output.output_as_string())
191 output_formats.remove("cyclonedx")
192 if output_formats:
193 print(OUTPUT_DELIMITER)
194
195 exit_code = 1 if 1 in exit_codes else 0
196 return exit_code
197
198 def filter_runner_framework(self) -> None:
199 if not self.runner_filter:
200 return
201 if self.runner_filter.framework is None:
202 return
203 if self.runner_filter.framework == "all":
204 return
205 self.runners = [runner for runner in self.runners if runner.check_type == self.runner_filter.framework]
206
207 def remove_runner(self, runner: BaseRunner) -> None:
208 if runner in self.runners:
209 self.runners.remove(runner)
210
211 @staticmethod
212 def enrich_report_with_guidelines(scan_report: Report, guidelines: Dict[str, str]) -> None:
213 for record in itertools.chain(scan_report.failed_checks, scan_report.passed_checks, scan_report.skipped_checks):
214 if record.check_id in guidelines:
215 record.set_guideline(guidelines[record.check_id])
216
217 @staticmethod
218 def get_enriched_resources(repo_roots: List[Union[str, os.PathLike]]) -> Dict[str, Dict[str, Any]]:
219 repo_definitions = {}
220 for repo_root in repo_roots:
221 tf_definitions = {}
222 parsing_errors = {}
223 Parser().parse_directory(
224 directory=repo_root, # assume plan file is in the repo-root
225 out_definitions=tf_definitions,
226 out_parsing_errors=parsing_errors,
227 )
228 repo_definitions[repo_root] = { 'tf_definitions': tf_definitions, 'parsing_errors': parsing_errors }
229
230 enriched_resources = {}
231 for repo_root, parse_results in repo_definitions.items():
232 for full_file_path, definition in parse_results['tf_definitions'].items():
233 definitions_context = parser_registry.enrich_definitions_context((full_file_path, definition))
234 abs_scanned_file, _ = tf_runner._strip_module_referrer(full_file_path)
235 scanned_file = os.path.relpath(abs_scanned_file, repo_root)
236 for block_type, block_value in definition.items():
237 if block_type in CHECK_BLOCK_TYPES:
238 for entity in block_value:
239 context_parser = parser_registry.context_parsers[block_type]
240 definition_path = context_parser.get_entity_context_path(entity)
241 entity_id = ".".join(definition_path)
242 entity_context_path = [block_type] + definition_path
243 entity_context = data_structures_utils.get_inner_dict(
244 definitions_context[full_file_path], entity_context_path
245 )
246 entity_lines_range = [
247 entity_context.get("start_line"),
248 entity_context.get("end_line"),
249 ]
250 entity_code_lines = entity_context.get("code_lines")
251 skipped_checks = entity_context.get("skipped_checks")
252 enriched_resources[entity_id] = {
253 "entity_code_lines": entity_code_lines,
254 "entity_lines_range": entity_lines_range,
255 "scanned_file": scanned_file,
256 "skipped_checks": skipped_checks,
257 }
258 return enriched_resources
259
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/common/runners/runner_registry.py b/checkov/common/runners/runner_registry.py
--- a/checkov/common/runners/runner_registry.py
+++ b/checkov/common/runners/runner_registry.py
@@ -101,6 +101,7 @@
if "cli" in config.output and not config.quiet:
print(f"{self.banner}\n")
exit_codes = []
+ cli_reports = []
report_jsons = []
sarif_reports = []
junit_reports = []
@@ -117,22 +118,25 @@
if "sarif" in config.output:
sarif_reports.append(report)
if "cli" in config.output:
- report.print_console(
- is_quiet=config.quiet,
- is_compact=config.compact,
- created_baseline_path=created_baseline_path,
- baseline=baseline,
- use_bc_ids=config.output_bc_ids,
- )
- if url:
- print("More details: {}".format(url))
- output_formats.discard("cli")
- if output_formats:
- print(OUTPUT_DELIMITER)
+ cli_reports.append(report)
if "cyclonedx" in config.output:
cyclonedx_reports.append(report)
exit_codes.append(report.get_exit_code(config.soft_fail, config.soft_fail_on, config.hard_fail_on))
+ if "cli" in config.output:
+ for report in cli_reports:
+ report.print_console(
+ is_quiet=config.quiet,
+ is_compact=config.compact,
+ created_baseline_path=created_baseline_path,
+ baseline=baseline,
+ use_bc_ids=config.output_bc_ids,
+ )
+ if url:
+ print("More details: {}".format(url))
+ output_formats.remove("cli")
+ if output_formats:
+ print(OUTPUT_DELIMITER)
if "sarif" in config.output:
master_report = Report("merged")
print(self.banner)
| {"golden_diff": "diff --git a/checkov/common/runners/runner_registry.py b/checkov/common/runners/runner_registry.py\n--- a/checkov/common/runners/runner_registry.py\n+++ b/checkov/common/runners/runner_registry.py\n@@ -101,6 +101,7 @@\n if \"cli\" in config.output and not config.quiet:\n print(f\"{self.banner}\\n\")\n exit_codes = []\n+ cli_reports = []\n report_jsons = []\n sarif_reports = []\n junit_reports = []\n@@ -117,22 +118,25 @@\n if \"sarif\" in config.output:\n sarif_reports.append(report)\n if \"cli\" in config.output:\n- report.print_console(\n- is_quiet=config.quiet,\n- is_compact=config.compact,\n- created_baseline_path=created_baseline_path,\n- baseline=baseline,\n- use_bc_ids=config.output_bc_ids,\n- )\n- if url:\n- print(\"More details: {}\".format(url))\n- output_formats.discard(\"cli\")\n- if output_formats:\n- print(OUTPUT_DELIMITER)\n+ cli_reports.append(report)\n if \"cyclonedx\" in config.output:\n cyclonedx_reports.append(report)\n exit_codes.append(report.get_exit_code(config.soft_fail, config.soft_fail_on, config.hard_fail_on))\n \n+ if \"cli\" in config.output:\n+ for report in cli_reports:\n+ report.print_console(\n+ is_quiet=config.quiet,\n+ is_compact=config.compact,\n+ created_baseline_path=created_baseline_path,\n+ baseline=baseline,\n+ use_bc_ids=config.output_bc_ids,\n+ )\n+ if url:\n+ print(\"More details: {}\".format(url))\n+ output_formats.remove(\"cli\")\n+ if output_formats:\n+ print(OUTPUT_DELIMITER)\n if \"sarif\" in config.output:\n master_report = Report(\"merged\")\n print(self.banner)\n", "issue": "Delimiter in wrong position when using multi-output with empty error list\n**Describe the bug**\r\nThe \"--- OUTPUT DELIMITER ---\" message appears below two outputs when running on a project without any errors to output. This causes problems when trying to use parsers to split the output.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. mkdir -p /tmp/checkov-bug\r\n2. cd /tmp/checkov-bug\r\n3. 
docker run -it -v $PWD:/app -w /app bridgecrew/checkov:2.0.591 -o cli -o junitxml -d .\r\n\r\nOutput is:\r\n _ _ \r\n ___| |__ ___ ___| | _______ __\r\n / __| '_ \\ / _ \\/ __| |/ / _ \\ \\ / /\r\n | (__| | | | __/ (__| < (_) \\ V / \r\n \\___|_| |_|\\___|\\___|_|\\_\\___/ \\_/ \r\n \r\nBy bridgecrew.io | version: 2.0.590 \r\nUpdate available 2.0.590 -> 2.0.591\r\nRun pip3 install -U checkov to update \r\n\r\n\\<?xml version=\"1.0\" ?\\>\r\n\\<testsuites/\\>\r\n\r\n--- OUTPUT DELIMITER ---\r\n\r\n\r\n**Expected behavior**\r\n\r\nThe expected behaviour would be for the XML snippet to be below \"--- OUTPUT DELIMITER ---\", since it was the second output option passed to checkov.\r\n\r\n**Screenshots**\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: Docker image bridgecrew/checkov\r\n - Checkov Version 2.0.591\r\n\r\n**Additional context**\r\n\n", "before_files": [{"content": "import argparse\nimport itertools\nfrom json import dumps, JSONEncoder\nfrom lark import Tree\nimport datetime\nimport logging\nimport os\nfrom abc import abstractmethod\nfrom typing import List, Union, Dict, Any, Tuple, Optional\n\nfrom typing_extensions import Literal\n\nfrom checkov.common.bridgecrew.integration_features.integration_feature_registry import integration_feature_registry\nfrom checkov.common.output.baseline import Baseline\nfrom checkov.common.output.report import Report\nfrom checkov.common.runners.base_runner import BaseRunner\nfrom checkov.common.util import data_structures_utils\nfrom checkov.runner_filter import RunnerFilter\nfrom checkov.terraform.context_parsers.registry import parser_registry\nfrom checkov.terraform.runner import Runner as tf_runner\nfrom checkov.terraform.parser import Parser\nfrom checkov.common.parallelizer.parallel_runner import parallel_runner\nfrom checkov.common.util.ext_cyclonedx_xml import ExtXml\nfrom checkov.common.util.banner import tool as tool_name\n\nCHECK_BLOCK_TYPES = frozenset([\"resource\", \"data\", \"provider\", \"module\"])\nOUTPUT_CHOICES = [\"cli\", \"cyclonedx\", \"json\", \"junitxml\", \"github_failed_only\", \"sarif\"]\nOUTPUT_DELIMITER = \"\\n--- OUTPUT DELIMITER ---\\n\"\n\nclass OutputEncoder(JSONEncoder):\n def default(self, obj):\n if isinstance(obj, set):\n return list(obj)\n elif isinstance(obj, Tree):\n return str(obj)\n elif isinstance(obj, datetime.date):\n return str(obj)\n return super().default(obj)\n\nclass RunnerRegistry:\n runners: List[BaseRunner] = []\n scan_reports: List[Report] = []\n banner = \"\"\n\n def __init__(self, banner: str, runner_filter: RunnerFilter, *runners: BaseRunner) -> None:\n self.logger = logging.getLogger(__name__)\n self.runner_filter = runner_filter\n self.runners = list(runners)\n self.banner = banner\n self.scan_reports = []\n self.filter_runner_framework()\n self.tool = tool_name\n\n @abstractmethod\n def extract_entity_details(self, entity: Dict[str, Any]) -> Tuple[str, str, Dict[str, Any]]:\n raise NotImplementedError()\n\n def run(\n self,\n root_folder: Optional[str] = None,\n external_checks_dir: Optional[List[str]] = None,\n files: Optional[List[str]] = None,\n guidelines: Optional[Dict[str, str]] = None,\n collect_skip_comments: bool = True,\n repo_root_for_plan_enrichment: Optional[List[Union[str, os.PathLike]]] = None,\n ) -> List[Report]:\n integration_feature_registry.run_pre_runner()\n if len(self.runners) == 1:\n reports = [self.runners[0].run(root_folder, external_checks_dir=external_checks_dir, files=files,\n runner_filter=self.runner_filter, 
collect_skip_comments=collect_skip_comments)]\n else:\n reports = parallel_runner.run_function(\n lambda runner: runner.run(root_folder, external_checks_dir=external_checks_dir, files=files,\n runner_filter=self.runner_filter, collect_skip_comments=collect_skip_comments),\n self.runners, 1)\n\n for scan_report in reports:\n self._handle_report(scan_report, guidelines, repo_root_for_plan_enrichment)\n return self.scan_reports\n\n def _handle_report(self, scan_report, guidelines, repo_root_for_plan_enrichment):\n integration_feature_registry.run_post_runner(scan_report)\n if guidelines:\n RunnerRegistry.enrich_report_with_guidelines(scan_report, guidelines)\n if repo_root_for_plan_enrichment:\n enriched_resources = RunnerRegistry.get_enriched_resources(repo_root_for_plan_enrichment)\n scan_report = Report(\"terraform_plan\").enrich_plan_report(scan_report, enriched_resources)\n scan_report = Report(\"terraform_plan\").handle_skipped_checks(scan_report, enriched_resources)\n self.scan_reports.append(scan_report)\n\n def print_reports(\n self,\n scan_reports: List[Report],\n config: argparse.Namespace,\n url: Optional[str] = None,\n created_baseline_path: Optional[str] = None,\n baseline: Optional[Baseline] = None,\n ) -> Literal[0, 1]:\n output_formats = set(config.output)\n\n if \"cli\" in config.output and not config.quiet:\n print(f\"{self.banner}\\n\")\n exit_codes = []\n report_jsons = []\n sarif_reports = []\n junit_reports = []\n cyclonedx_reports = []\n for report in scan_reports:\n if not report.is_empty():\n if \"json\" in config.output:\n report_jsons.append(report.get_dict(is_quiet=config.quiet, url=url))\n if \"junitxml\" in config.output:\n junit_reports.append(report)\n # report.print_junit_xml()\n if \"github_failed_only\" in config.output:\n report.print_failed_github_md(use_bc_ids=config.output_bc_ids)\n if \"sarif\" in config.output:\n sarif_reports.append(report)\n if \"cli\" in config.output:\n report.print_console(\n is_quiet=config.quiet,\n is_compact=config.compact,\n created_baseline_path=created_baseline_path,\n baseline=baseline,\n use_bc_ids=config.output_bc_ids,\n )\n if url:\n print(\"More details: {}\".format(url))\n output_formats.discard(\"cli\")\n if output_formats:\n print(OUTPUT_DELIMITER)\n if \"cyclonedx\" in config.output:\n cyclonedx_reports.append(report)\n exit_codes.append(report.get_exit_code(config.soft_fail, config.soft_fail_on, config.hard_fail_on))\n\n if \"sarif\" in config.output:\n master_report = Report(\"merged\")\n print(self.banner)\n for report in sarif_reports:\n report.print_console(\n is_quiet=config.quiet,\n is_compact=config.compact,\n created_baseline_path=created_baseline_path,\n baseline=baseline,\n use_bc_ids=config.output_bc_ids,\n )\n master_report.failed_checks += report.failed_checks\n master_report.skipped_checks += report.skipped_checks\n if url:\n print(\"More details: {}\".format(url))\n master_report.write_sarif_output(self.tool)\n output_formats.remove(\"sarif\")\n if output_formats:\n print(OUTPUT_DELIMITER)\n if \"json\" in config.output:\n if not report_jsons:\n print(dumps(Report(None).get_summary(), indent=4))\n elif len(report_jsons) == 1:\n print(dumps(report_jsons[0], indent=4, cls=OutputEncoder))\n else:\n print(dumps(report_jsons, indent=4, cls=OutputEncoder))\n output_formats.remove(\"json\")\n if output_formats:\n print(OUTPUT_DELIMITER)\n if \"junitxml\" in config.output:\n if len(junit_reports) == 1:\n junit_reports[0].print_junit_xml(use_bc_ids=config.output_bc_ids)\n else:\n master_report = 
Report(None)\n for report in junit_reports:\n master_report.skipped_checks += report.skipped_checks\n master_report.passed_checks += report.passed_checks\n master_report.failed_checks += report.failed_checks\n master_report.print_junit_xml(use_bc_ids=config.output_bc_ids)\n output_formats.remove(\"junitxml\")\n if output_formats:\n print(OUTPUT_DELIMITER)\n\n if \"cyclonedx\" in config.output:\n if cyclonedx_reports:\n # More than one Report - combine Reports first\n report = Report(None)\n for r in cyclonedx_reports:\n report.passed_checks += r.passed_checks\n report.skipped_checks += r.skipped_checks\n report.failed_checks += r.failed_checks\n else:\n report = cyclonedx_reports[0]\n cyclonedx_output = ExtXml(bom=report.get_cyclonedx_bom())\n print(cyclonedx_output.output_as_string())\n output_formats.remove(\"cyclonedx\")\n if output_formats:\n print(OUTPUT_DELIMITER)\n\n exit_code = 1 if 1 in exit_codes else 0\n return exit_code\n\n def filter_runner_framework(self) -> None:\n if not self.runner_filter:\n return\n if self.runner_filter.framework is None:\n return\n if self.runner_filter.framework == \"all\":\n return\n self.runners = [runner for runner in self.runners if runner.check_type == self.runner_filter.framework]\n\n def remove_runner(self, runner: BaseRunner) -> None:\n if runner in self.runners:\n self.runners.remove(runner)\n\n @staticmethod\n def enrich_report_with_guidelines(scan_report: Report, guidelines: Dict[str, str]) -> None:\n for record in itertools.chain(scan_report.failed_checks, scan_report.passed_checks, scan_report.skipped_checks):\n if record.check_id in guidelines:\n record.set_guideline(guidelines[record.check_id])\n\n @staticmethod\n def get_enriched_resources(repo_roots: List[Union[str, os.PathLike]]) -> Dict[str, Dict[str, Any]]:\n repo_definitions = {}\n for repo_root in repo_roots:\n tf_definitions = {}\n parsing_errors = {}\n Parser().parse_directory(\n directory=repo_root, # assume plan file is in the repo-root\n out_definitions=tf_definitions,\n out_parsing_errors=parsing_errors,\n )\n repo_definitions[repo_root] = { 'tf_definitions': tf_definitions, 'parsing_errors': parsing_errors }\n\n enriched_resources = {}\n for repo_root, parse_results in repo_definitions.items():\n for full_file_path, definition in parse_results['tf_definitions'].items():\n definitions_context = parser_registry.enrich_definitions_context((full_file_path, definition))\n abs_scanned_file, _ = tf_runner._strip_module_referrer(full_file_path)\n scanned_file = os.path.relpath(abs_scanned_file, repo_root)\n for block_type, block_value in definition.items():\n if block_type in CHECK_BLOCK_TYPES:\n for entity in block_value:\n context_parser = parser_registry.context_parsers[block_type]\n definition_path = context_parser.get_entity_context_path(entity)\n entity_id = \".\".join(definition_path)\n entity_context_path = [block_type] + definition_path\n entity_context = data_structures_utils.get_inner_dict(\n definitions_context[full_file_path], entity_context_path\n )\n entity_lines_range = [\n entity_context.get(\"start_line\"),\n entity_context.get(\"end_line\"),\n ]\n entity_code_lines = entity_context.get(\"code_lines\")\n skipped_checks = entity_context.get(\"skipped_checks\")\n enriched_resources[entity_id] = {\n \"entity_code_lines\": entity_code_lines,\n \"entity_lines_range\": entity_lines_range,\n \"scanned_file\": scanned_file,\n \"skipped_checks\": skipped_checks,\n }\n return enriched_resources\n", "path": "checkov/common/runners/runner_registry.py"}], "after_files": 
[{"content": "import argparse\nimport itertools\nfrom json import dumps, JSONEncoder\nfrom lark import Tree\nimport datetime\nimport logging\nimport os\nfrom abc import abstractmethod\nfrom typing import List, Union, Dict, Any, Tuple, Optional\n\nfrom typing_extensions import Literal\n\nfrom checkov.common.bridgecrew.integration_features.integration_feature_registry import integration_feature_registry\nfrom checkov.common.output.baseline import Baseline\nfrom checkov.common.output.report import Report\nfrom checkov.common.runners.base_runner import BaseRunner\nfrom checkov.common.util import data_structures_utils\nfrom checkov.runner_filter import RunnerFilter\nfrom checkov.terraform.context_parsers.registry import parser_registry\nfrom checkov.terraform.runner import Runner as tf_runner\nfrom checkov.terraform.parser import Parser\nfrom checkov.common.parallelizer.parallel_runner import parallel_runner\nfrom checkov.common.util.ext_cyclonedx_xml import ExtXml\nfrom checkov.common.util.banner import tool as tool_name\n\nCHECK_BLOCK_TYPES = frozenset([\"resource\", \"data\", \"provider\", \"module\"])\nOUTPUT_CHOICES = [\"cli\", \"cyclonedx\", \"json\", \"junitxml\", \"github_failed_only\", \"sarif\"]\nOUTPUT_DELIMITER = \"\\n--- OUTPUT DELIMITER ---\\n\"\n\nclass OutputEncoder(JSONEncoder):\n def default(self, obj):\n if isinstance(obj, set):\n return list(obj)\n elif isinstance(obj, Tree):\n return str(obj)\n elif isinstance(obj, datetime.date):\n return str(obj)\n return super().default(obj)\n\nclass RunnerRegistry:\n runners: List[BaseRunner] = []\n scan_reports: List[Report] = []\n banner = \"\"\n\n def __init__(self, banner: str, runner_filter: RunnerFilter, *runners: BaseRunner) -> None:\n self.logger = logging.getLogger(__name__)\n self.runner_filter = runner_filter\n self.runners = list(runners)\n self.banner = banner\n self.scan_reports = []\n self.filter_runner_framework()\n self.tool = tool_name\n\n @abstractmethod\n def extract_entity_details(self, entity: Dict[str, Any]) -> Tuple[str, str, Dict[str, Any]]:\n raise NotImplementedError()\n\n def run(\n self,\n root_folder: Optional[str] = None,\n external_checks_dir: Optional[List[str]] = None,\n files: Optional[List[str]] = None,\n guidelines: Optional[Dict[str, str]] = None,\n collect_skip_comments: bool = True,\n repo_root_for_plan_enrichment: Optional[List[Union[str, os.PathLike]]] = None,\n ) -> List[Report]:\n integration_feature_registry.run_pre_runner()\n if len(self.runners) == 1:\n reports = [self.runners[0].run(root_folder, external_checks_dir=external_checks_dir, files=files,\n runner_filter=self.runner_filter, collect_skip_comments=collect_skip_comments)]\n else:\n reports = parallel_runner.run_function(\n lambda runner: runner.run(root_folder, external_checks_dir=external_checks_dir, files=files,\n runner_filter=self.runner_filter, collect_skip_comments=collect_skip_comments),\n self.runners, 1)\n\n for scan_report in reports:\n self._handle_report(scan_report, guidelines, repo_root_for_plan_enrichment)\n return self.scan_reports\n\n def _handle_report(self, scan_report, guidelines, repo_root_for_plan_enrichment):\n integration_feature_registry.run_post_runner(scan_report)\n if guidelines:\n RunnerRegistry.enrich_report_with_guidelines(scan_report, guidelines)\n if repo_root_for_plan_enrichment:\n enriched_resources = RunnerRegistry.get_enriched_resources(repo_root_for_plan_enrichment)\n scan_report = Report(\"terraform_plan\").enrich_plan_report(scan_report, enriched_resources)\n scan_report = 
Report(\"terraform_plan\").handle_skipped_checks(scan_report, enriched_resources)\n self.scan_reports.append(scan_report)\n\n def print_reports(\n self,\n scan_reports: List[Report],\n config: argparse.Namespace,\n url: Optional[str] = None,\n created_baseline_path: Optional[str] = None,\n baseline: Optional[Baseline] = None,\n ) -> Literal[0, 1]:\n output_formats = set(config.output)\n\n if \"cli\" in config.output and not config.quiet:\n print(f\"{self.banner}\\n\")\n exit_codes = []\n cli_reports = []\n report_jsons = []\n sarif_reports = []\n junit_reports = []\n cyclonedx_reports = []\n for report in scan_reports:\n if not report.is_empty():\n if \"json\" in config.output:\n report_jsons.append(report.get_dict(is_quiet=config.quiet, url=url))\n if \"junitxml\" in config.output:\n junit_reports.append(report)\n # report.print_junit_xml()\n if \"github_failed_only\" in config.output:\n report.print_failed_github_md(use_bc_ids=config.output_bc_ids)\n if \"sarif\" in config.output:\n sarif_reports.append(report)\n if \"cli\" in config.output:\n cli_reports.append(report)\n if \"cyclonedx\" in config.output:\n cyclonedx_reports.append(report)\n exit_codes.append(report.get_exit_code(config.soft_fail, config.soft_fail_on, config.hard_fail_on))\n\n if \"cli\" in config.output:\n for report in cli_reports:\n report.print_console(\n is_quiet=config.quiet,\n is_compact=config.compact,\n created_baseline_path=created_baseline_path,\n baseline=baseline,\n use_bc_ids=config.output_bc_ids,\n )\n if url:\n print(\"More details: {}\".format(url))\n output_formats.remove(\"cli\")\n if output_formats:\n print(OUTPUT_DELIMITER)\n if \"sarif\" in config.output:\n master_report = Report(\"merged\")\n print(self.banner)\n for report in sarif_reports:\n report.print_console(\n is_quiet=config.quiet,\n is_compact=config.compact,\n created_baseline_path=created_baseline_path,\n baseline=baseline,\n use_bc_ids=config.output_bc_ids,\n )\n master_report.failed_checks += report.failed_checks\n master_report.skipped_checks += report.skipped_checks\n if url:\n print(\"More details: {}\".format(url))\n master_report.write_sarif_output(self.tool)\n output_formats.remove(\"sarif\")\n if output_formats:\n print(OUTPUT_DELIMITER)\n if \"json\" in config.output:\n if not report_jsons:\n print(dumps(Report(None).get_summary(), indent=4))\n elif len(report_jsons) == 1:\n print(dumps(report_jsons[0], indent=4, cls=OutputEncoder))\n else:\n print(dumps(report_jsons, indent=4, cls=OutputEncoder))\n output_formats.remove(\"json\")\n if output_formats:\n print(OUTPUT_DELIMITER)\n if \"junitxml\" in config.output:\n if len(junit_reports) == 1:\n junit_reports[0].print_junit_xml(use_bc_ids=config.output_bc_ids)\n else:\n master_report = Report(None)\n for report in junit_reports:\n master_report.skipped_checks += report.skipped_checks\n master_report.passed_checks += report.passed_checks\n master_report.failed_checks += report.failed_checks\n master_report.print_junit_xml(use_bc_ids=config.output_bc_ids)\n output_formats.remove(\"junitxml\")\n if output_formats:\n print(OUTPUT_DELIMITER)\n\n if \"cyclonedx\" in config.output:\n if cyclonedx_reports:\n # More than one Report - combine Reports first\n report = Report(None)\n for r in cyclonedx_reports:\n report.passed_checks += r.passed_checks\n report.skipped_checks += r.skipped_checks\n report.failed_checks += r.failed_checks\n else:\n report = cyclonedx_reports[0]\n cyclonedx_output = ExtXml(bom=report.get_cyclonedx_bom())\n print(cyclonedx_output.output_as_string())\n 
output_formats.remove(\"cyclonedx\")\n if output_formats:\n print(OUTPUT_DELIMITER)\n\n exit_code = 1 if 1 in exit_codes else 0\n return exit_code\n\n def filter_runner_framework(self) -> None:\n if not self.runner_filter:\n return\n if self.runner_filter.framework is None:\n return\n if self.runner_filter.framework == \"all\":\n return\n self.runners = [runner for runner in self.runners if runner.check_type == self.runner_filter.framework]\n\n def remove_runner(self, runner: BaseRunner) -> None:\n if runner in self.runners:\n self.runners.remove(runner)\n\n @staticmethod\n def enrich_report_with_guidelines(scan_report: Report, guidelines: Dict[str, str]) -> None:\n for record in itertools.chain(scan_report.failed_checks, scan_report.passed_checks, scan_report.skipped_checks):\n if record.check_id in guidelines:\n record.set_guideline(guidelines[record.check_id])\n\n @staticmethod\n def get_enriched_resources(repo_roots: List[Union[str, os.PathLike]]) -> Dict[str, Dict[str, Any]]:\n repo_definitions = {}\n for repo_root in repo_roots:\n tf_definitions = {}\n parsing_errors = {}\n Parser().parse_directory(\n directory=repo_root, # assume plan file is in the repo-root\n out_definitions=tf_definitions,\n out_parsing_errors=parsing_errors,\n )\n repo_definitions[repo_root] = { 'tf_definitions': tf_definitions, 'parsing_errors': parsing_errors }\n\n enriched_resources = {}\n for repo_root, parse_results in repo_definitions.items():\n for full_file_path, definition in parse_results['tf_definitions'].items():\n definitions_context = parser_registry.enrich_definitions_context((full_file_path, definition))\n abs_scanned_file, _ = tf_runner._strip_module_referrer(full_file_path)\n scanned_file = os.path.relpath(abs_scanned_file, repo_root)\n for block_type, block_value in definition.items():\n if block_type in CHECK_BLOCK_TYPES:\n for entity in block_value:\n context_parser = parser_registry.context_parsers[block_type]\n definition_path = context_parser.get_entity_context_path(entity)\n entity_id = \".\".join(definition_path)\n entity_context_path = [block_type] + definition_path\n entity_context = data_structures_utils.get_inner_dict(\n definitions_context[full_file_path], entity_context_path\n )\n entity_lines_range = [\n entity_context.get(\"start_line\"),\n entity_context.get(\"end_line\"),\n ]\n entity_code_lines = entity_context.get(\"code_lines\")\n skipped_checks = entity_context.get(\"skipped_checks\")\n enriched_resources[entity_id] = {\n \"entity_code_lines\": entity_code_lines,\n \"entity_lines_range\": entity_lines_range,\n \"scanned_file\": scanned_file,\n \"skipped_checks\": skipped_checks,\n }\n return enriched_resources\n", "path": "checkov/common/runners/runner_registry.py"}]} | 3,585 | 429 |
gh_patches_debug_37315 | rasdani/github-patches | git_diff | nltk__nltk-900 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
HunposTagger error in Python3
Bug report from Tülin Erçelebi Ayyıldız:
> Dear hunpos authors,
>
> I am currently trying to use hunpos tagger to tag a text file, however I get an error at the stage of hunpos.py run.
>
> My configuration:
> OS : Windows 7-64bit
> python 3.4.1
> nltk 3.0.1
>
> All "english.model", "hunpos-tag.exe" and "hunpos-train.exe" are located in "C:/Users" folder. My python code is as follows:
> ---
``` python
import nltk
from nltk.tag.hunpos import HunposTagger
from nltk.tokenize import word_tokenize
corpus = "so how do i hunpos tag my ntuen ? i can't get the following code to work."
ht = HunposTagger('C:/Users/english.model','C:/Users/hunpos-tag.exe')
x=word_tokenize(corpus)
ht.tag(x)
```
> ---
>
> When I run this module I get the following error:
```
Traceback (most recent call last):
File "C:\Users\Tülin\Desktop\hunpos_deneme.py", line 12, in <module>
ht.tag(x)
File "C:\Python34\lib\site-packages\nltk\tag\hunpos.py", line 109, in tag
self._hunpos.stdin.write(token + "\n")
TypeError: can't concat bytes to str
```
> I tried several things, but I could not successfully eliminate the problem and get a correct result.
--- END ISSUE ---
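The traceback boils down to mixing `bytes` and `str` around the subprocess pipe in Python 3. A standalone illustration (not NLTK code) of the failure, and of the kind of change that avoids it — which is what the patch below does by writing `b"\n"` and splitting on `b"\t"`:

```python
# Standalone Python 3 illustration; the token value is made up for the example.
token = "swallow".encode("ISO-8859-1")  # HunposTagger.tag() encodes tokens to bytes

try:
    token + "\n"                 # bytes + str raises TypeError, as in the traceback
except TypeError as err:
    print(err)

print(token + b"\n")             # bytes + bytes is fine; the binary stdin pipe wants bytes
```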
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nltk/tag/hunpos.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Natural Language Toolkit: Interface to the HunPos POS-tagger
3 #
4 # Copyright (C) 2001-2015 NLTK Project
5 # Author: Peter Ljunglöf <[email protected]>
6 # David Nemeskey <[email protected]> (modifications)
7 # Attila Zseder <[email protected]> (modifications)
8 # URL: <http://nltk.org/>
9 # For license information, see LICENSE.TXT
10
11 """
12 A module for interfacing with the HunPos open-source POS-tagger.
13 """
14
15 import os
16 from subprocess import Popen, PIPE
17
18 from nltk.internals import find_binary, find_file
19 from nltk.tag.api import TaggerI
20 from nltk import compat
21
22 _hunpos_url = 'http://code.google.com/p/hunpos/'
23
24 _hunpos_charset = 'ISO-8859-1'
25 """The default encoding used by hunpos: ISO-8859-1."""
26
27 class HunposTagger(TaggerI):
28 """
29 A class for pos tagging with HunPos. The input is the paths to:
30 - a model trained on training data
31 - (optionally) the path to the hunpos-tag binary
32 - (optionally) the encoding of the training data (default: ISO-8859-1)
33
34 Example:
35
36 >>> from nltk.tag.hunpos import HunposTagger
37 >>> ht = HunposTagger('en_wsj.model')
38 >>> ht.tag('What is the airspeed of an unladen swallow ?'.split())
39 [('What', 'WP'), ('is', 'VBZ'), ('the', 'DT'), ('airspeed', 'NN'), ('of', 'IN'), ('an', 'DT'), ('unladen', 'NN'), ('swallow', 'VB'), ('?', '.')]
40 >>> ht.close()
41
42 This class communicates with the hunpos-tag binary via pipes. When the
43 tagger object is no longer needed, the close() method should be called to
44 free system resources. The class supports the context manager interface; if
45 used in a with statement, the close() method is invoked automatically:
46
47 >>> with HunposTagger('en_wsj.model') as ht:
48 ... ht.tag('What is the airspeed of an unladen swallow ?'.split())
49 ...
50 [('What', 'WP'), ('is', 'VBZ'), ('the', 'DT'), ('airspeed', 'NN'), ('of', 'IN'), ('an', 'DT'), ('unladen', 'NN'), ('swallow', 'VB'), ('?', '.')]
51 """
52
53 def __init__(self, path_to_model, path_to_bin=None,
54 encoding=_hunpos_charset, verbose=False):
55 """
56 Starts the hunpos-tag executable and establishes a connection with it.
57
58 :param path_to_model: The model file.
59 :param path_to_bin: The hunpos-tag binary.
60 :param encoding: The encoding used by the model. Unicode tokens
61 passed to the tag() and tag_sents() methods are converted to
62 this charset when they are sent to hunpos-tag.
63 The default is ISO-8859-1 (Latin-1).
64
65 This parameter is ignored for str tokens, which are sent as-is.
66 The caller must ensure that tokens are encoded in the right charset.
67 """
68 self._closed = True
69 hunpos_paths = ['.', '/usr/bin', '/usr/local/bin', '/opt/local/bin',
70 '/Applications/bin', '~/bin', '~/Applications/bin']
71 hunpos_paths = list(map(os.path.expanduser, hunpos_paths))
72
73 self._hunpos_bin = find_binary(
74 'hunpos-tag', path_to_bin,
75 env_vars=('HUNPOS_TAGGER',),
76 searchpath=hunpos_paths,
77 url=_hunpos_url,
78 verbose=verbose)
79
80 self._hunpos_model = find_file(path_to_model,
81 env_vars=('HUNPOS_TAGGER',), verbose=verbose)
82 self._encoding = encoding
83 self._hunpos = Popen([self._hunpos_bin, self._hunpos_model],
84 shell=False, stdin=PIPE, stdout=PIPE, stderr=PIPE)
85 self._closed = False
86
87 def __del__(self):
88 self.close()
89
90 def close(self):
91 """Closes the pipe to the hunpos executable."""
92 if not self._closed:
93 self._hunpos.communicate()
94 self._closed = True
95
96 def __enter__(self):
97 return self
98 def __exit__(self, exc_type, exc_value, traceback):
99 self.close()
100
101 def tag(self, tokens):
102 """Tags a single sentence: a list of words.
103 The tokens should not contain any newline characters.
104 """
105 for token in tokens:
106 assert "\n" not in token, "Tokens should not contain newlines"
107 if isinstance(token, compat.text_type):
108 token = token.encode(self._encoding)
109 self._hunpos.stdin.write(token + "\n")
110 # We write a final empty line to tell hunpos that the sentence is finished:
111 self._hunpos.stdin.write("\n")
112 self._hunpos.stdin.flush()
113
114 tagged_tokens = []
115 for token in tokens:
116 tagged = self._hunpos.stdout.readline().strip().split("\t")
117 tag = (tagged[1] if len(tagged) > 1 else None)
118 tagged_tokens.append((token, tag))
119 # We have to read (and dismiss) the final empty line:
120 self._hunpos.stdout.readline()
121
122 return tagged_tokens
123
124 # skip doctests if Hunpos tagger is not installed
125 def setup_module(module):
126 from nose import SkipTest
127 try:
128 HunposTagger('en_wsj.model')
129 except LookupError:
130 raise SkipTest("HunposTagger is not available")
131
132 if __name__ == "__main__":
133 import doctest
134 doctest.testmod(optionflags=doctest.NORMALIZE_WHITESPACE)
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nltk/tag/hunpos.py b/nltk/tag/hunpos.py
--- a/nltk/tag/hunpos.py
+++ b/nltk/tag/hunpos.py
@@ -3,8 +3,8 @@
#
# Copyright (C) 2001-2015 NLTK Project
# Author: Peter Ljunglöf <[email protected]>
-# David Nemeskey <[email protected]> (modifications)
-# Attila Zseder <[email protected]> (modifications)
+# Dávid Márk Nemeskey <[email protected]> (modifications)
+# Attila Zséder <[email protected]> (modifications)
# URL: <http://nltk.org/>
# For license information, see LICENSE.TXT
@@ -71,14 +71,15 @@
hunpos_paths = list(map(os.path.expanduser, hunpos_paths))
self._hunpos_bin = find_binary(
- 'hunpos-tag', path_to_bin,
- env_vars=('HUNPOS_TAGGER',),
- searchpath=hunpos_paths,
- url=_hunpos_url,
- verbose=verbose)
-
- self._hunpos_model = find_file(path_to_model,
- env_vars=('HUNPOS_TAGGER',), verbose=verbose)
+ 'hunpos-tag', path_to_bin,
+ env_vars=('HUNPOS_TAGGER',),
+ searchpath=hunpos_paths,
+ url=_hunpos_url,
+ verbose=verbose
+ )
+
+ self._hunpos_model = find_file(
+ path_to_model, env_vars=('HUNPOS_TAGGER',), verbose=verbose)
self._encoding = encoding
self._hunpos = Popen([self._hunpos_bin, self._hunpos_model],
shell=False, stdin=PIPE, stdout=PIPE, stderr=PIPE)
@@ -106,14 +107,14 @@
assert "\n" not in token, "Tokens should not contain newlines"
if isinstance(token, compat.text_type):
token = token.encode(self._encoding)
- self._hunpos.stdin.write(token + "\n")
+ self._hunpos.stdin.write(token + b"\n")
# We write a final empty line to tell hunpos that the sentence is finished:
- self._hunpos.stdin.write("\n")
+ self._hunpos.stdin.write(b"\n")
self._hunpos.stdin.flush()
tagged_tokens = []
for token in tokens:
- tagged = self._hunpos.stdout.readline().strip().split("\t")
+ tagged = self._hunpos.stdout.readline().strip().split(b"\t")
tag = (tagged[1] if len(tagged) > 1 else None)
tagged_tokens.append((token, tag))
# We have to read (and dismiss) the final empty line:
| {"golden_diff": "diff --git a/nltk/tag/hunpos.py b/nltk/tag/hunpos.py\n--- a/nltk/tag/hunpos.py\n+++ b/nltk/tag/hunpos.py\n@@ -3,8 +3,8 @@\n #\n # Copyright (C) 2001-2015 NLTK Project\n # Author: Peter Ljungl\u00f6f <[email protected]>\n-# David Nemeskey <[email protected]> (modifications)\n-# Attila Zseder <[email protected]> (modifications)\n+# D\u00e1vid M\u00e1rk Nemeskey <[email protected]> (modifications)\n+# Attila Zs\u00e9der <[email protected]> (modifications)\n # URL: <http://nltk.org/>\n # For license information, see LICENSE.TXT\n \n@@ -71,14 +71,15 @@\n hunpos_paths = list(map(os.path.expanduser, hunpos_paths))\n \n self._hunpos_bin = find_binary(\n- 'hunpos-tag', path_to_bin,\n- env_vars=('HUNPOS_TAGGER',),\n- searchpath=hunpos_paths,\n- url=_hunpos_url,\n- verbose=verbose)\n-\n- self._hunpos_model = find_file(path_to_model,\n- env_vars=('HUNPOS_TAGGER',), verbose=verbose)\n+ 'hunpos-tag', path_to_bin,\n+ env_vars=('HUNPOS_TAGGER',),\n+ searchpath=hunpos_paths,\n+ url=_hunpos_url,\n+ verbose=verbose\n+ )\n+\n+ self._hunpos_model = find_file(\n+ path_to_model, env_vars=('HUNPOS_TAGGER',), verbose=verbose)\n self._encoding = encoding\n self._hunpos = Popen([self._hunpos_bin, self._hunpos_model],\n shell=False, stdin=PIPE, stdout=PIPE, stderr=PIPE)\n@@ -106,14 +107,14 @@\n assert \"\\n\" not in token, \"Tokens should not contain newlines\"\n if isinstance(token, compat.text_type):\n token = token.encode(self._encoding)\n- self._hunpos.stdin.write(token + \"\\n\")\n+ self._hunpos.stdin.write(token + b\"\\n\")\n # We write a final empty line to tell hunpos that the sentence is finished:\n- self._hunpos.stdin.write(\"\\n\")\n+ self._hunpos.stdin.write(b\"\\n\")\n self._hunpos.stdin.flush()\n \n tagged_tokens = []\n for token in tokens:\n- tagged = self._hunpos.stdout.readline().strip().split(\"\\t\")\n+ tagged = self._hunpos.stdout.readline().strip().split(b\"\\t\")\n tag = (tagged[1] if len(tagged) > 1 else None)\n tagged_tokens.append((token, tag))\n # We have to read (and dismiss) the final empty line:\n", "issue": "HunposTagger error in Python3\nBug report from T\u00fclin Er\u00e7elebi Ayy\u0131ld\u0131z:\n\n> Dear hunpos authors,\n> \n> I am currently trying to use hunpos tagger to tag a text file, however I get an error at the stage of hunpos.py run.\n> \n> My configuration:\n> OS : Windows 7-64bit\n> python 3.4.1\n> nltk 3.0.1\n> \n> All \"english.model\", \"hunpos-tag.exe\" and \"hunpos-train.exe\" are located in \"C:/Users\" folder. My python code is as follows:\n> ---\n\n``` python\nimport nltk \nfrom nltk.tag.hunpos import HunposTagger\nfrom nltk.tokenize import word_tokenize\ncorpus = \"so how do i hunpos tag my ntuen ? 
i can't get the following code to work.\"\n\nht = HunposTagger('C:/Users/english.model','C:/Users/hunpos-tag.exe')\nx=word_tokenize(corpus)\nht.tag(x)\n```\n\n> ---\n> \n> When I run this module I get the following error:\n\n```\nTraceback (most recent call last):\n File \"C:\\Users\\T\u00fclin\\Desktop\\hunpos_deneme.py\", line 12, in <module>\n ht.tag(x)\n File \"C:\\Python34\\lib\\site-packages\\nltk\\tag\\hunpos.py\", line 109, in tag\n self._hunpos.stdin.write(token + \"\\n\")\nTypeError: can't concat bytes to str\n```\n\n> I tried several things, but I could not successfully eliminate the problem and get a correct result.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Natural Language Toolkit: Interface to the HunPos POS-tagger\n#\n# Copyright (C) 2001-2015 NLTK Project\n# Author: Peter Ljungl\u00f6f <[email protected]>\n# David Nemeskey <[email protected]> (modifications)\n# Attila Zseder <[email protected]> (modifications)\n# URL: <http://nltk.org/>\n# For license information, see LICENSE.TXT\n\n\"\"\"\nA module for interfacing with the HunPos open-source POS-tagger.\n\"\"\"\n\nimport os\nfrom subprocess import Popen, PIPE\n\nfrom nltk.internals import find_binary, find_file\nfrom nltk.tag.api import TaggerI\nfrom nltk import compat\n\n_hunpos_url = 'http://code.google.com/p/hunpos/'\n\n_hunpos_charset = 'ISO-8859-1'\n\"\"\"The default encoding used by hunpos: ISO-8859-1.\"\"\"\n\nclass HunposTagger(TaggerI):\n \"\"\"\n A class for pos tagging with HunPos. The input is the paths to:\n - a model trained on training data\n - (optionally) the path to the hunpos-tag binary\n - (optionally) the encoding of the training data (default: ISO-8859-1)\n\n Example:\n\n >>> from nltk.tag.hunpos import HunposTagger\n >>> ht = HunposTagger('en_wsj.model')\n >>> ht.tag('What is the airspeed of an unladen swallow ?'.split())\n [('What', 'WP'), ('is', 'VBZ'), ('the', 'DT'), ('airspeed', 'NN'), ('of', 'IN'), ('an', 'DT'), ('unladen', 'NN'), ('swallow', 'VB'), ('?', '.')]\n >>> ht.close()\n\n This class communicates with the hunpos-tag binary via pipes. When the\n tagger object is no longer needed, the close() method should be called to\n free system resources. The class supports the context manager interface; if\n used in a with statement, the close() method is invoked automatically:\n\n >>> with HunposTagger('en_wsj.model') as ht:\n ... ht.tag('What is the airspeed of an unladen swallow ?'.split())\n ...\n [('What', 'WP'), ('is', 'VBZ'), ('the', 'DT'), ('airspeed', 'NN'), ('of', 'IN'), ('an', 'DT'), ('unladen', 'NN'), ('swallow', 'VB'), ('?', '.')]\n \"\"\"\n\n def __init__(self, path_to_model, path_to_bin=None,\n encoding=_hunpos_charset, verbose=False):\n \"\"\"\n Starts the hunpos-tag executable and establishes a connection with it.\n\n :param path_to_model: The model file.\n :param path_to_bin: The hunpos-tag binary.\n :param encoding: The encoding used by the model. 
Unicode tokens\n passed to the tag() and tag_sents() methods are converted to\n this charset when they are sent to hunpos-tag.\n The default is ISO-8859-1 (Latin-1).\n\n This parameter is ignored for str tokens, which are sent as-is.\n The caller must ensure that tokens are encoded in the right charset.\n \"\"\"\n self._closed = True\n hunpos_paths = ['.', '/usr/bin', '/usr/local/bin', '/opt/local/bin',\n '/Applications/bin', '~/bin', '~/Applications/bin']\n hunpos_paths = list(map(os.path.expanduser, hunpos_paths))\n\n self._hunpos_bin = find_binary(\n 'hunpos-tag', path_to_bin,\n env_vars=('HUNPOS_TAGGER',),\n searchpath=hunpos_paths,\n url=_hunpos_url,\n verbose=verbose)\n\n self._hunpos_model = find_file(path_to_model,\n env_vars=('HUNPOS_TAGGER',), verbose=verbose)\n self._encoding = encoding\n self._hunpos = Popen([self._hunpos_bin, self._hunpos_model],\n shell=False, stdin=PIPE, stdout=PIPE, stderr=PIPE)\n self._closed = False\n\n def __del__(self):\n self.close()\n\n def close(self):\n \"\"\"Closes the pipe to the hunpos executable.\"\"\"\n if not self._closed:\n self._hunpos.communicate()\n self._closed = True\n\n def __enter__(self):\n return self\n def __exit__(self, exc_type, exc_value, traceback):\n self.close()\n\n def tag(self, tokens):\n \"\"\"Tags a single sentence: a list of words.\n The tokens should not contain any newline characters.\n \"\"\"\n for token in tokens:\n assert \"\\n\" not in token, \"Tokens should not contain newlines\"\n if isinstance(token, compat.text_type):\n token = token.encode(self._encoding)\n self._hunpos.stdin.write(token + \"\\n\")\n # We write a final empty line to tell hunpos that the sentence is finished:\n self._hunpos.stdin.write(\"\\n\")\n self._hunpos.stdin.flush()\n\n tagged_tokens = []\n for token in tokens:\n tagged = self._hunpos.stdout.readline().strip().split(\"\\t\")\n tag = (tagged[1] if len(tagged) > 1 else None)\n tagged_tokens.append((token, tag))\n # We have to read (and dismiss) the final empty line:\n self._hunpos.stdout.readline()\n\n return tagged_tokens\n\n# skip doctests if Hunpos tagger is not installed\ndef setup_module(module):\n from nose import SkipTest\n try:\n HunposTagger('en_wsj.model')\n except LookupError:\n raise SkipTest(\"HunposTagger is not available\")\n\nif __name__ == \"__main__\":\n import doctest\n doctest.testmod(optionflags=doctest.NORMALIZE_WHITESPACE)\n", "path": "nltk/tag/hunpos.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Natural Language Toolkit: Interface to the HunPos POS-tagger\n#\n# Copyright (C) 2001-2015 NLTK Project\n# Author: Peter Ljungl\u00f6f <[email protected]>\n# D\u00e1vid M\u00e1rk Nemeskey <[email protected]> (modifications)\n# Attila Zs\u00e9der <[email protected]> (modifications)\n# URL: <http://nltk.org/>\n# For license information, see LICENSE.TXT\n\n\"\"\"\nA module for interfacing with the HunPos open-source POS-tagger.\n\"\"\"\n\nimport os\nfrom subprocess import Popen, PIPE\n\nfrom nltk.internals import find_binary, find_file\nfrom nltk.tag.api import TaggerI\nfrom nltk import compat\n\n_hunpos_url = 'http://code.google.com/p/hunpos/'\n\n_hunpos_charset = 'ISO-8859-1'\n\"\"\"The default encoding used by hunpos: ISO-8859-1.\"\"\"\n\nclass HunposTagger(TaggerI):\n \"\"\"\n A class for pos tagging with HunPos. 
The input is the paths to:\n - a model trained on training data\n - (optionally) the path to the hunpos-tag binary\n - (optionally) the encoding of the training data (default: ISO-8859-1)\n\n Example:\n\n >>> from nltk.tag.hunpos import HunposTagger\n >>> ht = HunposTagger('en_wsj.model')\n >>> ht.tag('What is the airspeed of an unladen swallow ?'.split())\n [('What', 'WP'), ('is', 'VBZ'), ('the', 'DT'), ('airspeed', 'NN'), ('of', 'IN'), ('an', 'DT'), ('unladen', 'NN'), ('swallow', 'VB'), ('?', '.')]\n >>> ht.close()\n\n This class communicates with the hunpos-tag binary via pipes. When the\n tagger object is no longer needed, the close() method should be called to\n free system resources. The class supports the context manager interface; if\n used in a with statement, the close() method is invoked automatically:\n\n >>> with HunposTagger('en_wsj.model') as ht:\n ... ht.tag('What is the airspeed of an unladen swallow ?'.split())\n ...\n [('What', 'WP'), ('is', 'VBZ'), ('the', 'DT'), ('airspeed', 'NN'), ('of', 'IN'), ('an', 'DT'), ('unladen', 'NN'), ('swallow', 'VB'), ('?', '.')]\n \"\"\"\n\n def __init__(self, path_to_model, path_to_bin=None,\n encoding=_hunpos_charset, verbose=False):\n \"\"\"\n Starts the hunpos-tag executable and establishes a connection with it.\n\n :param path_to_model: The model file.\n :param path_to_bin: The hunpos-tag binary.\n :param encoding: The encoding used by the model. Unicode tokens\n passed to the tag() and tag_sents() methods are converted to\n this charset when they are sent to hunpos-tag.\n The default is ISO-8859-1 (Latin-1).\n\n This parameter is ignored for str tokens, which are sent as-is.\n The caller must ensure that tokens are encoded in the right charset.\n \"\"\"\n self._closed = True\n hunpos_paths = ['.', '/usr/bin', '/usr/local/bin', '/opt/local/bin',\n '/Applications/bin', '~/bin', '~/Applications/bin']\n hunpos_paths = list(map(os.path.expanduser, hunpos_paths))\n\n self._hunpos_bin = find_binary(\n 'hunpos-tag', path_to_bin,\n env_vars=('HUNPOS_TAGGER',),\n searchpath=hunpos_paths,\n url=_hunpos_url,\n verbose=verbose\n )\n\n self._hunpos_model = find_file(\n path_to_model, env_vars=('HUNPOS_TAGGER',), verbose=verbose)\n self._encoding = encoding\n self._hunpos = Popen([self._hunpos_bin, self._hunpos_model],\n shell=False, stdin=PIPE, stdout=PIPE, stderr=PIPE)\n self._closed = False\n\n def __del__(self):\n self.close()\n\n def close(self):\n \"\"\"Closes the pipe to the hunpos executable.\"\"\"\n if not self._closed:\n self._hunpos.communicate()\n self._closed = True\n\n def __enter__(self):\n return self\n def __exit__(self, exc_type, exc_value, traceback):\n self.close()\n\n def tag(self, tokens):\n \"\"\"Tags a single sentence: a list of words.\n The tokens should not contain any newline characters.\n \"\"\"\n for token in tokens:\n assert \"\\n\" not in token, \"Tokens should not contain newlines\"\n if isinstance(token, compat.text_type):\n token = token.encode(self._encoding)\n self._hunpos.stdin.write(token + b\"\\n\")\n # We write a final empty line to tell hunpos that the sentence is finished:\n self._hunpos.stdin.write(b\"\\n\")\n self._hunpos.stdin.flush()\n\n tagged_tokens = []\n for token in tokens:\n tagged = self._hunpos.stdout.readline().strip().split(b\"\\t\")\n tag = (tagged[1] if len(tagged) > 1 else None)\n tagged_tokens.append((token, tag))\n # We have to read (and dismiss) the final empty line:\n self._hunpos.stdout.readline()\n\n return tagged_tokens\n\n# skip doctests if Hunpos tagger is not installed\ndef 
setup_module(module):\n from nose import SkipTest\n try:\n HunposTagger('en_wsj.model')\n except LookupError:\n raise SkipTest(\"HunposTagger is not available\")\n\nif __name__ == \"__main__\":\n import doctest\n doctest.testmod(optionflags=doctest.NORMALIZE_WHITESPACE)\n", "path": "nltk/tag/hunpos.py"}]} | 2,246 | 670 |
gh_patches_debug_34326 | rasdani/github-patches | git_diff | encode__starlette-1350 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
StaticFiles support for directories other than "statics"
### Checklist
<!-- Please make sure you check all these items before submitting your feature request. -->
- [/] There are no similar issues or pull requests for this yet.
- [X - tried to get feedback but no replies] I discussed this idea on the [community chat](https://gitter.im/encode/community) and feedback is positive.
### Is your feature related to a problem? Please describe.
I want to be able to serve static files from arbitrary locations within a Python package, but I can't, because `StaticFiles` currently looks in a hard-coded `statics` folder.
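
For context, a minimal sketch of the current behaviour; `bootstrap4` is used here only as an example of a pip-installable package that happens to ship a `statics/` directory (as in the Starlette docs):

```python
from starlette.staticfiles import StaticFiles

# Today each entry in `packages` is a bare package name, and files are always
# served from that package's hard-coded "statics" subdirectory.
static = StaticFiles(packages=["bootstrap4"])  # only ever serves bootstrap4/statics
```
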
## Describe the solution you would like.
I'd like to be able to pass, as a parameter, the path within the Python package in which to look for the static files.
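
One possible shape for that parameter — shown purely as a sketch, with `mypackage`/`assets` as placeholder names — is to accept a `(package, directory)` tuple alongside the existing plain package name; this is also the form the patch further down implements:

```python
from starlette.staticfiles import StaticFiles

# Hypothetical: look in mypackage/assets instead of mypackage/statics
static = StaticFiles(packages=[("mypackage", "assets")])
```
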
## Describe alternatives you considered
<!-- Please describe any alternative solutions or features you've considered to solve
your problem and why they wouldn't solve it. -->
I've considered changing the location of my packaged files to `statics`, but this would have knock-on effects across other systems, and `statics` itself is non-standard as far as I can tell.
## Additional context
<!-- Provide any additional context, screenshots, tracebacks, etc. about the feature here. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlette/staticfiles.py`
Content:
```
1 import importlib.util
2 import os
3 import stat
4 import typing
5 from email.utils import parsedate
6
7 import anyio
8
9 from starlette.datastructures import URL, Headers
10 from starlette.exceptions import HTTPException
11 from starlette.responses import FileResponse, RedirectResponse, Response
12 from starlette.types import Receive, Scope, Send
13
14 PathLike = typing.Union[str, "os.PathLike[str]"]
15
16
17 class NotModifiedResponse(Response):
18 NOT_MODIFIED_HEADERS = (
19 "cache-control",
20 "content-location",
21 "date",
22 "etag",
23 "expires",
24 "vary",
25 )
26
27 def __init__(self, headers: Headers):
28 super().__init__(
29 status_code=304,
30 headers={
31 name: value
32 for name, value in headers.items()
33 if name in self.NOT_MODIFIED_HEADERS
34 },
35 )
36
37
38 class StaticFiles:
39 def __init__(
40 self,
41 *,
42 directory: PathLike = None,
43 packages: typing.List[str] = None,
44 html: bool = False,
45 check_dir: bool = True,
46 ) -> None:
47 self.directory = directory
48 self.packages = packages
49 self.all_directories = self.get_directories(directory, packages)
50 self.html = html
51 self.config_checked = False
52 if check_dir and directory is not None and not os.path.isdir(directory):
53 raise RuntimeError(f"Directory '{directory}' does not exist")
54
55 def get_directories(
56 self, directory: PathLike = None, packages: typing.List[str] = None
57 ) -> typing.List[PathLike]:
58 """
59 Given `directory` and `packages` arguments, return a list of all the
60 directories that should be used for serving static files from.
61 """
62 directories = []
63 if directory is not None:
64 directories.append(directory)
65
66 for package in packages or []:
67 spec = importlib.util.find_spec(package)
68 assert spec is not None, f"Package {package!r} could not be found."
69 assert (
70 spec.origin is not None
71 ), f"Directory 'statics' in package {package!r} could not be found."
72 package_directory = os.path.normpath(
73 os.path.join(spec.origin, "..", "statics")
74 )
75 assert os.path.isdir(
76 package_directory
77 ), f"Directory 'statics' in package {package!r} could not be found."
78 directories.append(package_directory)
79
80 return directories
81
82 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
83 """
84 The ASGI entry point.
85 """
86 assert scope["type"] == "http"
87
88 if not self.config_checked:
89 await self.check_config()
90 self.config_checked = True
91
92 path = self.get_path(scope)
93 response = await self.get_response(path, scope)
94 await response(scope, receive, send)
95
96 def get_path(self, scope: Scope) -> str:
97 """
98 Given the ASGI scope, return the `path` string to serve up,
99 with OS specific path seperators, and any '..', '.' components removed.
100 """
101 return os.path.normpath(os.path.join(*scope["path"].split("/")))
102
103 async def get_response(self, path: str, scope: Scope) -> Response:
104 """
105 Returns an HTTP response, given the incoming path, method and request headers.
106 """
107 if scope["method"] not in ("GET", "HEAD"):
108 raise HTTPException(status_code=405)
109
110 try:
111 full_path, stat_result = await anyio.to_thread.run_sync(
112 self.lookup_path, path
113 )
114 except PermissionError:
115 raise HTTPException(status_code=401)
116 except OSError:
117 raise
118
119 if stat_result and stat.S_ISREG(stat_result.st_mode):
120 # We have a static file to serve.
121 return self.file_response(full_path, stat_result, scope)
122
123 elif stat_result and stat.S_ISDIR(stat_result.st_mode) and self.html:
124 # We're in HTML mode, and have got a directory URL.
125 # Check if we have 'index.html' file to serve.
126 index_path = os.path.join(path, "index.html")
127 full_path, stat_result = await anyio.to_thread.run_sync(
128 self.lookup_path, index_path
129 )
130 if stat_result is not None and stat.S_ISREG(stat_result.st_mode):
131 if not scope["path"].endswith("/"):
132 # Directory URLs should redirect to always end in "/".
133 url = URL(scope=scope)
134 url = url.replace(path=url.path + "/")
135 return RedirectResponse(url=url)
136 return self.file_response(full_path, stat_result, scope)
137
138 if self.html:
139 # Check for '404.html' if we're in HTML mode.
140 full_path, stat_result = await anyio.to_thread.run_sync(
141 self.lookup_path, "404.html"
142 )
143 if stat_result and stat.S_ISREG(stat_result.st_mode):
144 return FileResponse(
145 full_path,
146 stat_result=stat_result,
147 method=scope["method"],
148 status_code=404,
149 )
150 raise HTTPException(status_code=404)
151
152 def lookup_path(
153 self, path: str
154 ) -> typing.Tuple[str, typing.Optional[os.stat_result]]:
155 for directory in self.all_directories:
156 full_path = os.path.realpath(os.path.join(directory, path))
157 directory = os.path.realpath(directory)
158 if os.path.commonprefix([full_path, directory]) != directory:
159 # Don't allow misbehaving clients to break out of the static files
160 # directory.
161 continue
162 try:
163 return full_path, os.stat(full_path)
164 except (FileNotFoundError, NotADirectoryError):
165 continue
166 return "", None
167
168 def file_response(
169 self,
170 full_path: PathLike,
171 stat_result: os.stat_result,
172 scope: Scope,
173 status_code: int = 200,
174 ) -> Response:
175 method = scope["method"]
176 request_headers = Headers(scope=scope)
177
178 response = FileResponse(
179 full_path, status_code=status_code, stat_result=stat_result, method=method
180 )
181 if self.is_not_modified(response.headers, request_headers):
182 return NotModifiedResponse(response.headers)
183 return response
184
185 async def check_config(self) -> None:
186 """
187 Perform a one-off configuration check that StaticFiles is actually
188 pointed at a directory, so that we can raise loud errors rather than
189 just returning 404 responses.
190 """
191 if self.directory is None:
192 return
193
194 try:
195 stat_result = await anyio.to_thread.run_sync(os.stat, self.directory)
196 except FileNotFoundError:
197 raise RuntimeError(
198 f"StaticFiles directory '{self.directory}' does not exist."
199 )
200 if not (stat.S_ISDIR(stat_result.st_mode) or stat.S_ISLNK(stat_result.st_mode)):
201 raise RuntimeError(
202 f"StaticFiles path '{self.directory}' is not a directory."
203 )
204
205 def is_not_modified(
206 self, response_headers: Headers, request_headers: Headers
207 ) -> bool:
208 """
209 Given the request and response headers, return `True` if an HTTP
210 "Not Modified" response could be returned instead.
211 """
212 try:
213 if_none_match = request_headers["if-none-match"]
214 etag = response_headers["etag"]
215 if if_none_match == etag:
216 return True
217 except KeyError:
218 pass
219
220 try:
221 if_modified_since = parsedate(request_headers["if-modified-since"])
222 last_modified = parsedate(response_headers["last-modified"])
223 if (
224 if_modified_since is not None
225 and last_modified is not None
226 and if_modified_since >= last_modified
227 ):
228 return True
229 except KeyError:
230 pass
231
232 return False
233
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlette/staticfiles.py b/starlette/staticfiles.py
--- a/starlette/staticfiles.py
+++ b/starlette/staticfiles.py
@@ -40,7 +40,7 @@
self,
*,
directory: PathLike = None,
- packages: typing.List[str] = None,
+ packages: typing.List[typing.Union[str, typing.Tuple[str, str]]] = None,
html: bool = False,
check_dir: bool = True,
) -> None:
@@ -53,7 +53,9 @@
raise RuntimeError(f"Directory '{directory}' does not exist")
def get_directories(
- self, directory: PathLike = None, packages: typing.List[str] = None
+ self,
+ directory: PathLike = None,
+ packages: typing.List[typing.Union[str, typing.Tuple[str, str]]] = None,
) -> typing.List[PathLike]:
"""
Given `directory` and `packages` arguments, return a list of all the
@@ -64,17 +66,19 @@
directories.append(directory)
for package in packages or []:
+ if isinstance(package, tuple):
+ package, statics_dir = package
+ else:
+ statics_dir = "statics"
spec = importlib.util.find_spec(package)
assert spec is not None, f"Package {package!r} could not be found."
- assert (
- spec.origin is not None
- ), f"Directory 'statics' in package {package!r} could not be found."
+ assert spec.origin is not None, f"Package {package!r} could not be found."
package_directory = os.path.normpath(
- os.path.join(spec.origin, "..", "statics")
+ os.path.join(spec.origin, "..", statics_dir)
)
assert os.path.isdir(
package_directory
- ), f"Directory 'statics' in package {package!r} could not be found."
+ ), f"Directory '{statics_dir!r}' in package {package!r} could not be found."
directories.append(package_directory)
return directories
| {"golden_diff": "diff --git a/starlette/staticfiles.py b/starlette/staticfiles.py\n--- a/starlette/staticfiles.py\n+++ b/starlette/staticfiles.py\n@@ -40,7 +40,7 @@\n self,\n *,\n directory: PathLike = None,\n- packages: typing.List[str] = None,\n+ packages: typing.List[typing.Union[str, typing.Tuple[str, str]]] = None,\n html: bool = False,\n check_dir: bool = True,\n ) -> None:\n@@ -53,7 +53,9 @@\n raise RuntimeError(f\"Directory '{directory}' does not exist\")\n \n def get_directories(\n- self, directory: PathLike = None, packages: typing.List[str] = None\n+ self,\n+ directory: PathLike = None,\n+ packages: typing.List[typing.Union[str, typing.Tuple[str, str]]] = None,\n ) -> typing.List[PathLike]:\n \"\"\"\n Given `directory` and `packages` arguments, return a list of all the\n@@ -64,17 +66,19 @@\n directories.append(directory)\n \n for package in packages or []:\n+ if isinstance(package, tuple):\n+ package, statics_dir = package\n+ else:\n+ statics_dir = \"statics\"\n spec = importlib.util.find_spec(package)\n assert spec is not None, f\"Package {package!r} could not be found.\"\n- assert (\n- spec.origin is not None\n- ), f\"Directory 'statics' in package {package!r} could not be found.\"\n+ assert spec.origin is not None, f\"Package {package!r} could not be found.\"\n package_directory = os.path.normpath(\n- os.path.join(spec.origin, \"..\", \"statics\")\n+ os.path.join(spec.origin, \"..\", statics_dir)\n )\n assert os.path.isdir(\n package_directory\n- ), f\"Directory 'statics' in package {package!r} could not be found.\"\n+ ), f\"Directory '{statics_dir!r}' in package {package!r} could not be found.\"\n directories.append(package_directory)\n \n return directories\n", "issue": "StaticFiles support for directories other than \"statics\"\n### Checklist\r\n\r\n<!-- Please make sure you check all these items before submitting your feature request. -->\r\n\r\n- [/] There are no similar issues or pull requests for this yet.\r\n- [X - tried to get feedback but no replies] I discussed this idea on the [community chat](https://gitter.im/encode/community) and feedback is positive.\r\n\r\n### Is your feature related to a problem? Please describe.\r\n\r\nI want to be able to serve static files from arbitrary locations within a python package but I can't because currently it looks in a `statics` folder which is hardcoded.\r\n\r\n## Describe the solution you would like.\r\n\r\nI'd like to be able to pass as a parameter the path within the python package to look in for the static files.\r\n\r\n## Describe alternatives you considered\r\n\r\n<!-- Please describe any alternative solutions or features you've considered to solve\r\nyour problem and why they wouldn't solve it. -->\r\n\r\nI've considered changing the location of my packaged files to `statics` but this will have knock on effects across other systems, and `statics` itself is non-standard as far as I can tell.\r\n\r\n## Additional context\r\n\r\n<!-- Provide any additional context, screenshots, tracebacks, etc. about the feature here. 
-->\r\n\n", "before_files": [{"content": "import importlib.util\nimport os\nimport stat\nimport typing\nfrom email.utils import parsedate\n\nimport anyio\n\nfrom starlette.datastructures import URL, Headers\nfrom starlette.exceptions import HTTPException\nfrom starlette.responses import FileResponse, RedirectResponse, Response\nfrom starlette.types import Receive, Scope, Send\n\nPathLike = typing.Union[str, \"os.PathLike[str]\"]\n\n\nclass NotModifiedResponse(Response):\n NOT_MODIFIED_HEADERS = (\n \"cache-control\",\n \"content-location\",\n \"date\",\n \"etag\",\n \"expires\",\n \"vary\",\n )\n\n def __init__(self, headers: Headers):\n super().__init__(\n status_code=304,\n headers={\n name: value\n for name, value in headers.items()\n if name in self.NOT_MODIFIED_HEADERS\n },\n )\n\n\nclass StaticFiles:\n def __init__(\n self,\n *,\n directory: PathLike = None,\n packages: typing.List[str] = None,\n html: bool = False,\n check_dir: bool = True,\n ) -> None:\n self.directory = directory\n self.packages = packages\n self.all_directories = self.get_directories(directory, packages)\n self.html = html\n self.config_checked = False\n if check_dir and directory is not None and not os.path.isdir(directory):\n raise RuntimeError(f\"Directory '{directory}' does not exist\")\n\n def get_directories(\n self, directory: PathLike = None, packages: typing.List[str] = None\n ) -> typing.List[PathLike]:\n \"\"\"\n Given `directory` and `packages` arguments, return a list of all the\n directories that should be used for serving static files from.\n \"\"\"\n directories = []\n if directory is not None:\n directories.append(directory)\n\n for package in packages or []:\n spec = importlib.util.find_spec(package)\n assert spec is not None, f\"Package {package!r} could not be found.\"\n assert (\n spec.origin is not None\n ), f\"Directory 'statics' in package {package!r} could not be found.\"\n package_directory = os.path.normpath(\n os.path.join(spec.origin, \"..\", \"statics\")\n )\n assert os.path.isdir(\n package_directory\n ), f\"Directory 'statics' in package {package!r} could not be found.\"\n directories.append(package_directory)\n\n return directories\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n \"\"\"\n The ASGI entry point.\n \"\"\"\n assert scope[\"type\"] == \"http\"\n\n if not self.config_checked:\n await self.check_config()\n self.config_checked = True\n\n path = self.get_path(scope)\n response = await self.get_response(path, scope)\n await response(scope, receive, send)\n\n def get_path(self, scope: Scope) -> str:\n \"\"\"\n Given the ASGI scope, return the `path` string to serve up,\n with OS specific path seperators, and any '..', '.' 
components removed.\n \"\"\"\n return os.path.normpath(os.path.join(*scope[\"path\"].split(\"/\")))\n\n async def get_response(self, path: str, scope: Scope) -> Response:\n \"\"\"\n Returns an HTTP response, given the incoming path, method and request headers.\n \"\"\"\n if scope[\"method\"] not in (\"GET\", \"HEAD\"):\n raise HTTPException(status_code=405)\n\n try:\n full_path, stat_result = await anyio.to_thread.run_sync(\n self.lookup_path, path\n )\n except PermissionError:\n raise HTTPException(status_code=401)\n except OSError:\n raise\n\n if stat_result and stat.S_ISREG(stat_result.st_mode):\n # We have a static file to serve.\n return self.file_response(full_path, stat_result, scope)\n\n elif stat_result and stat.S_ISDIR(stat_result.st_mode) and self.html:\n # We're in HTML mode, and have got a directory URL.\n # Check if we have 'index.html' file to serve.\n index_path = os.path.join(path, \"index.html\")\n full_path, stat_result = await anyio.to_thread.run_sync(\n self.lookup_path, index_path\n )\n if stat_result is not None and stat.S_ISREG(stat_result.st_mode):\n if not scope[\"path\"].endswith(\"/\"):\n # Directory URLs should redirect to always end in \"/\".\n url = URL(scope=scope)\n url = url.replace(path=url.path + \"/\")\n return RedirectResponse(url=url)\n return self.file_response(full_path, stat_result, scope)\n\n if self.html:\n # Check for '404.html' if we're in HTML mode.\n full_path, stat_result = await anyio.to_thread.run_sync(\n self.lookup_path, \"404.html\"\n )\n if stat_result and stat.S_ISREG(stat_result.st_mode):\n return FileResponse(\n full_path,\n stat_result=stat_result,\n method=scope[\"method\"],\n status_code=404,\n )\n raise HTTPException(status_code=404)\n\n def lookup_path(\n self, path: str\n ) -> typing.Tuple[str, typing.Optional[os.stat_result]]:\n for directory in self.all_directories:\n full_path = os.path.realpath(os.path.join(directory, path))\n directory = os.path.realpath(directory)\n if os.path.commonprefix([full_path, directory]) != directory:\n # Don't allow misbehaving clients to break out of the static files\n # directory.\n continue\n try:\n return full_path, os.stat(full_path)\n except (FileNotFoundError, NotADirectoryError):\n continue\n return \"\", None\n\n def file_response(\n self,\n full_path: PathLike,\n stat_result: os.stat_result,\n scope: Scope,\n status_code: int = 200,\n ) -> Response:\n method = scope[\"method\"]\n request_headers = Headers(scope=scope)\n\n response = FileResponse(\n full_path, status_code=status_code, stat_result=stat_result, method=method\n )\n if self.is_not_modified(response.headers, request_headers):\n return NotModifiedResponse(response.headers)\n return response\n\n async def check_config(self) -> None:\n \"\"\"\n Perform a one-off configuration check that StaticFiles is actually\n pointed at a directory, so that we can raise loud errors rather than\n just returning 404 responses.\n \"\"\"\n if self.directory is None:\n return\n\n try:\n stat_result = await anyio.to_thread.run_sync(os.stat, self.directory)\n except FileNotFoundError:\n raise RuntimeError(\n f\"StaticFiles directory '{self.directory}' does not exist.\"\n )\n if not (stat.S_ISDIR(stat_result.st_mode) or stat.S_ISLNK(stat_result.st_mode)):\n raise RuntimeError(\n f\"StaticFiles path '{self.directory}' is not a directory.\"\n )\n\n def is_not_modified(\n self, response_headers: Headers, request_headers: Headers\n ) -> bool:\n \"\"\"\n Given the request and response headers, return `True` if an HTTP\n \"Not Modified\" response could 
be returned instead.\n \"\"\"\n try:\n if_none_match = request_headers[\"if-none-match\"]\n etag = response_headers[\"etag\"]\n if if_none_match == etag:\n return True\n except KeyError:\n pass\n\n try:\n if_modified_since = parsedate(request_headers[\"if-modified-since\"])\n last_modified = parsedate(response_headers[\"last-modified\"])\n if (\n if_modified_since is not None\n and last_modified is not None\n and if_modified_since >= last_modified\n ):\n return True\n except KeyError:\n pass\n\n return False\n", "path": "starlette/staticfiles.py"}], "after_files": [{"content": "import importlib.util\nimport os\nimport stat\nimport typing\nfrom email.utils import parsedate\n\nimport anyio\n\nfrom starlette.datastructures import URL, Headers\nfrom starlette.exceptions import HTTPException\nfrom starlette.responses import FileResponse, RedirectResponse, Response\nfrom starlette.types import Receive, Scope, Send\n\nPathLike = typing.Union[str, \"os.PathLike[str]\"]\n\n\nclass NotModifiedResponse(Response):\n NOT_MODIFIED_HEADERS = (\n \"cache-control\",\n \"content-location\",\n \"date\",\n \"etag\",\n \"expires\",\n \"vary\",\n )\n\n def __init__(self, headers: Headers):\n super().__init__(\n status_code=304,\n headers={\n name: value\n for name, value in headers.items()\n if name in self.NOT_MODIFIED_HEADERS\n },\n )\n\n\nclass StaticFiles:\n def __init__(\n self,\n *,\n directory: PathLike = None,\n packages: typing.List[typing.Union[str, typing.Tuple[str, str]]] = None,\n html: bool = False,\n check_dir: bool = True,\n ) -> None:\n self.directory = directory\n self.packages = packages\n self.all_directories = self.get_directories(directory, packages)\n self.html = html\n self.config_checked = False\n if check_dir and directory is not None and not os.path.isdir(directory):\n raise RuntimeError(f\"Directory '{directory}' does not exist\")\n\n def get_directories(\n self,\n directory: PathLike = None,\n packages: typing.List[typing.Union[str, typing.Tuple[str, str]]] = None,\n ) -> typing.List[PathLike]:\n \"\"\"\n Given `directory` and `packages` arguments, return a list of all the\n directories that should be used for serving static files from.\n \"\"\"\n directories = []\n if directory is not None:\n directories.append(directory)\n\n for package in packages or []:\n if isinstance(package, tuple):\n package, statics_dir = package\n else:\n statics_dir = \"statics\"\n spec = importlib.util.find_spec(package)\n assert spec is not None, f\"Package {package!r} could not be found.\"\n assert spec.origin is not None, f\"Package {package!r} could not be found.\"\n package_directory = os.path.normpath(\n os.path.join(spec.origin, \"..\", statics_dir)\n )\n assert os.path.isdir(\n package_directory\n ), f\"Directory '{statics_dir!r}' in package {package!r} could not be found.\"\n directories.append(package_directory)\n\n return directories\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n \"\"\"\n The ASGI entry point.\n \"\"\"\n assert scope[\"type\"] == \"http\"\n\n if not self.config_checked:\n await self.check_config()\n self.config_checked = True\n\n path = self.get_path(scope)\n response = await self.get_response(path, scope)\n await response(scope, receive, send)\n\n def get_path(self, scope: Scope) -> str:\n \"\"\"\n Given the ASGI scope, return the `path` string to serve up,\n with OS specific path seperators, and any '..', '.' 
components removed.\n \"\"\"\n return os.path.normpath(os.path.join(*scope[\"path\"].split(\"/\")))\n\n async def get_response(self, path: str, scope: Scope) -> Response:\n \"\"\"\n Returns an HTTP response, given the incoming path, method and request headers.\n \"\"\"\n if scope[\"method\"] not in (\"GET\", \"HEAD\"):\n raise HTTPException(status_code=405)\n\n try:\n full_path, stat_result = await anyio.to_thread.run_sync(\n self.lookup_path, path\n )\n except PermissionError:\n raise HTTPException(status_code=401)\n except OSError:\n raise\n\n if stat_result and stat.S_ISREG(stat_result.st_mode):\n # We have a static file to serve.\n return self.file_response(full_path, stat_result, scope)\n\n elif stat_result and stat.S_ISDIR(stat_result.st_mode) and self.html:\n # We're in HTML mode, and have got a directory URL.\n # Check if we have 'index.html' file to serve.\n index_path = os.path.join(path, \"index.html\")\n full_path, stat_result = await anyio.to_thread.run_sync(\n self.lookup_path, index_path\n )\n if stat_result is not None and stat.S_ISREG(stat_result.st_mode):\n if not scope[\"path\"].endswith(\"/\"):\n # Directory URLs should redirect to always end in \"/\".\n url = URL(scope=scope)\n url = url.replace(path=url.path + \"/\")\n return RedirectResponse(url=url)\n return self.file_response(full_path, stat_result, scope)\n\n if self.html:\n # Check for '404.html' if we're in HTML mode.\n full_path, stat_result = await anyio.to_thread.run_sync(\n self.lookup_path, \"404.html\"\n )\n if stat_result and stat.S_ISREG(stat_result.st_mode):\n return FileResponse(\n full_path,\n stat_result=stat_result,\n method=scope[\"method\"],\n status_code=404,\n )\n raise HTTPException(status_code=404)\n\n def lookup_path(\n self, path: str\n ) -> typing.Tuple[str, typing.Optional[os.stat_result]]:\n for directory in self.all_directories:\n full_path = os.path.realpath(os.path.join(directory, path))\n directory = os.path.realpath(directory)\n if os.path.commonprefix([full_path, directory]) != directory:\n # Don't allow misbehaving clients to break out of the static files\n # directory.\n continue\n try:\n return full_path, os.stat(full_path)\n except (FileNotFoundError, NotADirectoryError):\n continue\n return \"\", None\n\n def file_response(\n self,\n full_path: PathLike,\n stat_result: os.stat_result,\n scope: Scope,\n status_code: int = 200,\n ) -> Response:\n method = scope[\"method\"]\n request_headers = Headers(scope=scope)\n\n response = FileResponse(\n full_path, status_code=status_code, stat_result=stat_result, method=method\n )\n if self.is_not_modified(response.headers, request_headers):\n return NotModifiedResponse(response.headers)\n return response\n\n async def check_config(self) -> None:\n \"\"\"\n Perform a one-off configuration check that StaticFiles is actually\n pointed at a directory, so that we can raise loud errors rather than\n just returning 404 responses.\n \"\"\"\n if self.directory is None:\n return\n\n try:\n stat_result = await anyio.to_thread.run_sync(os.stat, self.directory)\n except FileNotFoundError:\n raise RuntimeError(\n f\"StaticFiles directory '{self.directory}' does not exist.\"\n )\n if not (stat.S_ISDIR(stat_result.st_mode) or stat.S_ISLNK(stat_result.st_mode)):\n raise RuntimeError(\n f\"StaticFiles path '{self.directory}' is not a directory.\"\n )\n\n def is_not_modified(\n self, response_headers: Headers, request_headers: Headers\n ) -> bool:\n \"\"\"\n Given the request and response headers, return `True` if an HTTP\n \"Not Modified\" response could 
be returned instead.\n \"\"\"\n try:\n if_none_match = request_headers[\"if-none-match\"]\n etag = response_headers[\"etag\"]\n if if_none_match == etag:\n return True\n except KeyError:\n pass\n\n try:\n if_modified_since = parsedate(request_headers[\"if-modified-since\"])\n last_modified = parsedate(response_headers[\"last-modified\"])\n if (\n if_modified_since is not None\n and last_modified is not None\n and if_modified_since >= last_modified\n ):\n return True\n except KeyError:\n pass\n\n return False\n", "path": "starlette/staticfiles.py"}]} | 2,798 | 480 |
gh_patches_debug_7039 | rasdani/github-patches | git_diff | encode__httpx-545 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Instances of `tempfile.TemporaryFile` fail when used as an upload file.
When using `tempfile.TemporaryFile`, the `file.name` attribute returns an integer rather than the usual path string, which causes a breakage for us further down the line...
```shell
venv/lib/python3.7/site-packages/httpx/client.py:484: in post
trust_env=trust_env,
venv/lib/python3.7/site-packages/httpx/client.py:616: in request
cookies=cookies,
venv/lib/python3.7/site-packages/httpx/client.py:356: in build_request
cookies=cookies,
venv/lib/python3.7/site-packages/httpx/models.py:696: in __init__
content, content_type = self.encode_data(data, files, json)
venv/lib/python3.7/site-packages/httpx/models.py:619: in encode_data
content, content_type = multipart_encode(data or {}, files)
venv/lib/python3.7/site-packages/httpx/multipart.py:100: in multipart_encode
for field in iter_fields(data, files):
venv/lib/python3.7/site-packages/httpx/multipart.py:93: in iter_fields
yield FileField(name=name, value=value)
venv/lib/python3.7/site-packages/httpx/multipart.py:51: in __init__
self.filename = Path(getattr(value, "name", "upload")).name
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py:994: in __new__
self = cls._from_parts(args, init=False)
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py:649: in _from_parts
drv, root, parts = self._parse_args(args)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'pathlib.PosixPath'>, args = (29,)
@classmethod
def _parse_args(cls, args):
# This is useful when you don't want to create an instance, just
# canonicalize some constructor arguments.
parts = []
for a in args:
if isinstance(a, PurePath):
parts += a._parts
else:
> a = os.fspath(a)
E TypeError: expected str, bytes or os.PathLike object, not int
```
Have also confirmed that the issue *doesn't* occur with `tempfile.NamedTemporaryFile`.
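
A quick way to see the difference the issue describes (behaviour observed on Linux/macOS; the exact integer is just whichever file descriptor the OS hands out):

```python
import tempfile

with tempfile.TemporaryFile() as f:
    print(type(f.name), f.name)   # <class 'int'>, e.g. 3 — a file descriptor

with tempfile.NamedTemporaryFile() as f:
    print(type(f.name), f.name)   # <class 'str'> — a real filesystem path
```
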
I believe the resolution will be on this line...
https://github.com/encode/httpx/blob/1a32cf036a825f6eb35395af5388a3b23180a82e/httpx/multipart.py#L51
I assume that this would be sufficient...
```python
self.filename = Path(str(getattr(value, "name", "upload"))).name
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `httpx/multipart.py`
Content:
```
1 import binascii
2 import mimetypes
3 import os
4 import re
5 import typing
6 from io import BytesIO
7 from pathlib import Path
8
9 _HTML5_FORM_ENCODING_REPLACEMENTS = {'"': "%22", "\\": "\\\\"}
10 _HTML5_FORM_ENCODING_REPLACEMENTS.update(
11 {chr(c): "%{:02X}".format(c) for c in range(0x00, 0x1F + 1) if c != 0x1B}
12 )
13 _HTML5_FORM_ENCODING_RE = re.compile(
14 r"|".join([re.escape(c) for c in _HTML5_FORM_ENCODING_REPLACEMENTS.keys()])
15 )
16
17
18 class Field:
19 def render_headers(self) -> bytes:
20 raise NotImplementedError() # pragma: nocover
21
22 def render_data(self) -> bytes:
23 raise NotImplementedError() # pragma: nocover
24
25
26 class DataField(Field):
27 def __init__(self, name: str, value: typing.Union[str, bytes]) -> None:
28 if not isinstance(name, str):
29 raise TypeError("Invalid type for name. Expected str.")
30 if not isinstance(value, (str, bytes)):
31 raise TypeError("Invalid type for value. Expected str or bytes.")
32 self.name = name
33 self.value = value
34
35 def render_headers(self) -> bytes:
36 name = _format_param("name", self.name)
37 return b"".join([b"Content-Disposition: form-data; ", name, b"\r\n\r\n"])
38
39 def render_data(self) -> bytes:
40 return (
41 self.value if isinstance(self.value, bytes) else self.value.encode("utf-8")
42 )
43
44
45 class FileField(Field):
46 def __init__(
47 self, name: str, value: typing.Union[typing.IO[typing.AnyStr], tuple]
48 ) -> None:
49 self.name = name
50 if not isinstance(value, tuple):
51 self.filename = Path(getattr(value, "name", "upload")).name
52 self.file = value # type: typing.Union[typing.IO[str], typing.IO[bytes]]
53 self.content_type = self.guess_content_type()
54 else:
55 self.filename = value[0]
56 self.file = value[1]
57 self.content_type = (
58 value[2] if len(value) > 2 else self.guess_content_type()
59 )
60
61 def guess_content_type(self) -> str:
62 if self.filename:
63 return mimetypes.guess_type(self.filename)[0] or "application/octet-stream"
64 else:
65 return "application/octet-stream"
66
67 def render_headers(self) -> bytes:
68 parts = [b"Content-Disposition: form-data; ", _format_param("name", self.name)]
69 if self.filename:
70 filename = _format_param("filename", self.filename)
71 parts.extend([b"; ", filename])
72 content_type = self.content_type.encode()
73 parts.extend([b"\r\nContent-Type: ", content_type, b"\r\n\r\n"])
74 return b"".join(parts)
75
76 def render_data(self) -> bytes:
77 if isinstance(self.file, str):
78 content = self.file
79 else:
80 content = self.file.read()
81 return content.encode("utf-8") if isinstance(content, str) else content
82
83
84 def iter_fields(data: dict, files: dict) -> typing.Iterator[Field]:
85 for name, value in data.items():
86 if isinstance(value, (list, dict)):
87 for item in value:
88 yield DataField(name=name, value=item)
89 else:
90 yield DataField(name=name, value=value)
91
92 for name, value in files.items():
93 yield FileField(name=name, value=value)
94
95
96 def multipart_encode(data: dict, files: dict) -> typing.Tuple[bytes, str]:
97 body = BytesIO()
98 boundary = binascii.hexlify(os.urandom(16))
99
100 for field in iter_fields(data, files):
101 body.write(b"--%s\r\n" % boundary)
102 body.write(field.render_headers())
103 body.write(field.render_data())
104 body.write(b"\r\n")
105
106 body.write(b"--%s--\r\n" % boundary)
107
108 content_type = "multipart/form-data; boundary=%s" % boundary.decode("ascii")
109
110 return body.getvalue(), content_type
111
112
113 def _format_param(name: str, value: typing.Union[str, bytes]) -> bytes:
114 if isinstance(value, bytes):
115 value = value.decode()
116
117 def replacer(match: typing.Match[str]) -> str:
118 return _HTML5_FORM_ENCODING_REPLACEMENTS[match.group(0)]
119
120 value = _HTML5_FORM_ENCODING_RE.sub(replacer, value)
121 return f'{name}="{value}"'.encode()
122
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/httpx/multipart.py b/httpx/multipart.py
--- a/httpx/multipart.py
+++ b/httpx/multipart.py
@@ -48,7 +48,7 @@
) -> None:
self.name = name
if not isinstance(value, tuple):
- self.filename = Path(getattr(value, "name", "upload")).name
+ self.filename = Path(str(getattr(value, "name", "upload"))).name
self.file = value # type: typing.Union[typing.IO[str], typing.IO[bytes]]
self.content_type = self.guess_content_type()
else:
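
To illustrate why the change works — a rough sketch, independent of httpx itself: coercing the name to `str` first means `Path()` always receives a string, even when the underlying file object reports an integer file descriptor:

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryFile() as f:
    name = getattr(f, "name", "upload")   # an int such as 3 on POSIX
    # Path(name) would raise TypeError for the integer case;
    # Path(str(name)) does not.
    print(Path(str(name)).name)           # -> "3"
```
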
| {"golden_diff": "diff --git a/httpx/multipart.py b/httpx/multipart.py\n--- a/httpx/multipart.py\n+++ b/httpx/multipart.py\n@@ -48,7 +48,7 @@\n ) -> None:\n self.name = name\n if not isinstance(value, tuple):\n- self.filename = Path(getattr(value, \"name\", \"upload\")).name\n+ self.filename = Path(str(getattr(value, \"name\", \"upload\"))).name\n self.file = value # type: typing.Union[typing.IO[str], typing.IO[bytes]]\n self.content_type = self.guess_content_type()\n else:\n", "issue": "Instances of `tempfile.TemporaryFile` fail when used as an upload file.\nWhen using `tempfile.TemporaryFile` the `file.name` attribute returns an integer, rather than the usual path string, which causes a breakage for us further down the line...\r\n\r\n```shell\r\nvenv/lib/python3.7/site-packages/httpx/client.py:484: in post\r\n trust_env=trust_env,\r\nvenv/lib/python3.7/site-packages/httpx/client.py:616: in request\r\n cookies=cookies,\r\nvenv/lib/python3.7/site-packages/httpx/client.py:356: in build_request\r\n cookies=cookies,\r\nvenv/lib/python3.7/site-packages/httpx/models.py:696: in __init__\r\n content, content_type = self.encode_data(data, files, json)\r\nvenv/lib/python3.7/site-packages/httpx/models.py:619: in encode_data\r\n content, content_type = multipart_encode(data or {}, files)\r\nvenv/lib/python3.7/site-packages/httpx/multipart.py:100: in multipart_encode\r\n for field in iter_fields(data, files):\r\nvenv/lib/python3.7/site-packages/httpx/multipart.py:93: in iter_fields\r\n yield FileField(name=name, value=value)\r\nvenv/lib/python3.7/site-packages/httpx/multipart.py:51: in __init__\r\n self.filename = Path(getattr(value, \"name\", \"upload\")).name\r\n/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py:994: in __new__\r\n self = cls._from_parts(args, init=False)\r\n/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/pathlib.py:649: in _from_parts\r\n drv, root, parts = self._parse_args(args)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\ncls = <class 'pathlib.PosixPath'>, args = (29,)\r\n\r\n @classmethod\r\n def _parse_args(cls, args):\r\n # This is useful when you don't want to create an instance, just\r\n # canonicalize some constructor arguments.\r\n parts = []\r\n for a in args:\r\n if isinstance(a, PurePath):\r\n parts += a._parts\r\n else:\r\n> a = os.fspath(a)\r\nE TypeError: expected str, bytes or os.PathLike object, not int\r\n```\r\n\r\nHave also confirmed that the issue *doesn't* occur with `tempfile.NamedTemporaryFile`.\r\n\r\nI believe the resolution will be on this line...\r\n\r\nhttps://github.com/encode/httpx/blob/1a32cf036a825f6eb35395af5388a3b23180a82e/httpx/multipart.py#L51\r\n\r\nI assume that this would be sufficient...\r\n\r\n```python\r\nself.filename = Path(str(getattr(value, \"name\", \"upload\")).name \r\n```\r\n\n", "before_files": [{"content": "import binascii\nimport mimetypes\nimport os\nimport re\nimport typing\nfrom io import BytesIO\nfrom pathlib import Path\n\n_HTML5_FORM_ENCODING_REPLACEMENTS = {'\"': \"%22\", \"\\\\\": \"\\\\\\\\\"}\n_HTML5_FORM_ENCODING_REPLACEMENTS.update(\n {chr(c): \"%{:02X}\".format(c) for c in range(0x00, 0x1F + 1) if c != 0x1B}\n)\n_HTML5_FORM_ENCODING_RE = re.compile(\n r\"|\".join([re.escape(c) for c in _HTML5_FORM_ENCODING_REPLACEMENTS.keys()])\n)\n\n\nclass Field:\n def 
render_headers(self) -> bytes:\n raise NotImplementedError() # pragma: nocover\n\n def render_data(self) -> bytes:\n raise NotImplementedError() # pragma: nocover\n\n\nclass DataField(Field):\n def __init__(self, name: str, value: typing.Union[str, bytes]) -> None:\n if not isinstance(name, str):\n raise TypeError(\"Invalid type for name. Expected str.\")\n if not isinstance(value, (str, bytes)):\n raise TypeError(\"Invalid type for value. Expected str or bytes.\")\n self.name = name\n self.value = value\n\n def render_headers(self) -> bytes:\n name = _format_param(\"name\", self.name)\n return b\"\".join([b\"Content-Disposition: form-data; \", name, b\"\\r\\n\\r\\n\"])\n\n def render_data(self) -> bytes:\n return (\n self.value if isinstance(self.value, bytes) else self.value.encode(\"utf-8\")\n )\n\n\nclass FileField(Field):\n def __init__(\n self, name: str, value: typing.Union[typing.IO[typing.AnyStr], tuple]\n ) -> None:\n self.name = name\n if not isinstance(value, tuple):\n self.filename = Path(getattr(value, \"name\", \"upload\")).name\n self.file = value # type: typing.Union[typing.IO[str], typing.IO[bytes]]\n self.content_type = self.guess_content_type()\n else:\n self.filename = value[0]\n self.file = value[1]\n self.content_type = (\n value[2] if len(value) > 2 else self.guess_content_type()\n )\n\n def guess_content_type(self) -> str:\n if self.filename:\n return mimetypes.guess_type(self.filename)[0] or \"application/octet-stream\"\n else:\n return \"application/octet-stream\"\n\n def render_headers(self) -> bytes:\n parts = [b\"Content-Disposition: form-data; \", _format_param(\"name\", self.name)]\n if self.filename:\n filename = _format_param(\"filename\", self.filename)\n parts.extend([b\"; \", filename])\n content_type = self.content_type.encode()\n parts.extend([b\"\\r\\nContent-Type: \", content_type, b\"\\r\\n\\r\\n\"])\n return b\"\".join(parts)\n\n def render_data(self) -> bytes:\n if isinstance(self.file, str):\n content = self.file\n else:\n content = self.file.read()\n return content.encode(\"utf-8\") if isinstance(content, str) else content\n\n\ndef iter_fields(data: dict, files: dict) -> typing.Iterator[Field]:\n for name, value in data.items():\n if isinstance(value, (list, dict)):\n for item in value:\n yield DataField(name=name, value=item)\n else:\n yield DataField(name=name, value=value)\n\n for name, value in files.items():\n yield FileField(name=name, value=value)\n\n\ndef multipart_encode(data: dict, files: dict) -> typing.Tuple[bytes, str]:\n body = BytesIO()\n boundary = binascii.hexlify(os.urandom(16))\n\n for field in iter_fields(data, files):\n body.write(b\"--%s\\r\\n\" % boundary)\n body.write(field.render_headers())\n body.write(field.render_data())\n body.write(b\"\\r\\n\")\n\n body.write(b\"--%s--\\r\\n\" % boundary)\n\n content_type = \"multipart/form-data; boundary=%s\" % boundary.decode(\"ascii\")\n\n return body.getvalue(), content_type\n\n\ndef _format_param(name: str, value: typing.Union[str, bytes]) -> bytes:\n if isinstance(value, bytes):\n value = value.decode()\n\n def replacer(match: typing.Match[str]) -> str:\n return _HTML5_FORM_ENCODING_REPLACEMENTS[match.group(0)]\n\n value = _HTML5_FORM_ENCODING_RE.sub(replacer, value)\n return f'{name}=\"{value}\"'.encode()\n", "path": "httpx/multipart.py"}], "after_files": [{"content": "import binascii\nimport mimetypes\nimport os\nimport re\nimport typing\nfrom io import BytesIO\nfrom pathlib import Path\n\n_HTML5_FORM_ENCODING_REPLACEMENTS = {'\"': \"%22\", \"\\\\\": 
\"\\\\\\\\\"}\n_HTML5_FORM_ENCODING_REPLACEMENTS.update(\n {chr(c): \"%{:02X}\".format(c) for c in range(0x00, 0x1F + 1) if c != 0x1B}\n)\n_HTML5_FORM_ENCODING_RE = re.compile(\n r\"|\".join([re.escape(c) for c in _HTML5_FORM_ENCODING_REPLACEMENTS.keys()])\n)\n\n\nclass Field:\n def render_headers(self) -> bytes:\n raise NotImplementedError() # pragma: nocover\n\n def render_data(self) -> bytes:\n raise NotImplementedError() # pragma: nocover\n\n\nclass DataField(Field):\n def __init__(self, name: str, value: typing.Union[str, bytes]) -> None:\n if not isinstance(name, str):\n raise TypeError(\"Invalid type for name. Expected str.\")\n if not isinstance(value, (str, bytes)):\n raise TypeError(\"Invalid type for value. Expected str or bytes.\")\n self.name = name\n self.value = value\n\n def render_headers(self) -> bytes:\n name = _format_param(\"name\", self.name)\n return b\"\".join([b\"Content-Disposition: form-data; \", name, b\"\\r\\n\\r\\n\"])\n\n def render_data(self) -> bytes:\n return (\n self.value if isinstance(self.value, bytes) else self.value.encode(\"utf-8\")\n )\n\n\nclass FileField(Field):\n def __init__(\n self, name: str, value: typing.Union[typing.IO[typing.AnyStr], tuple]\n ) -> None:\n self.name = name\n if not isinstance(value, tuple):\n self.filename = Path(str(getattr(value, \"name\", \"upload\"))).name\n self.file = value # type: typing.Union[typing.IO[str], typing.IO[bytes]]\n self.content_type = self.guess_content_type()\n else:\n self.filename = value[0]\n self.file = value[1]\n self.content_type = (\n value[2] if len(value) > 2 else self.guess_content_type()\n )\n\n def guess_content_type(self) -> str:\n if self.filename:\n return mimetypes.guess_type(self.filename)[0] or \"application/octet-stream\"\n else:\n return \"application/octet-stream\"\n\n def render_headers(self) -> bytes:\n parts = [b\"Content-Disposition: form-data; \", _format_param(\"name\", self.name)]\n if self.filename:\n filename = _format_param(\"filename\", self.filename)\n parts.extend([b\"; \", filename])\n content_type = self.content_type.encode()\n parts.extend([b\"\\r\\nContent-Type: \", content_type, b\"\\r\\n\\r\\n\"])\n return b\"\".join(parts)\n\n def render_data(self) -> bytes:\n if isinstance(self.file, str):\n content = self.file\n else:\n content = self.file.read()\n return content.encode(\"utf-8\") if isinstance(content, str) else content\n\n\ndef iter_fields(data: dict, files: dict) -> typing.Iterator[Field]:\n for name, value in data.items():\n if isinstance(value, (list, dict)):\n for item in value:\n yield DataField(name=name, value=item)\n else:\n yield DataField(name=name, value=value)\n\n for name, value in files.items():\n yield FileField(name=name, value=value)\n\n\ndef multipart_encode(data: dict, files: dict) -> typing.Tuple[bytes, str]:\n body = BytesIO()\n boundary = binascii.hexlify(os.urandom(16))\n\n for field in iter_fields(data, files):\n body.write(b\"--%s\\r\\n\" % boundary)\n body.write(field.render_headers())\n body.write(field.render_data())\n body.write(b\"\\r\\n\")\n\n body.write(b\"--%s--\\r\\n\" % boundary)\n\n content_type = \"multipart/form-data; boundary=%s\" % boundary.decode(\"ascii\")\n\n return body.getvalue(), content_type\n\n\ndef _format_param(name: str, value: typing.Union[str, bytes]) -> bytes:\n if isinstance(value, bytes):\n value = value.decode()\n\n def replacer(match: typing.Match[str]) -> str:\n return _HTML5_FORM_ENCODING_REPLACEMENTS[match.group(0)]\n\n value = _HTML5_FORM_ENCODING_RE.sub(replacer, value)\n return 
f'{name}=\"{value}\"'.encode()\n", "path": "httpx/multipart.py"}]} | 2,290 | 137 |
gh_patches_debug_1213 | rasdani/github-patches | git_diff | scalableminds__webknossos-libs-312 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Convenience for wkcuber.api
To open or create a dataset with the cool new high-level API, the following code is required:
```python
from wkcuber.api.Dataset import WKDataset
from pathlib import Path
ds1 = WKDataset.create(Path("path") / "to" / "dataset1", scale=(128,128,128))
ds2 = WKDataset.open(Path("path") / "to" / "dataset2")
```
For one-off scripts, I think it could be a bit more convenient if we had an API like this:
```python
from wkcuber import WKDataset
ds1 = WKDataset.create("path/to/dataset1", scale=(128, 128, 128))
ds2 = WKDataset.open("path/to/dataset2")
```
Any thoughts? @rschwanhold @jstriebel @philippotto
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wkcuber/__init__.py`
Content:
```
1 from .cubing import cubing
2 from .downsampling import downsample_mags
3 from .compress import compress_mag
4 from .metadata import write_webknossos_metadata
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wkcuber/__init__.py b/wkcuber/__init__.py
--- a/wkcuber/__init__.py
+++ b/wkcuber/__init__.py
@@ -1,4 +1,6 @@
+from .api.Dataset import WKDataset
from .cubing import cubing
from .downsampling import downsample_mags
from .compress import compress_mag
+from .mag import Mag
from .metadata import write_webknossos_metadata
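
For reference, a short sketch of the usage this re-export enables. Note the patch only shortens the import path; whether `create`/`open` also accept plain string paths is a separate question, so `Path` is still used here:

```python
from pathlib import Path

from wkcuber import Mag, WKDataset  # now importable from the package root

ds1 = WKDataset.create(Path("path/to/dataset1"), scale=(128, 128, 128))
ds2 = WKDataset.open(Path("path/to/dataset2"))
```
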
| {"golden_diff": "diff --git a/wkcuber/__init__.py b/wkcuber/__init__.py\n--- a/wkcuber/__init__.py\n+++ b/wkcuber/__init__.py\n@@ -1,4 +1,6 @@\n+from .api.Dataset import WKDataset\n from .cubing import cubing\n from .downsampling import downsample_mags\n from .compress import compress_mag\n+from .mag import Mag\n from .metadata import write_webknossos_metadata\n", "issue": "Convenience for wkcuber.api\nTo open/create a dataset with the cool new high-level API the following code is required:\r\n\r\n```python\r\nfrom wkcuber.api.Dataset import WKDataset\r\nfrom pathlib import Path\r\n\r\nds1 = WKDataset.create(Path(\"path\") / \"to\" / \"dataset1\", scale=(128,128,128))\r\nds2 = WKDataset.open(Path(\"path\") / \"to\" / \"dataset2\")\r\n\r\n```\r\n\r\nFor one-off scripts, I think that could be a bit more convenient, if we had an API like this\r\n\r\n```python\r\nfrom wkcuber import WKDataset\r\n\r\nds1 = WKDataset.create(\"path/to/dataset1\", scale=(128, 128, 128))\r\nds2 = WKDataset.open(\"path/to/dataset2\")\r\n```\r\n\r\nAny thoughts? @rschwanhold @jstriebel @philippotto \r\n\n", "before_files": [{"content": "from .cubing import cubing\nfrom .downsampling import downsample_mags\nfrom .compress import compress_mag\nfrom .metadata import write_webknossos_metadata\n", "path": "wkcuber/__init__.py"}], "after_files": [{"content": "from .api.Dataset import WKDataset\nfrom .cubing import cubing\nfrom .downsampling import downsample_mags\nfrom .compress import compress_mag\nfrom .mag import Mag\nfrom .metadata import write_webknossos_metadata\n", "path": "wkcuber/__init__.py"}]} | 499 | 103 |
gh_patches_debug_15531 | rasdani/github-patches | git_diff | tensorflow__addons-2299 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Enable SSE4.2 and AVX support during build
The pip-installed TF does not support these instruction sets by default, but modern-ish CPUs do (roughly CPUs released after 2012).
We could try this, see if there are any improvements in test times, and weigh the benefits. If nothing else, we can add it as a flag for building from source. Currently TF-IO does this by default:
https://github.com/tensorflow/io/blob/master/.github/workflows/build.yml#L13
@perfinion do we know if this is on the roadmap for default TF installations?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `configure.py`
Content:
```
1 # Copyright 2020 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 # Usage: python configure.py
16 #
17
18
19 import os
20 import pathlib
21 import platform
22 import logging
23
24 import tensorflow as tf
25
26 _TFA_BAZELRC = ".bazelrc"
27
28
29 # Writes variables to bazelrc file
30 def write(line):
31 with open(_TFA_BAZELRC, "a") as f:
32 f.write(line + "\n")
33
34
35 def write_action_env(var_name, var):
36 write('build --action_env {}="{}"'.format(var_name, var))
37
38
39 def is_macos():
40 return platform.system() == "Darwin"
41
42
43 def is_windows():
44 return platform.system() == "Windows"
45
46
47 def is_raspi_arm():
48 return os.uname()[4] == "armv7l"
49
50
51 def get_tf_header_dir():
52 import tensorflow as tf
53
54 tf_header_dir = tf.sysconfig.get_compile_flags()[0][2:]
55 if is_windows():
56 tf_header_dir = tf_header_dir.replace("\\", "/")
57 return tf_header_dir
58
59
60 def get_tf_shared_lib_dir():
61 import tensorflow as tf
62
63 # OS Specific parsing
64 if is_windows():
65 tf_shared_lib_dir = tf.sysconfig.get_compile_flags()[0][2:-7] + "python"
66 return tf_shared_lib_dir.replace("\\", "/")
67 elif is_raspi_arm():
68 return tf.sysconfig.get_compile_flags()[0][2:-7] + "python"
69 else:
70 return tf.sysconfig.get_link_flags()[0][2:]
71
72
73 # Converts the linkflag namespec to the full shared library name
74 def get_shared_lib_name():
75 import tensorflow as tf
76
77 namespec = tf.sysconfig.get_link_flags()
78 if is_macos():
79 # MacOS
80 return "lib" + namespec[1][2:] + ".dylib"
81 elif is_windows():
82 # Windows
83 return "_pywrap_tensorflow_internal.lib"
84 elif is_raspi_arm():
85 # The below command for linux would return an empty list
86 return "_pywrap_tensorflow_internal.so"
87 else:
88 # Linux
89 return namespec[1][3:]
90
91
92 def create_build_configuration():
93 print()
94 print("Configuring TensorFlow Addons to be built from source...")
95
96 if os.path.isfile(_TFA_BAZELRC):
97 os.remove(_TFA_BAZELRC)
98
99 logging.disable(logging.WARNING)
100
101 write_action_env("TF_HEADER_DIR", get_tf_header_dir())
102 write_action_env("TF_SHARED_LIBRARY_DIR", get_tf_shared_lib_dir())
103 write_action_env("TF_SHARED_LIBRARY_NAME", get_shared_lib_name())
104 write_action_env("TF_CXX11_ABI_FLAG", tf.sysconfig.CXX11_ABI_FLAG)
105
106 write("build --spawn_strategy=standalone")
107 write("build --strategy=Genrule=standalone")
108 write("build -c opt")
109
110 if is_windows():
111 write("build --config=windows")
112 write("build:windows --copt=/experimental:preprocessor")
113 write("build:windows --host_copt=/experimental:preprocessor")
114
115 if os.getenv("TF_NEED_CUDA", "0") == "1":
116 print("> Building GPU & CPU ops")
117 configure_cuda()
118 else:
119 print("> Building only CPU ops")
120
121 print()
122 print("Build configurations successfully written to", _TFA_BAZELRC, ":\n")
123 print(pathlib.Path(_TFA_BAZELRC).read_text())
124
125
126 def configure_cuda():
127 write_action_env("TF_NEED_CUDA", "1")
128 write_action_env(
129 "CUDA_TOOLKIT_PATH", os.getenv("CUDA_TOOLKIT_PATH", "/usr/local/cuda")
130 )
131 write_action_env(
132 "CUDNN_INSTALL_PATH",
133 os.getenv("CUDNN_INSTALL_PATH", "/usr/lib/x86_64-linux-gnu"),
134 )
135 write_action_env("TF_CUDA_VERSION", os.getenv("TF_CUDA_VERSION", "11"))
136 write_action_env("TF_CUDNN_VERSION", os.getenv("TF_CUDNN_VERSION", "8"))
137
138 write("test --config=cuda")
139 write("build --config=cuda")
140 write("build:cuda --define=using_cuda=true --define=using_cuda_nvcc=true")
141 write("build:cuda --crosstool_top=@local_config_cuda//crosstool:toolchain")
142
143
144 if __name__ == "__main__":
145 create_build_configuration()
146
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/configure.py b/configure.py
--- a/configure.py
+++ b/configure.py
@@ -44,6 +44,10 @@
return platform.system() == "Windows"
+def is_linux():
+ return platform.system() == "Linux"
+
+
def is_raspi_arm():
return os.uname()[4] == "armv7l"
@@ -111,6 +115,10 @@
write("build --config=windows")
write("build:windows --copt=/experimental:preprocessor")
write("build:windows --host_copt=/experimental:preprocessor")
+ write("build:windows --copt=/arch=AVX2")
+
+ if is_macos() or is_linux():
+ write("build --copt=-mavx2")
if os.getenv("TF_NEED_CUDA", "0") == "1":
print("> Building GPU & CPU ops")
| {"golden_diff": "diff --git a/configure.py b/configure.py\n--- a/configure.py\n+++ b/configure.py\n@@ -44,6 +44,10 @@\n return platform.system() == \"Windows\"\n \n \n+def is_linux():\n+ return platform.system() == \"Linux\"\n+\n+\n def is_raspi_arm():\n return os.uname()[4] == \"armv7l\"\n \n@@ -111,6 +115,10 @@\n write(\"build --config=windows\")\n write(\"build:windows --copt=/experimental:preprocessor\")\n write(\"build:windows --host_copt=/experimental:preprocessor\")\n+ write(\"build:windows --copt=/arch=AVX2\")\n+\n+ if is_macos() or is_linux():\n+ write(\"build --copt=-mavx2\")\n \n if os.getenv(\"TF_NEED_CUDA\", \"0\") == \"1\":\n print(\"> Building GPU & CPU ops\")\n", "issue": "Enable SSE4.2 and AVX support during build\nSo the pip installed TF does not support these instruction sets by default, but modern-ish CPUs do. (Roughly CPUs after 2012).\r\n\r\nWe could try this and see if there are any improvements in test times and weight the benefits. If nothing else we can add it as a flag for building from source. Currently TF-IO does this by default:\r\nhttps://github.com/tensorflow/io/blob/master/.github/workflows/build.yml#L13\r\n\r\n@perfinion do we know if this is on the roadmap for default TF installations?\n", "before_files": [{"content": "# Copyright 2020 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n# Usage: python configure.py\n#\n\n\nimport os\nimport pathlib\nimport platform\nimport logging\n\nimport tensorflow as tf\n\n_TFA_BAZELRC = \".bazelrc\"\n\n\n# Writes variables to bazelrc file\ndef write(line):\n with open(_TFA_BAZELRC, \"a\") as f:\n f.write(line + \"\\n\")\n\n\ndef write_action_env(var_name, var):\n write('build --action_env {}=\"{}\"'.format(var_name, var))\n\n\ndef is_macos():\n return platform.system() == \"Darwin\"\n\n\ndef is_windows():\n return platform.system() == \"Windows\"\n\n\ndef is_raspi_arm():\n return os.uname()[4] == \"armv7l\"\n\n\ndef get_tf_header_dir():\n import tensorflow as tf\n\n tf_header_dir = tf.sysconfig.get_compile_flags()[0][2:]\n if is_windows():\n tf_header_dir = tf_header_dir.replace(\"\\\\\", \"/\")\n return tf_header_dir\n\n\ndef get_tf_shared_lib_dir():\n import tensorflow as tf\n\n # OS Specific parsing\n if is_windows():\n tf_shared_lib_dir = tf.sysconfig.get_compile_flags()[0][2:-7] + \"python\"\n return tf_shared_lib_dir.replace(\"\\\\\", \"/\")\n elif is_raspi_arm():\n return tf.sysconfig.get_compile_flags()[0][2:-7] + \"python\"\n else:\n return tf.sysconfig.get_link_flags()[0][2:]\n\n\n# Converts the linkflag namespec to the full shared library name\ndef get_shared_lib_name():\n import tensorflow as tf\n\n namespec = tf.sysconfig.get_link_flags()\n if is_macos():\n # MacOS\n return \"lib\" + namespec[1][2:] + \".dylib\"\n elif is_windows():\n # Windows\n return \"_pywrap_tensorflow_internal.lib\"\n elif is_raspi_arm():\n # The below command for linux would return an empty list\n return 
\"_pywrap_tensorflow_internal.so\"\n else:\n # Linux\n return namespec[1][3:]\n\n\ndef create_build_configuration():\n print()\n print(\"Configuring TensorFlow Addons to be built from source...\")\n\n if os.path.isfile(_TFA_BAZELRC):\n os.remove(_TFA_BAZELRC)\n\n logging.disable(logging.WARNING)\n\n write_action_env(\"TF_HEADER_DIR\", get_tf_header_dir())\n write_action_env(\"TF_SHARED_LIBRARY_DIR\", get_tf_shared_lib_dir())\n write_action_env(\"TF_SHARED_LIBRARY_NAME\", get_shared_lib_name())\n write_action_env(\"TF_CXX11_ABI_FLAG\", tf.sysconfig.CXX11_ABI_FLAG)\n\n write(\"build --spawn_strategy=standalone\")\n write(\"build --strategy=Genrule=standalone\")\n write(\"build -c opt\")\n\n if is_windows():\n write(\"build --config=windows\")\n write(\"build:windows --copt=/experimental:preprocessor\")\n write(\"build:windows --host_copt=/experimental:preprocessor\")\n\n if os.getenv(\"TF_NEED_CUDA\", \"0\") == \"1\":\n print(\"> Building GPU & CPU ops\")\n configure_cuda()\n else:\n print(\"> Building only CPU ops\")\n\n print()\n print(\"Build configurations successfully written to\", _TFA_BAZELRC, \":\\n\")\n print(pathlib.Path(_TFA_BAZELRC).read_text())\n\n\ndef configure_cuda():\n write_action_env(\"TF_NEED_CUDA\", \"1\")\n write_action_env(\n \"CUDA_TOOLKIT_PATH\", os.getenv(\"CUDA_TOOLKIT_PATH\", \"/usr/local/cuda\")\n )\n write_action_env(\n \"CUDNN_INSTALL_PATH\",\n os.getenv(\"CUDNN_INSTALL_PATH\", \"/usr/lib/x86_64-linux-gnu\"),\n )\n write_action_env(\"TF_CUDA_VERSION\", os.getenv(\"TF_CUDA_VERSION\", \"11\"))\n write_action_env(\"TF_CUDNN_VERSION\", os.getenv(\"TF_CUDNN_VERSION\", \"8\"))\n\n write(\"test --config=cuda\")\n write(\"build --config=cuda\")\n write(\"build:cuda --define=using_cuda=true --define=using_cuda_nvcc=true\")\n write(\"build:cuda --crosstool_top=@local_config_cuda//crosstool:toolchain\")\n\n\nif __name__ == \"__main__\":\n create_build_configuration()\n", "path": "configure.py"}], "after_files": [{"content": "# Copyright 2020 The TensorFlow Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n# Usage: python configure.py\n#\n\n\nimport os\nimport pathlib\nimport platform\nimport logging\n\nimport tensorflow as tf\n\n_TFA_BAZELRC = \".bazelrc\"\n\n\n# Writes variables to bazelrc file\ndef write(line):\n with open(_TFA_BAZELRC, \"a\") as f:\n f.write(line + \"\\n\")\n\n\ndef write_action_env(var_name, var):\n write('build --action_env {}=\"{}\"'.format(var_name, var))\n\n\ndef is_macos():\n return platform.system() == \"Darwin\"\n\n\ndef is_windows():\n return platform.system() == \"Windows\"\n\n\ndef is_linux():\n return platform.system() == \"Linux\"\n\n\ndef is_raspi_arm():\n return os.uname()[4] == \"armv7l\"\n\n\ndef get_tf_header_dir():\n import tensorflow as tf\n\n tf_header_dir = tf.sysconfig.get_compile_flags()[0][2:]\n if is_windows():\n tf_header_dir = tf_header_dir.replace(\"\\\\\", \"/\")\n return tf_header_dir\n\n\ndef get_tf_shared_lib_dir():\n import tensorflow as tf\n\n # OS Specific parsing\n if is_windows():\n tf_shared_lib_dir = tf.sysconfig.get_compile_flags()[0][2:-7] + \"python\"\n return tf_shared_lib_dir.replace(\"\\\\\", \"/\")\n elif is_raspi_arm():\n return tf.sysconfig.get_compile_flags()[0][2:-7] + \"python\"\n else:\n return tf.sysconfig.get_link_flags()[0][2:]\n\n\n# Converts the linkflag namespec to the full shared library name\ndef get_shared_lib_name():\n import tensorflow as tf\n\n namespec = tf.sysconfig.get_link_flags()\n if is_macos():\n # MacOS\n return \"lib\" + namespec[1][2:] + \".dylib\"\n elif is_windows():\n # Windows\n return \"_pywrap_tensorflow_internal.lib\"\n elif is_raspi_arm():\n # The below command for linux would return an empty list\n return \"_pywrap_tensorflow_internal.so\"\n else:\n # Linux\n return namespec[1][3:]\n\n\ndef create_build_configuration():\n print()\n print(\"Configuring TensorFlow Addons to be built from source...\")\n\n if os.path.isfile(_TFA_BAZELRC):\n os.remove(_TFA_BAZELRC)\n\n logging.disable(logging.WARNING)\n\n write_action_env(\"TF_HEADER_DIR\", get_tf_header_dir())\n write_action_env(\"TF_SHARED_LIBRARY_DIR\", get_tf_shared_lib_dir())\n write_action_env(\"TF_SHARED_LIBRARY_NAME\", get_shared_lib_name())\n write_action_env(\"TF_CXX11_ABI_FLAG\", tf.sysconfig.CXX11_ABI_FLAG)\n\n write(\"build --spawn_strategy=standalone\")\n write(\"build --strategy=Genrule=standalone\")\n write(\"build -c opt\")\n\n if is_windows():\n write(\"build --config=windows\")\n write(\"build:windows --copt=/experimental:preprocessor\")\n write(\"build:windows --host_copt=/experimental:preprocessor\")\n write(\"build:windows --copt=/arch=AVX2\")\n\n if is_macos() or is_linux():\n write(\"build --copt=-mavx2\")\n\n if os.getenv(\"TF_NEED_CUDA\", \"0\") == \"1\":\n print(\"> Building GPU & CPU ops\")\n configure_cuda()\n else:\n print(\"> Building only CPU ops\")\n\n print()\n print(\"Build configurations successfully written to\", _TFA_BAZELRC, \":\\n\")\n 
print(pathlib.Path(_TFA_BAZELRC).read_text())\n\n\ndef configure_cuda():\n write_action_env(\"TF_NEED_CUDA\", \"1\")\n write_action_env(\n \"CUDA_TOOLKIT_PATH\", os.getenv(\"CUDA_TOOLKIT_PATH\", \"/usr/local/cuda\")\n )\n write_action_env(\n \"CUDNN_INSTALL_PATH\",\n os.getenv(\"CUDNN_INSTALL_PATH\", \"/usr/lib/x86_64-linux-gnu\"),\n )\n write_action_env(\"TF_CUDA_VERSION\", os.getenv(\"TF_CUDA_VERSION\", \"11\"))\n write_action_env(\"TF_CUDNN_VERSION\", os.getenv(\"TF_CUDNN_VERSION\", \"8\"))\n\n write(\"test --config=cuda\")\n write(\"build --config=cuda\")\n write(\"build:cuda --define=using_cuda=true --define=using_cuda_nvcc=true\")\n write(\"build:cuda --crosstool_top=@local_config_cuda//crosstool:toolchain\")\n\n\nif __name__ == \"__main__\":\n create_build_configuration()\n", "path": "configure.py"}]} | 1,804 | 211 |
gh_patches_debug_16647 | rasdani/github-patches | git_diff | ietf-tools__datatracker-3832 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
person_link error in charter document view
### What happened?
A 500 error occurs when retrieving some group charter documents. E.g., when getting `/doc/charter-ietf-cat/`:
```
ERROR: django.request:228: Internal Server Error: /doc/charter-ietf-cat/
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/workspace/ietf/doc/views_doc.py", line 548, in document_main
can_manage=can_manage,
File "/usr/local/lib/python3.6/site-packages/django/shortcuts.py", line 36, in render
content = loader.render_to_string(template_name, context, request, using=using)
File "/usr/local/lib/python3.6/site-packages/django/template/loader.py", line 62, in render_to_string
return template.render(context, request)
File "/usr/local/lib/python3.6/site-packages/django/template/backends/django.py", line 61, in render
return self.template.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 171, in render
return self._render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 163, in _render
return self.nodelist.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 937, in render
bit = node.render_annotated(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 904, in render_annotated
return self.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/loader_tags.py", line 150, in render
return compiled_parent._render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 163, in _render
return self.nodelist.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 937, in render
bit = node.render_annotated(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 904, in render_annotated
return self.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/loader_tags.py", line 62, in render
result = block.nodelist.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 937, in render
bit = node.render_annotated(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 904, in render_annotated
return self.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/defaulttags.py", line 312, in render
return nodelist.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 937, in render
bit = node.render_annotated(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 904, in render_annotated
return self.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/library.py", line 214, in render
_dict = self.func(*resolved_args, **resolved_kwargs)
File "/workspace/ietf/person/templatetags/person_filters.py", line 44, in person_link
plain_name = person.plain_name()
AttributeError: 'str' object has no attribute 'plain_name'
ERROR: django.request:228: Internal Server Error: /doc/charter-ietf-cat/
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/workspace/ietf/doc/views_doc.py", line 548, in document_main
can_manage=can_manage,
File "/usr/local/lib/python3.6/site-packages/django/shortcuts.py", line 36, in render
content = loader.render_to_string(template_name, context, request, using=using)
File "/usr/local/lib/python3.6/site-packages/django/template/loader.py", line 62, in render_to_string
return template.render(context, request)
File "/usr/local/lib/python3.6/site-packages/django/template/backends/django.py", line 61, in render
return self.template.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 171, in render
return self._render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 163, in _render
return self.nodelist.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 937, in render
bit = node.render_annotated(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 904, in render_annotated
return self.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/loader_tags.py", line 150, in render
return compiled_parent._render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 163, in _render
return self.nodelist.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 937, in render
bit = node.render_annotated(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 904, in render_annotated
return self.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/loader_tags.py", line 62, in render
result = block.nodelist.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 937, in render
bit = node.render_annotated(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 904, in render_annotated
return self.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/defaulttags.py", line 312, in render
return nodelist.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 937, in render
bit = node.render_annotated(context)
File "/usr/local/lib/python3.6/site-packages/django/template/base.py", line 904, in render_annotated
return self.render(context)
File "/usr/local/lib/python3.6/site-packages/django/template/library.py", line 214, in render
_dict = self.func(*resolved_args, **resolved_kwargs)
File "/workspace/ietf/person/templatetags/person_filters.py", line 44, in person_link
plain_name = person.plain_name()
AttributeError: 'str' object has no attribute 'plain_name'
```
### What browser(s) are you seeing the problem on?
Not Applicable
### Code of Conduct
- [X] I agree to follow the IETF's Code of Conduct
--- END ISSUE ---
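The traceback boils down to `person_link` receiving a plain string — an invalid template variable normally renders as `''` — rather than a `Person` instance, so `person.plain_name()` raises `AttributeError`. Below is a minimal, self-contained sketch of that failure mode and the guard that avoids it; `FakePerson` and `render_person_link` are illustrative stand-ins, not the datatracker's actual classes:
```
class FakePerson:
    """Illustrative stand-in for ietf.person.models.Person (only what the tag needs)."""

    def __init__(self, name):
        self.name = name

    def plain_name(self):
        return self.name


def render_person_link(person):
    # An invalid template variable reaches the tag as a str (usually "").
    # Without this check, person.plain_name() raises AttributeError on that str.
    if isinstance(person, str):
        person = None
    if person is not None:
        return {"name": person.name, "plain_name": person.plain_name()}
    return {}


assert render_person_link("") == {}                                   # was: AttributeError
assert render_person_link(FakePerson("Jane Doe"))["plain_name"] == "Jane Doe"
```
The patch below applies the same idea inside the real template tag, translating strings to `None` before the `person is not None` check.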
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ietf/person/templatetags/person_filters.py`
Content:
```
1 # Copyright The IETF Trust 2017-2020, All Rights Reserved
2
3 import datetime
4
5 from django import template
6
7 import debug # pyflakes:ignore
8
9 from ietf.nomcom.utils import is_eligible
10 from ietf.person.models import Alias
11
12 register = template.Library()
13
14
15 @register.filter
16 def is_nomcom_eligible(person, date=datetime.date.today()):
17 return is_eligible(person=person, date=date)
18
19
20 @register.filter
21 def person_by_name(name):
22 "Look up a person record from name"
23 if not isinstance(name, (type(b""), type(""))):
24 return None
25 alias = Alias.objects.filter(name=name).first()
26 return alias.person if alias else None
27
28
29 # CLEANUP: There are several hundred Person objects with no Alias object,
30 # violating the expectations of the code. The check for the existence of an
31 # alias object below matching the person's name avoids presenting a link that
32 # we know will 404. When the database is corrected and we can expect that the
33 # Alias for the person's name to always be there, we can remove this extra
34 # database query (or leave it as a safeguard until it becomes a performance
35 # issue.)
36
37
38 @register.inclusion_tag("person/person_link.html")
39 def person_link(person, **kwargs):
40 title = kwargs.get("title", "")
41 cls = kwargs.get("class", "")
42 with_email = kwargs.get("with_email", True)
43 if person:
44 plain_name = person.plain_name()
45 name = (
46 person.name
47 if person.alias_set.filter(name=person.name).exists()
48 else plain_name
49 )
50 email = person.email_address()
51 return {
52 "name": name,
53 "plain_name": plain_name,
54 "email": email,
55 "title": title,
56 "class": cls,
57 "with_email": with_email,
58 }
59 else:
60 return {}
61
62
63 @register.inclusion_tag("person/person_link.html")
64 def email_person_link(email, **kwargs):
65 title = kwargs.get("title", "")
66 cls = kwargs.get("class", "")
67 with_email = kwargs.get("with_email", True)
68 plain_name = email.person.plain_name()
69 name = (
70 email.person.name
71 if email.person.alias_set.filter(name=email.person.name).exists()
72 else plain_name
73 )
74 email = email.address
75 return {
76 "name": name,
77 "plain_name": plain_name,
78 "email": email,
79 "title": title,
80 "class": cls,
81 "with_email": with_email,
82 }
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ietf/person/templatetags/person_filters.py b/ietf/person/templatetags/person_filters.py
--- a/ietf/person/templatetags/person_filters.py
+++ b/ietf/person/templatetags/person_filters.py
@@ -37,10 +37,19 @@
@register.inclusion_tag("person/person_link.html")
def person_link(person, **kwargs):
+ """Render a link to a Person
+
+ If person is None or a string, renders as a span containing '(None)'.
+ """
+ if isinstance(person, str):
+ # If person is a string, most likely an invalid template variable was referenced.
+ # That normally comes in as an empty string, but may be non-empty if string_if_invalid
+ # is set. Translate strings into None to try to get consistent behavior.
+ person = None
title = kwargs.get("title", "")
cls = kwargs.get("class", "")
with_email = kwargs.get("with_email", True)
- if person:
+ if person is not None:
plain_name = person.plain_name()
name = (
person.name
| {"golden_diff": "diff --git a/ietf/person/templatetags/person_filters.py b/ietf/person/templatetags/person_filters.py\n--- a/ietf/person/templatetags/person_filters.py\n+++ b/ietf/person/templatetags/person_filters.py\n@@ -37,10 +37,19 @@\n \n @register.inclusion_tag(\"person/person_link.html\")\n def person_link(person, **kwargs):\n+ \"\"\"Render a link to a Person\n+\n+ If person is None or a string, renders as a span containing '(None)'.\n+ \"\"\"\n+ if isinstance(person, str):\n+ # If person is a string, most likely an invalid template variable was referenced.\n+ # That normally comes in as an empty string, but may be non-empty if string_if_invalid\n+ # is set. Translate strings into None to try to get consistent behavior.\n+ person = None\n title = kwargs.get(\"title\", \"\")\n cls = kwargs.get(\"class\", \"\")\n with_email = kwargs.get(\"with_email\", True)\n- if person:\n+ if person is not None:\n plain_name = person.plain_name()\n name = (\n person.name\n", "issue": "person_link error in charter document view\n### What happened?\n\nA 500 error occurs when retrieving some group charter documents. E.g., when getting `/doc/charter-ietf-cat/`:\r\n```\r\nERROR: django.request:228: Internal Server Error: /doc/charter-ietf-cat/\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/site-packages/django/core/handlers/exception.py\", line 34, in inner\r\n response = get_response(request)\r\n File \"/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py\", line 115, in _get_response\r\n response = self.process_exception_by_middleware(e, request)\r\n File \"/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py\", line 113, in _get_response\r\n response = wrapped_callback(request, *callback_args, **callback_kwargs)\r\n File \"/workspace/ietf/doc/views_doc.py\", line 548, in document_main\r\n can_manage=can_manage,\r\n File \"/usr/local/lib/python3.6/site-packages/django/shortcuts.py\", line 36, in render\r\n content = loader.render_to_string(template_name, context, request, using=using)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/loader.py\", line 62, in render_to_string\r\n return template.render(context, request)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/backends/django.py\", line 61, in render\r\n return self.template.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 171, in render\r\n return self._render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 163, in _render\r\n return self.nodelist.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 937, in render\r\n bit = node.render_annotated(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 904, in render_annotated\r\n return self.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/loader_tags.py\", line 150, in render\r\n return compiled_parent._render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 163, in _render\r\n return self.nodelist.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 937, in render\r\n bit = node.render_annotated(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 904, in render_annotated\r\n return self.render(context)\r\n File 
\"/usr/local/lib/python3.6/site-packages/django/template/loader_tags.py\", line 62, in render\r\n result = block.nodelist.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 937, in render\r\n bit = node.render_annotated(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 904, in render_annotated\r\n return self.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/defaulttags.py\", line 312, in render\r\n return nodelist.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 937, in render\r\n bit = node.render_annotated(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 904, in render_annotated\r\n return self.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/library.py\", line 214, in render\r\n _dict = self.func(*resolved_args, **resolved_kwargs)\r\n File \"/workspace/ietf/person/templatetags/person_filters.py\", line 44, in person_link\r\n plain_name = person.plain_name()\r\nAttributeError: 'str' object has no attribute 'plain_name'\r\nERROR: django.request:228: Internal Server Error: /doc/charter-ietf-cat/\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/site-packages/django/core/handlers/exception.py\", line 34, in inner\r\n response = get_response(request)\r\n File \"/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py\", line 115, in _get_response\r\n response = self.process_exception_by_middleware(e, request)\r\n File \"/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py\", line 113, in _get_response\r\n response = wrapped_callback(request, *callback_args, **callback_kwargs)\r\n File \"/workspace/ietf/doc/views_doc.py\", line 548, in document_main\r\n can_manage=can_manage,\r\n File \"/usr/local/lib/python3.6/site-packages/django/shortcuts.py\", line 36, in render\r\n content = loader.render_to_string(template_name, context, request, using=using)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/loader.py\", line 62, in render_to_string\r\n return template.render(context, request)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/backends/django.py\", line 61, in render\r\n return self.template.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 171, in render\r\n return self._render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 163, in _render\r\n return self.nodelist.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 937, in render\r\n bit = node.render_annotated(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 904, in render_annotated\r\n return self.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/loader_tags.py\", line 150, in render\r\n return compiled_parent._render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 163, in _render\r\n return self.nodelist.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 937, in render\r\n bit = node.render_annotated(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 904, in render_annotated\r\n return self.render(context)\r\n File 
\"/usr/local/lib/python3.6/site-packages/django/template/loader_tags.py\", line 62, in render\r\n result = block.nodelist.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 937, in render\r\n bit = node.render_annotated(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 904, in render_annotated\r\n return self.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/defaulttags.py\", line 312, in render\r\n return nodelist.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 937, in render\r\n bit = node.render_annotated(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/base.py\", line 904, in render_annotated\r\n return self.render(context)\r\n File \"/usr/local/lib/python3.6/site-packages/django/template/library.py\", line 214, in render\r\n _dict = self.func(*resolved_args, **resolved_kwargs)\r\n File \"/workspace/ietf/person/templatetags/person_filters.py\", line 44, in person_link\r\n plain_name = person.plain_name()\r\nAttributeError: 'str' object has no attribute 'plain_name'\r\n```\n\n### What browser(s) are you seeing the problem on?\n\nNot Applicable\n\n### Code of Conduct\n\n- [X] I agree to follow the IETF's Code of Conduct\n", "before_files": [{"content": "# Copyright The IETF Trust 2017-2020, All Rights Reserved\n\nimport datetime\n\nfrom django import template\n\nimport debug # pyflakes:ignore\n\nfrom ietf.nomcom.utils import is_eligible\nfrom ietf.person.models import Alias\n\nregister = template.Library()\n\n\[email protected]\ndef is_nomcom_eligible(person, date=datetime.date.today()):\n return is_eligible(person=person, date=date)\n\n\[email protected]\ndef person_by_name(name):\n \"Look up a person record from name\"\n if not isinstance(name, (type(b\"\"), type(\"\"))):\n return None\n alias = Alias.objects.filter(name=name).first()\n return alias.person if alias else None\n\n\n# CLEANUP: There are several hundred Person objects with no Alias object,\n# violating the expectations of the code. The check for the existence of an\n# alias object below matching the person's name avoids presenting a link that\n# we know will 404. 
When the database is corrected and we can expect that the\n# Alias for the person's name to always be there, we can remove this extra\n# database query (or leave it as a safeguard until it becomes a performance\n# issue.)\n\n\[email protected]_tag(\"person/person_link.html\")\ndef person_link(person, **kwargs):\n title = kwargs.get(\"title\", \"\")\n cls = kwargs.get(\"class\", \"\")\n with_email = kwargs.get(\"with_email\", True)\n if person:\n plain_name = person.plain_name()\n name = (\n person.name\n if person.alias_set.filter(name=person.name).exists()\n else plain_name\n )\n email = person.email_address()\n return {\n \"name\": name,\n \"plain_name\": plain_name,\n \"email\": email,\n \"title\": title,\n \"class\": cls,\n \"with_email\": with_email,\n }\n else:\n return {}\n\n\[email protected]_tag(\"person/person_link.html\")\ndef email_person_link(email, **kwargs):\n title = kwargs.get(\"title\", \"\")\n cls = kwargs.get(\"class\", \"\")\n with_email = kwargs.get(\"with_email\", True)\n plain_name = email.person.plain_name()\n name = (\n email.person.name\n if email.person.alias_set.filter(name=email.person.name).exists()\n else plain_name\n )\n email = email.address\n return {\n \"name\": name,\n \"plain_name\": plain_name,\n \"email\": email,\n \"title\": title,\n \"class\": cls,\n \"with_email\": with_email,\n }", "path": "ietf/person/templatetags/person_filters.py"}], "after_files": [{"content": "# Copyright The IETF Trust 2017-2020, All Rights Reserved\n\nimport datetime\n\nfrom django import template\n\nimport debug # pyflakes:ignore\n\nfrom ietf.nomcom.utils import is_eligible\nfrom ietf.person.models import Alias\n\nregister = template.Library()\n\n\[email protected]\ndef is_nomcom_eligible(person, date=datetime.date.today()):\n return is_eligible(person=person, date=date)\n\n\[email protected]\ndef person_by_name(name):\n \"Look up a person record from name\"\n if not isinstance(name, (type(b\"\"), type(\"\"))):\n return None\n alias = Alias.objects.filter(name=name).first()\n return alias.person if alias else None\n\n\n# CLEANUP: There are several hundred Person objects with no Alias object,\n# violating the expectations of the code. The check for the existence of an\n# alias object below matching the person's name avoids presenting a link that\n# we know will 404. When the database is corrected and we can expect that the\n# Alias for the person's name to always be there, we can remove this extra\n# database query (or leave it as a safeguard until it becomes a performance\n# issue.)\n\n\[email protected]_tag(\"person/person_link.html\")\ndef person_link(person, **kwargs):\n \"\"\"Render a link to a Person\n\n If person is None or a string, renders as a span containing '(None)'.\n \"\"\"\n if isinstance(person, str):\n # If person is a string, most likely an invalid template variable was referenced.\n # That normally comes in as an empty string, but may be non-empty if string_if_invalid\n # is set. 
Translate strings into None to try to get consistent behavior.\n person = None\n title = kwargs.get(\"title\", \"\")\n cls = kwargs.get(\"class\", \"\")\n with_email = kwargs.get(\"with_email\", True)\n if person is not None:\n plain_name = person.plain_name()\n name = (\n person.name\n if person.alias_set.filter(name=person.name).exists()\n else plain_name\n )\n email = person.email_address()\n return {\n \"name\": name,\n \"plain_name\": plain_name,\n \"email\": email,\n \"title\": title,\n \"class\": cls,\n \"with_email\": with_email,\n }\n else:\n return {}\n\n\[email protected]_tag(\"person/person_link.html\")\ndef email_person_link(email, **kwargs):\n title = kwargs.get(\"title\", \"\")\n cls = kwargs.get(\"class\", \"\")\n with_email = kwargs.get(\"with_email\", True)\n plain_name = email.person.plain_name()\n name = (\n email.person.name\n if email.person.alias_set.filter(name=email.person.name).exists()\n else plain_name\n )\n email = email.address\n return {\n \"name\": name,\n \"plain_name\": plain_name,\n \"email\": email,\n \"title\": title,\n \"class\": cls,\n \"with_email\": with_email,\n }", "path": "ietf/person/templatetags/person_filters.py"}]} | 2,860 | 254 |
gh_patches_debug_29909 | rasdani/github-patches | git_diff | nf-core__tools-2031 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bump minimum required Nextflow version
### Description of feature
Latest stable release brings lots of new features that we probably want to use at module level (e.g. `bin` directories).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nf_core/lint/readme.py`
Content:
```
1 import os
2 import re
3
4
5 def readme(self):
6 """Repository ``README.md`` tests
7
8 The ``README.md`` files for a project are very important and must meet some requirements:
9
10 * Nextflow badge
11
12 * If no Nextflow badge is found, a warning is given
13 * If a badge is found but the version doesn't match the minimum version in the config file, the test fails
14 * Example badge code:
15
16 .. code-block:: md
17
18 [](https://www.nextflow.io/)
19
20 * Bioconda badge
21
22 * If your pipeline contains a file called ``environment.yml`` in the root directory, a bioconda badge is required
23 * Required badge code:
24
25 .. code-block:: md
26
27 [](https://bioconda.github.io/)
28
29 .. note:: These badges are a markdown image ```` *inside* a markdown link ``[markdown image](<link URL>)``, so a bit fiddly to write.
30 """
31 passed = []
32 warned = []
33 failed = []
34
35 # Remove field that should be ignored according to the linting config
36 ignore_configs = self.lint_config.get("readme", [])
37
38 with open(os.path.join(self.wf_path, "README.md"), "r") as fh:
39 content = fh.read()
40
41 if "nextflow_badge" not in ignore_configs:
42 # Check that there is a readme badge showing the minimum required version of Nextflow
43 # [](https://www.nextflow.io/)
44 # and that it has the correct version
45 nf_badge_re = r"\[!\[Nextflow\]\(https://img\.shields\.io/badge/nextflow%20DSL2-!?(?:%E2%89%A5|%3E%3D)([\d\.]+)-23aa62\.svg\)\]\(https://www\.nextflow\.io/\)"
46 match = re.search(nf_badge_re, content)
47 if match:
48 nf_badge_version = match.group(1).strip("'\"")
49 try:
50 if nf_badge_version != self.minNextflowVersion:
51 raise AssertionError()
52 except (AssertionError, KeyError):
53 failed.append(
54 f"README Nextflow minimum version badge does not match config. Badge: `{nf_badge_version}`, "
55 f"Config: `{self.minNextflowVersion}`"
56 )
57 else:
58 passed.append(
59 f"README Nextflow minimum version badge matched config. Badge: `{nf_badge_version}`, "
60 f"Config: `{self.minNextflowVersion}`"
61 )
62 else:
63 warned.append("README did not have a Nextflow minimum version badge.")
64
65 # Check that the minimum version mentioned in the quick start section is consistent
66 # Looking for: "1. Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=21.10.3`)"
67 nf_version_re = r"1\.\s*Install\s*\[`Nextflow`\]\(https://www.nextflow.io/docs/latest/getstarted.html#installation\)\s*\(`>=(\d*\.\d*\.\d*)`\)"
68 match = re.search(nf_version_re, content)
69 if match:
70 nf_quickstart_version = match.group(1)
71 try:
72 if nf_quickstart_version != self.minNextflowVersion:
73 raise AssertionError()
74 except (AssertionError, KeyError):
75 failed.append(
76 f"README Nextflow minimium version in Quick Start section does not match config. README: `{nf_quickstart_version}`, Config `{self.minNextflowVersion}`"
77 )
78 else:
79 passed.append(
80 f"README Nextflow minimum version in Quick Start section matched config. README: `{nf_quickstart_version}`, Config: `{self.minNextflowVersion}`"
81 )
82 else:
83 warned.append("README did not have a Nextflow minimum version mentioned in Quick Start section.")
84
85 return {"passed": passed, "warned": warned, "failed": failed}
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nf_core/lint/readme.py b/nf_core/lint/readme.py
--- a/nf_core/lint/readme.py
+++ b/nf_core/lint/readme.py
@@ -40,7 +40,7 @@
if "nextflow_badge" not in ignore_configs:
# Check that there is a readme badge showing the minimum required version of Nextflow
- # [](https://www.nextflow.io/)
+ # [](https://www.nextflow.io/)
# and that it has the correct version
nf_badge_re = r"\[!\[Nextflow\]\(https://img\.shields\.io/badge/nextflow%20DSL2-!?(?:%E2%89%A5|%3E%3D)([\d\.]+)-23aa62\.svg\)\]\(https://www\.nextflow\.io/\)"
match = re.search(nf_badge_re, content)
@@ -63,7 +63,7 @@
warned.append("README did not have a Nextflow minimum version badge.")
# Check that the minimum version mentioned in the quick start section is consistent
- # Looking for: "1. Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=21.10.3`)"
+ # Looking for: "1. Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=22.10.1`)"
nf_version_re = r"1\.\s*Install\s*\[`Nextflow`\]\(https://www.nextflow.io/docs/latest/getstarted.html#installation\)\s*\(`>=(\d*\.\d*\.\d*)`\)"
match = re.search(nf_version_re, content)
if match:
| {"golden_diff": "diff --git a/nf_core/lint/readme.py b/nf_core/lint/readme.py\n--- a/nf_core/lint/readme.py\n+++ b/nf_core/lint/readme.py\n@@ -40,7 +40,7 @@\n \n if \"nextflow_badge\" not in ignore_configs:\n # Check that there is a readme badge showing the minimum required version of Nextflow\n- # [](https://www.nextflow.io/)\n+ # [](https://www.nextflow.io/)\n # and that it has the correct version\n nf_badge_re = r\"\\[!\\[Nextflow\\]\\(https://img\\.shields\\.io/badge/nextflow%20DSL2-!?(?:%E2%89%A5|%3E%3D)([\\d\\.]+)-23aa62\\.svg\\)\\]\\(https://www\\.nextflow\\.io/\\)\"\n match = re.search(nf_badge_re, content)\n@@ -63,7 +63,7 @@\n warned.append(\"README did not have a Nextflow minimum version badge.\")\n \n # Check that the minimum version mentioned in the quick start section is consistent\n- # Looking for: \"1. Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=21.10.3`)\"\n+ # Looking for: \"1. Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=22.10.1`)\"\n nf_version_re = r\"1\\.\\s*Install\\s*\\[`Nextflow`\\]\\(https://www.nextflow.io/docs/latest/getstarted.html#installation\\)\\s*\\(`>=(\\d*\\.\\d*\\.\\d*)`\\)\"\n match = re.search(nf_version_re, content)\n if match:\n", "issue": "Bump minimum required Nextflow version\n### Description of feature\n\nLatest stable release brings lots of new features that we probably want to use at module level (eg. `bin` directories).\n", "before_files": [{"content": "import os\nimport re\n\n\ndef readme(self):\n \"\"\"Repository ``README.md`` tests\n\n The ``README.md`` files for a project are very important and must meet some requirements:\n\n * Nextflow badge\n\n * If no Nextflow badge is found, a warning is given\n * If a badge is found but the version doesn't match the minimum version in the config file, the test fails\n * Example badge code:\n\n .. code-block:: md\n\n [](https://www.nextflow.io/)\n\n * Bioconda badge\n\n * If your pipeline contains a file called ``environment.yml`` in the root directory, a bioconda badge is required\n * Required badge code:\n\n .. code-block:: md\n\n [](https://bioconda.github.io/)\n\n .. note:: These badges are a markdown image ```` *inside* a markdown link ``[markdown image](<link URL>)``, so a bit fiddly to write.\n \"\"\"\n passed = []\n warned = []\n failed = []\n\n # Remove field that should be ignored according to the linting config\n ignore_configs = self.lint_config.get(\"readme\", [])\n\n with open(os.path.join(self.wf_path, \"README.md\"), \"r\") as fh:\n content = fh.read()\n\n if \"nextflow_badge\" not in ignore_configs:\n # Check that there is a readme badge showing the minimum required version of Nextflow\n # [](https://www.nextflow.io/)\n # and that it has the correct version\n nf_badge_re = r\"\\[!\\[Nextflow\\]\\(https://img\\.shields\\.io/badge/nextflow%20DSL2-!?(?:%E2%89%A5|%3E%3D)([\\d\\.]+)-23aa62\\.svg\\)\\]\\(https://www\\.nextflow\\.io/\\)\"\n match = re.search(nf_badge_re, content)\n if match:\n nf_badge_version = match.group(1).strip(\"'\\\"\")\n try:\n if nf_badge_version != self.minNextflowVersion:\n raise AssertionError()\n except (AssertionError, KeyError):\n failed.append(\n f\"README Nextflow minimum version badge does not match config. Badge: `{nf_badge_version}`, \"\n f\"Config: `{self.minNextflowVersion}`\"\n )\n else:\n passed.append(\n f\"README Nextflow minimum version badge matched config. 
Badge: `{nf_badge_version}`, \"\n f\"Config: `{self.minNextflowVersion}`\"\n )\n else:\n warned.append(\"README did not have a Nextflow minimum version badge.\")\n\n # Check that the minimum version mentioned in the quick start section is consistent\n # Looking for: \"1. Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=21.10.3`)\"\n nf_version_re = r\"1\\.\\s*Install\\s*\\[`Nextflow`\\]\\(https://www.nextflow.io/docs/latest/getstarted.html#installation\\)\\s*\\(`>=(\\d*\\.\\d*\\.\\d*)`\\)\"\n match = re.search(nf_version_re, content)\n if match:\n nf_quickstart_version = match.group(1)\n try:\n if nf_quickstart_version != self.minNextflowVersion:\n raise AssertionError()\n except (AssertionError, KeyError):\n failed.append(\n f\"README Nextflow minimium version in Quick Start section does not match config. README: `{nf_quickstart_version}`, Config `{self.minNextflowVersion}`\"\n )\n else:\n passed.append(\n f\"README Nextflow minimum version in Quick Start section matched config. README: `{nf_quickstart_version}`, Config: `{self.minNextflowVersion}`\"\n )\n else:\n warned.append(\"README did not have a Nextflow minimum version mentioned in Quick Start section.\")\n\n return {\"passed\": passed, \"warned\": warned, \"failed\": failed}\n", "path": "nf_core/lint/readme.py"}], "after_files": [{"content": "import os\nimport re\n\n\ndef readme(self):\n \"\"\"Repository ``README.md`` tests\n\n The ``README.md`` files for a project are very important and must meet some requirements:\n\n * Nextflow badge\n\n * If no Nextflow badge is found, a warning is given\n * If a badge is found but the version doesn't match the minimum version in the config file, the test fails\n * Example badge code:\n\n .. code-block:: md\n\n [](https://www.nextflow.io/)\n\n * Bioconda badge\n\n * If your pipeline contains a file called ``environment.yml`` in the root directory, a bioconda badge is required\n * Required badge code:\n\n .. code-block:: md\n\n [](https://bioconda.github.io/)\n\n .. note:: These badges are a markdown image ```` *inside* a markdown link ``[markdown image](<link URL>)``, so a bit fiddly to write.\n \"\"\"\n passed = []\n warned = []\n failed = []\n\n # Remove field that should be ignored according to the linting config\n ignore_configs = self.lint_config.get(\"readme\", [])\n\n with open(os.path.join(self.wf_path, \"README.md\"), \"r\") as fh:\n content = fh.read()\n\n if \"nextflow_badge\" not in ignore_configs:\n # Check that there is a readme badge showing the minimum required version of Nextflow\n # [](https://www.nextflow.io/)\n # and that it has the correct version\n nf_badge_re = r\"\\[!\\[Nextflow\\]\\(https://img\\.shields\\.io/badge/nextflow%20DSL2-!?(?:%E2%89%A5|%3E%3D)([\\d\\.]+)-23aa62\\.svg\\)\\]\\(https://www\\.nextflow\\.io/\\)\"\n match = re.search(nf_badge_re, content)\n if match:\n nf_badge_version = match.group(1).strip(\"'\\\"\")\n try:\n if nf_badge_version != self.minNextflowVersion:\n raise AssertionError()\n except (AssertionError, KeyError):\n failed.append(\n f\"README Nextflow minimum version badge does not match config. Badge: `{nf_badge_version}`, \"\n f\"Config: `{self.minNextflowVersion}`\"\n )\n else:\n passed.append(\n f\"README Nextflow minimum version badge matched config. 
Badge: `{nf_badge_version}`, \"\n f\"Config: `{self.minNextflowVersion}`\"\n )\n else:\n warned.append(\"README did not have a Nextflow minimum version badge.\")\n\n # Check that the minimum version mentioned in the quick start section is consistent\n # Looking for: \"1. Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=22.10.1`)\"\n nf_version_re = r\"1\\.\\s*Install\\s*\\[`Nextflow`\\]\\(https://www.nextflow.io/docs/latest/getstarted.html#installation\\)\\s*\\(`>=(\\d*\\.\\d*\\.\\d*)`\\)\"\n match = re.search(nf_version_re, content)\n if match:\n nf_quickstart_version = match.group(1)\n try:\n if nf_quickstart_version != self.minNextflowVersion:\n raise AssertionError()\n except (AssertionError, KeyError):\n failed.append(\n f\"README Nextflow minimium version in Quick Start section does not match config. README: `{nf_quickstart_version}`, Config `{self.minNextflowVersion}`\"\n )\n else:\n passed.append(\n f\"README Nextflow minimum version in Quick Start section matched config. README: `{nf_quickstart_version}`, Config: `{self.minNextflowVersion}`\"\n )\n else:\n warned.append(\"README did not have a Nextflow minimum version mentioned in Quick Start section.\")\n\n return {\"passed\": passed, \"warned\": warned, \"failed\": failed}\n", "path": "nf_core/lint/readme.py"}]} | 1,437 | 490 |
gh_patches_debug_937 | rasdani/github-patches | git_diff | boto__boto-2166 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Invalid path check in euca-bundle-image
The -i option uses convert_file in boto/roboto/param.py to verify that the path passed is, indeed, a file. This fails unless the path specified is a boring old regular file, which is not necessary. Indeed, it not being necessary is sort of the whole point of Unix having a /dev in the first place. Everything is a file.
The code calls os.path.isfile(value) in convert_file(). It should instead check os.path.exists(value) and not os.path.isdir(value). Directories are the only type of file that needs to be considered special in the normal course of events.
--- END ISSUE ---
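The behaviour the issue asks for is "accept any existing path that is not a directory", so device nodes such as `/dev/stdin`, FIFOs and sockets pass while directories are still rejected. A minimal sketch of that predicate, independent of boto (the function name is illustrative):
```
import os


def is_usable_file(path):
    # Accept anything that exists and is not a directory: regular files,
    # device nodes (/dev/null, /dev/stdin), FIFOs, sockets, ...
    return os.path.exists(path) and not os.path.isdir(path)


print(is_usable_file("/dev/null"))   # True  -- os.path.isfile() would say False here
print(is_usable_file("/tmp"))        # False -- directories stay special
print(is_usable_file("/no/such"))    # False -- the path must actually exist
```
The golden diff below applies exactly this change inside `Converter.convert_file`.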
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `boto/roboto/param.py`
Content:
```
1 # Copyright (c) 2010 Mitch Garnaat http://garnaat.org/
2 # Copyright (c) 2010, Eucalyptus Systems, Inc.
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining a
5 # copy of this software and associated documentation files (the
6 # "Software"), to deal in the Software without restriction, including
7 # without limitation the rights to use, copy, modify, merge, publish, dis-
8 # tribute, sublicense, and/or sell copies of the Software, and to permit
9 # persons to whom the Software is furnished to do so, subject to the fol-
10 # lowing conditions:
11 #
12 # The above copyright notice and this permission notice shall be included
13 # in all copies or substantial portions of the Software.
14 #
15 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
16 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
17 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
18 # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
19 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
21 # IN THE SOFTWARE.
22
23 import os
24
25 class Converter(object):
26
27 @classmethod
28 def convert_string(cls, param, value):
29 # TODO: could do length validation, etc. here
30 if not isinstance(value, basestring):
31 raise ValueError
32 return value
33
34 @classmethod
35 def convert_integer(cls, param, value):
36 # TODO: could do range checking here
37 return int(value)
38
39 @classmethod
40 def convert_boolean(cls, param, value):
41 """
42 For command line arguments, just the presence
43 of the option means True so just return True
44 """
45 return True
46
47 @classmethod
48 def convert_file(cls, param, value):
49 if os.path.isfile(value):
50 return value
51 raise ValueError
52
53 @classmethod
54 def convert_dir(cls, param, value):
55 if os.path.isdir(value):
56 return value
57 raise ValueError
58
59 @classmethod
60 def convert(cls, param, value):
61 try:
62 if hasattr(cls, 'convert_'+param.ptype):
63 mthd = getattr(cls, 'convert_'+param.ptype)
64 else:
65 mthd = cls.convert_string
66 return mthd(param, value)
67 except:
68 raise ValidationException(param, '')
69
70 class Param(Converter):
71
72 def __init__(self, name=None, ptype='string', optional=True,
73 short_name=None, long_name=None, doc='',
74 metavar=None, cardinality=1, default=None,
75 choices=None, encoder=None, request_param=True):
76 self.name = name
77 self.ptype = ptype
78 self.optional = optional
79 self.short_name = short_name
80 self.long_name = long_name
81 self.doc = doc
82 self.metavar = metavar
83 self.cardinality = cardinality
84 self.default = default
85 self.choices = choices
86 self.encoder = encoder
87 self.request_param = request_param
88
89 @property
90 def optparse_long_name(self):
91 ln = None
92 if self.long_name:
93 ln = '--%s' % self.long_name
94 return ln
95
96 @property
97 def synopsis_long_name(self):
98 ln = None
99 if self.long_name:
100 ln = '--%s' % self.long_name
101 return ln
102
103 @property
104 def getopt_long_name(self):
105 ln = None
106 if self.long_name:
107 ln = '%s' % self.long_name
108 if self.ptype != 'boolean':
109 ln += '='
110 return ln
111
112 @property
113 def optparse_short_name(self):
114 sn = None
115 if self.short_name:
116 sn = '-%s' % self.short_name
117 return sn
118
119 @property
120 def synopsis_short_name(self):
121 sn = None
122 if self.short_name:
123 sn = '-%s' % self.short_name
124 return sn
125
126 @property
127 def getopt_short_name(self):
128 sn = None
129 if self.short_name:
130 sn = '%s' % self.short_name
131 if self.ptype != 'boolean':
132 sn += ':'
133 return sn
134
135 def convert(self, value):
136 """
137 Convert a string value as received in the command line
138 tools and convert to the appropriate type of value.
139 Raise a ValidationError if the value can't be converted.
140
141 :type value: str
142 :param value: The value to convert. This should always
143 be a string.
144 """
145 return super(Param, self).convert(self,value)
146
147
148
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/boto/roboto/param.py b/boto/roboto/param.py
--- a/boto/roboto/param.py
+++ b/boto/roboto/param.py
@@ -46,7 +46,7 @@
@classmethod
def convert_file(cls, param, value):
- if os.path.isfile(value):
+ if os.path.exists(value) and not os.path.isdir(value):
return value
raise ValueError
| {"golden_diff": "diff --git a/boto/roboto/param.py b/boto/roboto/param.py\n--- a/boto/roboto/param.py\n+++ b/boto/roboto/param.py\n@@ -46,7 +46,7 @@\n \n @classmethod\n def convert_file(cls, param, value):\n- if os.path.isfile(value):\n+ if os.path.exists(value) and not os.path.isdir(value):\n return value\n raise ValueError\n", "issue": "Invalid path check in euca-bundle-image\nThe -i option uses convert_file in boto/roboto/param.py to verify that the path passed is, indeed, a file. This fails unless the path specified is a boring old file which is not necessary. Indeed it not being necessary is sort of the whole point in unix having a /dev in the first place. Everything is a file.\n\nThe code calls os.path.isfile(value) in convert_file(). It should call os.path.exists(value) and not os.path.isdir(value). Directories are the only types of files which need to be considered special in the normal course of events.\n\n", "before_files": [{"content": "# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/\n# Copyright (c) 2010, Eucalyptus Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n\nimport os\n\nclass Converter(object):\n\n @classmethod\n def convert_string(cls, param, value):\n # TODO: could do length validation, etc. 
here\n if not isinstance(value, basestring):\n raise ValueError\n return value\n\n @classmethod\n def convert_integer(cls, param, value):\n # TODO: could do range checking here\n return int(value)\n\n @classmethod\n def convert_boolean(cls, param, value):\n \"\"\"\n For command line arguments, just the presence\n of the option means True so just return True\n \"\"\"\n return True\n\n @classmethod\n def convert_file(cls, param, value):\n if os.path.isfile(value):\n return value\n raise ValueError\n\n @classmethod\n def convert_dir(cls, param, value):\n if os.path.isdir(value):\n return value\n raise ValueError\n\n @classmethod\n def convert(cls, param, value):\n try:\n if hasattr(cls, 'convert_'+param.ptype):\n mthd = getattr(cls, 'convert_'+param.ptype)\n else:\n mthd = cls.convert_string\n return mthd(param, value)\n except:\n raise ValidationException(param, '')\n\nclass Param(Converter):\n\n def __init__(self, name=None, ptype='string', optional=True,\n short_name=None, long_name=None, doc='',\n metavar=None, cardinality=1, default=None,\n choices=None, encoder=None, request_param=True):\n self.name = name\n self.ptype = ptype\n self.optional = optional\n self.short_name = short_name\n self.long_name = long_name\n self.doc = doc\n self.metavar = metavar\n self.cardinality = cardinality\n self.default = default\n self.choices = choices\n self.encoder = encoder\n self.request_param = request_param\n\n @property\n def optparse_long_name(self):\n ln = None\n if self.long_name:\n ln = '--%s' % self.long_name\n return ln\n\n @property\n def synopsis_long_name(self):\n ln = None\n if self.long_name:\n ln = '--%s' % self.long_name\n return ln\n\n @property\n def getopt_long_name(self):\n ln = None\n if self.long_name:\n ln = '%s' % self.long_name\n if self.ptype != 'boolean':\n ln += '='\n return ln\n\n @property\n def optparse_short_name(self):\n sn = None\n if self.short_name:\n sn = '-%s' % self.short_name\n return sn\n\n @property\n def synopsis_short_name(self):\n sn = None\n if self.short_name:\n sn = '-%s' % self.short_name\n return sn\n\n @property\n def getopt_short_name(self):\n sn = None\n if self.short_name:\n sn = '%s' % self.short_name\n if self.ptype != 'boolean':\n sn += ':'\n return sn\n\n def convert(self, value):\n \"\"\"\n Convert a string value as received in the command line\n tools and convert to the appropriate type of value.\n Raise a ValidationError if the value can't be converted.\n\n :type value: str\n :param value: The value to convert. 
This should always\n be a string.\n \"\"\"\n return super(Param, self).convert(self,value)\n\n\n", "path": "boto/roboto/param.py"}], "after_files": [{"content": "# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/\n# Copyright (c) 2010, Eucalyptus Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n\nimport os\n\nclass Converter(object):\n\n @classmethod\n def convert_string(cls, param, value):\n # TODO: could do length validation, etc. here\n if not isinstance(value, basestring):\n raise ValueError\n return value\n\n @classmethod\n def convert_integer(cls, param, value):\n # TODO: could do range checking here\n return int(value)\n\n @classmethod\n def convert_boolean(cls, param, value):\n \"\"\"\n For command line arguments, just the presence\n of the option means True so just return True\n \"\"\"\n return True\n\n @classmethod\n def convert_file(cls, param, value):\n if os.path.exists(value) and not os.path.isdir(value):\n return value\n raise ValueError\n\n @classmethod\n def convert_dir(cls, param, value):\n if os.path.isdir(value):\n return value\n raise ValueError\n\n @classmethod\n def convert(cls, param, value):\n try:\n if hasattr(cls, 'convert_'+param.ptype):\n mthd = getattr(cls, 'convert_'+param.ptype)\n else:\n mthd = cls.convert_string\n return mthd(param, value)\n except:\n raise ValidationException(param, '')\n\nclass Param(Converter):\n\n def __init__(self, name=None, ptype='string', optional=True,\n short_name=None, long_name=None, doc='',\n metavar=None, cardinality=1, default=None,\n choices=None, encoder=None, request_param=True):\n self.name = name\n self.ptype = ptype\n self.optional = optional\n self.short_name = short_name\n self.long_name = long_name\n self.doc = doc\n self.metavar = metavar\n self.cardinality = cardinality\n self.default = default\n self.choices = choices\n self.encoder = encoder\n self.request_param = request_param\n\n @property\n def optparse_long_name(self):\n ln = None\n if self.long_name:\n ln = '--%s' % self.long_name\n return ln\n\n @property\n def synopsis_long_name(self):\n ln = None\n if self.long_name:\n ln = '--%s' % self.long_name\n return ln\n\n @property\n def getopt_long_name(self):\n ln = None\n if self.long_name:\n ln = '%s' % self.long_name\n if self.ptype != 'boolean':\n ln += '='\n return ln\n\n @property\n def optparse_short_name(self):\n sn = None\n if self.short_name:\n sn = '-%s' % self.short_name\n return sn\n\n @property\n def synopsis_short_name(self):\n sn = None\n if self.short_name:\n sn = '-%s' % 
self.short_name\n return sn\n\n @property\n def getopt_short_name(self):\n sn = None\n if self.short_name:\n sn = '%s' % self.short_name\n if self.ptype != 'boolean':\n sn += ':'\n return sn\n\n def convert(self, value):\n \"\"\"\n Convert a string value as received in the command line\n tools and convert to the appropriate type of value.\n Raise a ValidationError if the value can't be converted.\n\n :type value: str\n :param value: The value to convert. This should always\n be a string.\n \"\"\"\n return super(Param, self).convert(self,value)\n\n\n", "path": "boto/roboto/param.py"}]} | 1,750 | 102 |
gh_patches_debug_22109 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-259 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Warn about wrong values for SCOUT_AGENT_TRIPLE
Follow-on to #239 - it's easy to override `SCOUT_AGENT_TRIPLE` to a wrong value. We should add some validation here with a log message if it looks wrong.
--- END ISSUE ---
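A minimal sketch of the kind of sanity check the issue is asking for; the helper names and accepted value sets below are assumptions for illustration, not scout_apm's actual API:

```python
import logging

logger = logging.getLogger(__name__)

# Hypothetical validator: accepts "<arch>-<platform>" strings such as
# "x86_64-unknown-linux-gnu" and only warns (does not fail) otherwise.
KNOWN_ARCHES = {"i686", "x86_64", "unknown"}
KNOWN_PLATFORMS = {"unknown-linux-gnu", "unknown-linux-musl", "apple-darwin", "unknown"}


def looks_like_valid_triple(triple):
    arch, _, rest = triple.partition("-")
    return arch in KNOWN_ARCHES and rest in KNOWN_PLATFORMS


def warn_if_bad_triple(triple):
    if not looks_like_valid_triple(triple):
        logger.warning("Suspicious value for SCOUT_AGENT_TRIPLE: %s", triple)
    return triple
```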
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/scout_apm/core/platform_detection.py`
Content:
```
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import platform
5 import subprocess
6
7
8 def get_triple():
9 return "{arch}-{platform}".format(arch=get_arch(), platform=get_platform())
10
11
12 def get_arch():
13 """
14 What CPU are we on?
15 """
16 arch = platform.machine()
17 if arch == "i686":
18 return "i686"
19 elif arch == "x86_64":
20 return "x86_64"
21 else:
22 return "unknown"
23
24
25 def get_platform():
26 """
27 What Operating System (and sub-system like glibc / musl)
28 """
29 system_name = platform.system()
30 if system_name == "Linux":
31 libc = get_libc()
32 return "unknown-linux-{libc}".format(libc=libc)
33 elif system_name == "Darwin":
34 return "apple-darwin"
35 else:
36 return "unknown"
37
38
39 _libc = None
40
41
42 def get_libc():
43 """
44 Alpine linux uses a non glibc version of the standard library, it uses
45 the stripped down musl instead. The core agent can be built against it,
46 but which one is running must be detected. Shelling out to `ldd`
47 appears to be the most reliable way to do this.
48 """
49 global _libc
50 if _libc is None:
51 try:
52 output = subprocess.check_output(
53 ["ldd", "--version"], stderr=subprocess.STDOUT, close_fds=True
54 )
55 except (OSError, subprocess.CalledProcessError):
56 _libc = "gnu"
57 else:
58 if b"musl" in output:
59 _libc = "musl"
60 else:
61 _libc = "gnu"
62 return _libc
63
```
Path: `src/scout_apm/core/config.py`
Content:
```
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import logging
5 import os
6
7 from scout_apm.compat import string_type
8 from scout_apm.core import platform_detection
9 from scout_apm.core.util import octal
10
11 logger = logging.getLogger(__name__)
12
13
14 class ScoutConfig(object):
15 """
16 Configuration object for the ScoutApm agent.
17
18 Contains a list of configuration "layers". When a configuration key is
19 looked up, each layer is asked in turn if it knows the value. The first one
20 to answer affirmatively returns the value.
21 """
22
23 def __init__(self):
24 self.layers = [
25 ScoutConfigEnv(),
26 ScoutConfigPython(),
27 ScoutConfigDerived(self),
28 ScoutConfigDefaults(),
29 ScoutConfigNull(),
30 ]
31
32 def value(self, key):
33 value = self.locate_layer_for_key(key).value(key)
34 if key in CONVERSIONS:
35 return CONVERSIONS[key](value)
36 return value
37
38 def locate_layer_for_key(self, key):
39 for layer in self.layers:
40 if layer.has_config(key):
41 return layer
42
43 # Should be unreachable because ScoutConfigNull returns None for all
44 # keys.
45 raise ValueError("key {!r} not found in any layer".format(key))
46
47 def log(self):
48 logger.debug("Configuration Loaded:")
49 for key in self.known_keys():
50 layer = self.locate_layer_for_key(key)
51 logger.debug("%-9s: %s = %s", layer.name(), key, layer.value(key))
52
53 def known_keys(self):
54 return [
55 "app_server",
56 "application_root",
57 "core_agent_dir",
58 "core_agent_download",
59 "core_agent_launch",
60 "core_agent_permissions",
61 "core_agent_version",
62 "disabled_instruments",
63 "download_url",
64 "framework",
65 "framework_version",
66 "hostname",
67 "ignore",
68 "key",
69 "log_level",
70 "monitor",
71 "name",
72 "revision_sha",
73 "scm_subdirectory",
74 "socket_path",
75 ]
76
77 def core_agent_permissions(self):
78 try:
79 return octal(self.value("core_agent_permissions"))
80 except ValueError:
81 logger.exception(
82 "Invalid core_agent_permissions value, using default of 0o700"
83 )
84 return 0o700
85
86 @classmethod
87 def set(cls, **kwargs):
88 """
89 Sets a configuration value for the Scout agent. Values set here will
90 not override values set in ENV.
91 """
92 global SCOUT_PYTHON_VALUES
93 for key, value in kwargs.items():
94 SCOUT_PYTHON_VALUES[key] = value
95
96 @classmethod
97 def unset(cls, *keys):
98 """
99 Removes a configuration value for the Scout agent.
100 """
101 global SCOUT_PYTHON_VALUES
102 for key in keys:
103 SCOUT_PYTHON_VALUES.pop(key, None)
104
105 @classmethod
106 def reset_all(cls):
107 """
108 Remove all configuration settings set via `ScoutConfig.set(...)`.
109
110 This is meant for use in testing.
111 """
112 global SCOUT_PYTHON_VALUES
113 SCOUT_PYTHON_VALUES.clear()
114
115
116 # Module-level data, the ScoutConfig.set(key="value") adds to this
117 SCOUT_PYTHON_VALUES = {}
118
119
120 class ScoutConfigPython(object):
121 """
122 A configuration overlay that lets other parts of python set values.
123 """
124
125 def name(self):
126 return "Python"
127
128 def has_config(self, key):
129 return key in SCOUT_PYTHON_VALUES
130
131 def value(self, key):
132 return SCOUT_PYTHON_VALUES[key]
133
134
135 class ScoutConfigEnv(object):
136 """
137 Reads configuration from environment by prefixing the key
138 requested with "SCOUT_"
139
140 Example: the `log_level` config looks for SCOUT_LOG_LEVEL
141 environment variable
142 """
143
144 def name(self):
145 return "ENV"
146
147 def has_config(self, key):
148 env_key = self.modify_key(key)
149 return env_key in os.environ
150
151 def value(self, key):
152 env_key = self.modify_key(key)
153 return os.environ[env_key]
154
155 def modify_key(self, key):
156 env_key = ("SCOUT_" + key).upper()
157 return env_key
158
159
160 class ScoutConfigDerived(object):
161 """
162 A configuration overlay that calculates from other values.
163 """
164
165 def __init__(self, config):
166 """
167 config argument is the overall ScoutConfig var, so we can lookup the
168 components of the derived info.
169 """
170 self.config = config
171
172 def name(self):
173 return "Derived"
174
175 def has_config(self, key):
176 return self.lookup_func(key) is not None
177
178 def value(self, key):
179 return self.lookup_func(key)()
180
181 def lookup_func(self, key):
182 """
183 Returns the derive_#{key} function, or None if it isn't defined
184 """
185 func_name = "derive_" + key
186 return getattr(self, func_name, None)
187
188 def derive_socket_path(self):
189 return "{}/{}/scout-agent.sock".format(
190 self.config.value("core_agent_dir"),
191 self.config.value("core_agent_full_name"),
192 )
193
194 def derive_core_agent_full_name(self):
195 return "{name}-{version}-{triple}".format(
196 name="scout_apm_core",
197 version=self.config.value("core_agent_version"),
198 triple=self.config.value("core_agent_triple"),
199 )
200
201 def derive_core_agent_triple(self):
202 return platform_detection.get_triple()
203
204
205 class ScoutConfigDefaults(object):
206 """
207 Provides default values for important configurations
208 """
209
210 def name(self):
211 return "Defaults"
212
213 def __init__(self):
214 self.defaults = {
215 "app_server": "",
216 "application_root": "",
217 "core_agent_dir": "/tmp/scout_apm_core",
218 "core_agent_download": True,
219 "core_agent_launch": True,
220 "core_agent_permissions": 700,
221 "core_agent_version": "v1.2.0", # can be an exact tag name, or 'latest'
222 "disabled_instruments": [],
223 "download_url": "https://s3-us-west-1.amazonaws.com/scout-public-downloads/apm_core_agent/release", # noqa: E501
224 "framework": "",
225 "framework_version": "",
226 "hostname": None,
227 "key": "",
228 "log_level": "info",
229 "monitor": False,
230 "name": "",
231 "revision_sha": self._git_revision_sha(),
232 "scm_subdirectory": "",
233 }
234
235 def _git_revision_sha(self):
236 # N.B. The environment variable SCOUT_REVISION_SHA may also be used,
237 # but that will be picked up by ScoutConfigEnv
238 return os.environ.get("HEROKU_SLUG_COMMIT", "")
239
240 def has_config(self, key):
241 return key in self.defaults
242
243 def value(self, key):
244 return self.defaults[key]
245
246
247 # Always returns None to any key
248 class ScoutConfigNull(object):
249 """
250 Always answers that a key is present, but the value is None
251
252 Used as the last step of the layered configuration.
253 """
254
255 def name(self):
256 return "Null"
257
258 def has_config(self, key):
259 return True
260
261 def value(self, key):
262 return None
263
264
265 def convert_to_bool(value):
266 if isinstance(value, bool):
267 return value
268 if isinstance(value, string_type):
269 return value.lower() in ("yes", "true", "t", "1")
270 # Unknown type - default to false?
271 return False
272
273
274 def convert_to_list(value):
275 if isinstance(value, list):
276 return value
277 if isinstance(value, tuple):
278 return list(value)
279 if isinstance(value, string_type):
280 # Split on commas
281 return [item.strip() for item in value.split(",") if item]
282 # Unknown type - default to empty?
283 return []
284
285
286 CONVERSIONS = {
287 "core_agent_download": convert_to_bool,
288 "core_agent_launch": convert_to_bool,
289 "monitor": convert_to_bool,
290 "disabled_instruments": convert_to_list,
291 "ignore": convert_to_list,
292 }
293
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/scout_apm/core/config.py b/src/scout_apm/core/config.py
--- a/src/scout_apm/core/config.py
+++ b/src/scout_apm/core/config.py
@@ -192,10 +192,13 @@
)
def derive_core_agent_full_name(self):
+ triple = self.config.value("core_agent_triple")
+ if not platform_detection.is_valid_triple(triple):
+ logger.warning("Invalid value for core_agent_triple: %s", triple)
return "{name}-{version}-{triple}".format(
name="scout_apm_core",
version=self.config.value("core_agent_version"),
- triple=self.config.value("core_agent_triple"),
+ triple=triple,
)
def derive_core_agent_triple(self):
diff --git a/src/scout_apm/core/platform_detection.py b/src/scout_apm/core/platform_detection.py
--- a/src/scout_apm/core/platform_detection.py
+++ b/src/scout_apm/core/platform_detection.py
@@ -5,6 +5,16 @@
import subprocess
+def is_valid_triple(triple):
+ values = triple.split("-", 1)
+ return (
+ len(values) == 2
+ and values[0] in ("i686", "x86_64", "unknown")
+ and values[1]
+ in ("unknown-linux-gnu", "unknown-linux-musl", "apple-darwin", "unknown")
+ )
+
+
def get_triple():
return "{arch}-{platform}".format(arch=get_arch(), platform=get_platform())
| {"golden_diff": "diff --git a/src/scout_apm/core/config.py b/src/scout_apm/core/config.py\n--- a/src/scout_apm/core/config.py\n+++ b/src/scout_apm/core/config.py\n@@ -192,10 +192,13 @@\n )\n \n def derive_core_agent_full_name(self):\n+ triple = self.config.value(\"core_agent_triple\")\n+ if not platform_detection.is_valid_triple(triple):\n+ logger.warning(\"Invalid value for core_agent_triple: %s\", triple)\n return \"{name}-{version}-{triple}\".format(\n name=\"scout_apm_core\",\n version=self.config.value(\"core_agent_version\"),\n- triple=self.config.value(\"core_agent_triple\"),\n+ triple=triple,\n )\n \n def derive_core_agent_triple(self):\ndiff --git a/src/scout_apm/core/platform_detection.py b/src/scout_apm/core/platform_detection.py\n--- a/src/scout_apm/core/platform_detection.py\n+++ b/src/scout_apm/core/platform_detection.py\n@@ -5,6 +5,16 @@\n import subprocess\n \n \n+def is_valid_triple(triple):\n+ values = triple.split(\"-\", 1)\n+ return (\n+ len(values) == 2\n+ and values[0] in (\"i686\", \"x86_64\", \"unknown\")\n+ and values[1]\n+ in (\"unknown-linux-gnu\", \"unknown-linux-musl\", \"apple-darwin\", \"unknown\")\n+ )\n+\n+\n def get_triple():\n return \"{arch}-{platform}\".format(arch=get_arch(), platform=get_platform())\n", "issue": "Warn about wrong values for SCOUT_AGENT_TRIPLE\nFollow-on to #239 - it's easy to override `SCOUT_AGENT_TRIPLE` to a wrong value. We should add some validation here with a log message if it looks wrong.\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport platform\nimport subprocess\n\n\ndef get_triple():\n return \"{arch}-{platform}\".format(arch=get_arch(), platform=get_platform())\n\n\ndef get_arch():\n \"\"\"\n What CPU are we on?\n \"\"\"\n arch = platform.machine()\n if arch == \"i686\":\n return \"i686\"\n elif arch == \"x86_64\":\n return \"x86_64\"\n else:\n return \"unknown\"\n\n\ndef get_platform():\n \"\"\"\n What Operating System (and sub-system like glibc / musl)\n \"\"\"\n system_name = platform.system()\n if system_name == \"Linux\":\n libc = get_libc()\n return \"unknown-linux-{libc}\".format(libc=libc)\n elif system_name == \"Darwin\":\n return \"apple-darwin\"\n else:\n return \"unknown\"\n\n\n_libc = None\n\n\ndef get_libc():\n \"\"\"\n Alpine linux uses a non glibc version of the standard library, it uses\n the stripped down musl instead. The core agent can be built against it,\n but which one is running must be detected. Shelling out to `ldd`\n appears to be the most reliable way to do this.\n \"\"\"\n global _libc\n if _libc is None:\n try:\n output = subprocess.check_output(\n [\"ldd\", \"--version\"], stderr=subprocess.STDOUT, close_fds=True\n )\n except (OSError, subprocess.CalledProcessError):\n _libc = \"gnu\"\n else:\n if b\"musl\" in output:\n _libc = \"musl\"\n else:\n _libc = \"gnu\"\n return _libc\n", "path": "src/scout_apm/core/platform_detection.py"}, {"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nimport os\n\nfrom scout_apm.compat import string_type\nfrom scout_apm.core import platform_detection\nfrom scout_apm.core.util import octal\n\nlogger = logging.getLogger(__name__)\n\n\nclass ScoutConfig(object):\n \"\"\"\n Configuration object for the ScoutApm agent.\n\n Contains a list of configuration \"layers\". When a configuration key is\n looked up, each layer is asked in turn if it knows the value. 
The first one\n to answer affirmatively returns the value.\n \"\"\"\n\n def __init__(self):\n self.layers = [\n ScoutConfigEnv(),\n ScoutConfigPython(),\n ScoutConfigDerived(self),\n ScoutConfigDefaults(),\n ScoutConfigNull(),\n ]\n\n def value(self, key):\n value = self.locate_layer_for_key(key).value(key)\n if key in CONVERSIONS:\n return CONVERSIONS[key](value)\n return value\n\n def locate_layer_for_key(self, key):\n for layer in self.layers:\n if layer.has_config(key):\n return layer\n\n # Should be unreachable because ScoutConfigNull returns None for all\n # keys.\n raise ValueError(\"key {!r} not found in any layer\".format(key))\n\n def log(self):\n logger.debug(\"Configuration Loaded:\")\n for key in self.known_keys():\n layer = self.locate_layer_for_key(key)\n logger.debug(\"%-9s: %s = %s\", layer.name(), key, layer.value(key))\n\n def known_keys(self):\n return [\n \"app_server\",\n \"application_root\",\n \"core_agent_dir\",\n \"core_agent_download\",\n \"core_agent_launch\",\n \"core_agent_permissions\",\n \"core_agent_version\",\n \"disabled_instruments\",\n \"download_url\",\n \"framework\",\n \"framework_version\",\n \"hostname\",\n \"ignore\",\n \"key\",\n \"log_level\",\n \"monitor\",\n \"name\",\n \"revision_sha\",\n \"scm_subdirectory\",\n \"socket_path\",\n ]\n\n def core_agent_permissions(self):\n try:\n return octal(self.value(\"core_agent_permissions\"))\n except ValueError:\n logger.exception(\n \"Invalid core_agent_permissions value, using default of 0o700\"\n )\n return 0o700\n\n @classmethod\n def set(cls, **kwargs):\n \"\"\"\n Sets a configuration value for the Scout agent. Values set here will\n not override values set in ENV.\n \"\"\"\n global SCOUT_PYTHON_VALUES\n for key, value in kwargs.items():\n SCOUT_PYTHON_VALUES[key] = value\n\n @classmethod\n def unset(cls, *keys):\n \"\"\"\n Removes a configuration value for the Scout agent.\n \"\"\"\n global SCOUT_PYTHON_VALUES\n for key in keys:\n SCOUT_PYTHON_VALUES.pop(key, None)\n\n @classmethod\n def reset_all(cls):\n \"\"\"\n Remove all configuration settings set via `ScoutConfig.set(...)`.\n\n This is meant for use in testing.\n \"\"\"\n global SCOUT_PYTHON_VALUES\n SCOUT_PYTHON_VALUES.clear()\n\n\n# Module-level data, the ScoutConfig.set(key=\"value\") adds to this\nSCOUT_PYTHON_VALUES = {}\n\n\nclass ScoutConfigPython(object):\n \"\"\"\n A configuration overlay that lets other parts of python set values.\n \"\"\"\n\n def name(self):\n return \"Python\"\n\n def has_config(self, key):\n return key in SCOUT_PYTHON_VALUES\n\n def value(self, key):\n return SCOUT_PYTHON_VALUES[key]\n\n\nclass ScoutConfigEnv(object):\n \"\"\"\n Reads configuration from environment by prefixing the key\n requested with \"SCOUT_\"\n\n Example: the `log_level` config looks for SCOUT_LOG_LEVEL\n environment variable\n \"\"\"\n\n def name(self):\n return \"ENV\"\n\n def has_config(self, key):\n env_key = self.modify_key(key)\n return env_key in os.environ\n\n def value(self, key):\n env_key = self.modify_key(key)\n return os.environ[env_key]\n\n def modify_key(self, key):\n env_key = (\"SCOUT_\" + key).upper()\n return env_key\n\n\nclass ScoutConfigDerived(object):\n \"\"\"\n A configuration overlay that calculates from other values.\n \"\"\"\n\n def __init__(self, config):\n \"\"\"\n config argument is the overall ScoutConfig var, so we can lookup the\n components of the derived info.\n \"\"\"\n self.config = config\n\n def name(self):\n return \"Derived\"\n\n def has_config(self, key):\n return self.lookup_func(key) is not None\n\n 
def value(self, key):\n return self.lookup_func(key)()\n\n def lookup_func(self, key):\n \"\"\"\n Returns the derive_#{key} function, or None if it isn't defined\n \"\"\"\n func_name = \"derive_\" + key\n return getattr(self, func_name, None)\n\n def derive_socket_path(self):\n return \"{}/{}/scout-agent.sock\".format(\n self.config.value(\"core_agent_dir\"),\n self.config.value(\"core_agent_full_name\"),\n )\n\n def derive_core_agent_full_name(self):\n return \"{name}-{version}-{triple}\".format(\n name=\"scout_apm_core\",\n version=self.config.value(\"core_agent_version\"),\n triple=self.config.value(\"core_agent_triple\"),\n )\n\n def derive_core_agent_triple(self):\n return platform_detection.get_triple()\n\n\nclass ScoutConfigDefaults(object):\n \"\"\"\n Provides default values for important configurations\n \"\"\"\n\n def name(self):\n return \"Defaults\"\n\n def __init__(self):\n self.defaults = {\n \"app_server\": \"\",\n \"application_root\": \"\",\n \"core_agent_dir\": \"/tmp/scout_apm_core\",\n \"core_agent_download\": True,\n \"core_agent_launch\": True,\n \"core_agent_permissions\": 700,\n \"core_agent_version\": \"v1.2.0\", # can be an exact tag name, or 'latest'\n \"disabled_instruments\": [],\n \"download_url\": \"https://s3-us-west-1.amazonaws.com/scout-public-downloads/apm_core_agent/release\", # noqa: E501\n \"framework\": \"\",\n \"framework_version\": \"\",\n \"hostname\": None,\n \"key\": \"\",\n \"log_level\": \"info\",\n \"monitor\": False,\n \"name\": \"\",\n \"revision_sha\": self._git_revision_sha(),\n \"scm_subdirectory\": \"\",\n }\n\n def _git_revision_sha(self):\n # N.B. The environment variable SCOUT_REVISION_SHA may also be used,\n # but that will be picked up by ScoutConfigEnv\n return os.environ.get(\"HEROKU_SLUG_COMMIT\", \"\")\n\n def has_config(self, key):\n return key in self.defaults\n\n def value(self, key):\n return self.defaults[key]\n\n\n# Always returns None to any key\nclass ScoutConfigNull(object):\n \"\"\"\n Always answers that a key is present, but the value is None\n\n Used as the last step of the layered configuration.\n \"\"\"\n\n def name(self):\n return \"Null\"\n\n def has_config(self, key):\n return True\n\n def value(self, key):\n return None\n\n\ndef convert_to_bool(value):\n if isinstance(value, bool):\n return value\n if isinstance(value, string_type):\n return value.lower() in (\"yes\", \"true\", \"t\", \"1\")\n # Unknown type - default to false?\n return False\n\n\ndef convert_to_list(value):\n if isinstance(value, list):\n return value\n if isinstance(value, tuple):\n return list(value)\n if isinstance(value, string_type):\n # Split on commas\n return [item.strip() for item in value.split(\",\") if item]\n # Unknown type - default to empty?\n return []\n\n\nCONVERSIONS = {\n \"core_agent_download\": convert_to_bool,\n \"core_agent_launch\": convert_to_bool,\n \"monitor\": convert_to_bool,\n \"disabled_instruments\": convert_to_list,\n \"ignore\": convert_to_list,\n}\n", "path": "src/scout_apm/core/config.py"}], "after_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport platform\nimport subprocess\n\n\ndef is_valid_triple(triple):\n values = triple.split(\"-\", 1)\n return (\n len(values) == 2\n and values[0] in (\"i686\", \"x86_64\", \"unknown\")\n and values[1]\n in (\"unknown-linux-gnu\", \"unknown-linux-musl\", \"apple-darwin\", \"unknown\")\n )\n\n\ndef get_triple():\n return \"{arch}-{platform}\".format(arch=get_arch(), 
platform=get_platform())\n\n\ndef get_arch():\n \"\"\"\n What CPU are we on?\n \"\"\"\n arch = platform.machine()\n if arch == \"i686\":\n return \"i686\"\n elif arch == \"x86_64\":\n return \"x86_64\"\n else:\n return \"unknown\"\n\n\ndef get_platform():\n \"\"\"\n What Operating System (and sub-system like glibc / musl)\n \"\"\"\n system_name = platform.system()\n if system_name == \"Linux\":\n libc = get_libc()\n return \"unknown-linux-{libc}\".format(libc=libc)\n elif system_name == \"Darwin\":\n return \"apple-darwin\"\n else:\n return \"unknown\"\n\n\n_libc = None\n\n\ndef get_libc():\n \"\"\"\n Alpine linux uses a non glibc version of the standard library, it uses\n the stripped down musl instead. The core agent can be built against it,\n but which one is running must be detected. Shelling out to `ldd`\n appears to be the most reliable way to do this.\n \"\"\"\n global _libc\n if _libc is None:\n try:\n output = subprocess.check_output(\n [\"ldd\", \"--version\"], stderr=subprocess.STDOUT, close_fds=True\n )\n except (OSError, subprocess.CalledProcessError):\n _libc = \"gnu\"\n else:\n if b\"musl\" in output:\n _libc = \"musl\"\n else:\n _libc = \"gnu\"\n return _libc\n", "path": "src/scout_apm/core/platform_detection.py"}, {"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nimport os\n\nfrom scout_apm.compat import string_type\nfrom scout_apm.core import platform_detection\nfrom scout_apm.core.util import octal\n\nlogger = logging.getLogger(__name__)\n\n\nclass ScoutConfig(object):\n \"\"\"\n Configuration object for the ScoutApm agent.\n\n Contains a list of configuration \"layers\". When a configuration key is\n looked up, each layer is asked in turn if it knows the value. The first one\n to answer affirmatively returns the value.\n \"\"\"\n\n def __init__(self):\n self.layers = [\n ScoutConfigEnv(),\n ScoutConfigPython(),\n ScoutConfigDerived(self),\n ScoutConfigDefaults(),\n ScoutConfigNull(),\n ]\n\n def value(self, key):\n value = self.locate_layer_for_key(key).value(key)\n if key in CONVERSIONS:\n return CONVERSIONS[key](value)\n return value\n\n def locate_layer_for_key(self, key):\n for layer in self.layers:\n if layer.has_config(key):\n return layer\n\n # Should be unreachable because ScoutConfigNull returns None for all\n # keys.\n raise ValueError(\"key {!r} not found in any layer\".format(key))\n\n def log(self):\n logger.debug(\"Configuration Loaded:\")\n for key in self.known_keys():\n layer = self.locate_layer_for_key(key)\n logger.debug(\"%-9s: %s = %s\", layer.name(), key, layer.value(key))\n\n def known_keys(self):\n return [\n \"app_server\",\n \"application_root\",\n \"core_agent_dir\",\n \"core_agent_download\",\n \"core_agent_launch\",\n \"core_agent_permissions\",\n \"core_agent_version\",\n \"disabled_instruments\",\n \"download_url\",\n \"framework\",\n \"framework_version\",\n \"hostname\",\n \"ignore\",\n \"key\",\n \"log_level\",\n \"monitor\",\n \"name\",\n \"revision_sha\",\n \"scm_subdirectory\",\n \"socket_path\",\n ]\n\n def core_agent_permissions(self):\n try:\n return octal(self.value(\"core_agent_permissions\"))\n except ValueError:\n logger.exception(\n \"Invalid core_agent_permissions value, using default of 0o700\"\n )\n return 0o700\n\n @classmethod\n def set(cls, **kwargs):\n \"\"\"\n Sets a configuration value for the Scout agent. 
Values set here will\n not override values set in ENV.\n \"\"\"\n global SCOUT_PYTHON_VALUES\n for key, value in kwargs.items():\n SCOUT_PYTHON_VALUES[key] = value\n\n @classmethod\n def unset(cls, *keys):\n \"\"\"\n Removes a configuration value for the Scout agent.\n \"\"\"\n global SCOUT_PYTHON_VALUES\n for key in keys:\n SCOUT_PYTHON_VALUES.pop(key, None)\n\n @classmethod\n def reset_all(cls):\n \"\"\"\n Remove all configuration settings set via `ScoutConfig.set(...)`.\n\n This is meant for use in testing.\n \"\"\"\n global SCOUT_PYTHON_VALUES\n SCOUT_PYTHON_VALUES.clear()\n\n\n# Module-level data, the ScoutConfig.set(key=\"value\") adds to this\nSCOUT_PYTHON_VALUES = {}\n\n\nclass ScoutConfigPython(object):\n \"\"\"\n A configuration overlay that lets other parts of python set values.\n \"\"\"\n\n def name(self):\n return \"Python\"\n\n def has_config(self, key):\n return key in SCOUT_PYTHON_VALUES\n\n def value(self, key):\n return SCOUT_PYTHON_VALUES[key]\n\n\nclass ScoutConfigEnv(object):\n \"\"\"\n Reads configuration from environment by prefixing the key\n requested with \"SCOUT_\"\n\n Example: the `log_level` config looks for SCOUT_LOG_LEVEL\n environment variable\n \"\"\"\n\n def name(self):\n return \"ENV\"\n\n def has_config(self, key):\n env_key = self.modify_key(key)\n return env_key in os.environ\n\n def value(self, key):\n env_key = self.modify_key(key)\n return os.environ[env_key]\n\n def modify_key(self, key):\n env_key = (\"SCOUT_\" + key).upper()\n return env_key\n\n\nclass ScoutConfigDerived(object):\n \"\"\"\n A configuration overlay that calculates from other values.\n \"\"\"\n\n def __init__(self, config):\n \"\"\"\n config argument is the overall ScoutConfig var, so we can lookup the\n components of the derived info.\n \"\"\"\n self.config = config\n\n def name(self):\n return \"Derived\"\n\n def has_config(self, key):\n return self.lookup_func(key) is not None\n\n def value(self, key):\n return self.lookup_func(key)()\n\n def lookup_func(self, key):\n \"\"\"\n Returns the derive_#{key} function, or None if it isn't defined\n \"\"\"\n func_name = \"derive_\" + key\n return getattr(self, func_name, None)\n\n def derive_socket_path(self):\n return \"{}/{}/scout-agent.sock\".format(\n self.config.value(\"core_agent_dir\"),\n self.config.value(\"core_agent_full_name\"),\n )\n\n def derive_core_agent_full_name(self):\n triple = self.config.value(\"core_agent_triple\")\n if not platform_detection.is_valid_triple(triple):\n logger.warning(\"Invalid value for core_agent_triple: %s\", triple)\n return \"{name}-{version}-{triple}\".format(\n name=\"scout_apm_core\",\n version=self.config.value(\"core_agent_version\"),\n triple=triple,\n )\n\n def derive_core_agent_triple(self):\n return platform_detection.get_triple()\n\n\nclass ScoutConfigDefaults(object):\n \"\"\"\n Provides default values for important configurations\n \"\"\"\n\n def name(self):\n return \"Defaults\"\n\n def __init__(self):\n self.defaults = {\n \"app_server\": \"\",\n \"application_root\": \"\",\n \"core_agent_dir\": \"/tmp/scout_apm_core\",\n \"core_agent_download\": True,\n \"core_agent_launch\": True,\n \"core_agent_permissions\": 700,\n \"core_agent_version\": \"v1.2.0\", # can be an exact tag name, or 'latest'\n \"disabled_instruments\": [],\n \"download_url\": \"https://s3-us-west-1.amazonaws.com/scout-public-downloads/apm_core_agent/release\", # noqa: E501\n \"framework\": \"\",\n \"framework_version\": \"\",\n \"hostname\": None,\n \"key\": \"\",\n \"log_level\": \"info\",\n \"monitor\": 
False,\n \"name\": \"\",\n \"revision_sha\": self._git_revision_sha(),\n \"scm_subdirectory\": \"\",\n }\n\n def _git_revision_sha(self):\n # N.B. The environment variable SCOUT_REVISION_SHA may also be used,\n # but that will be picked up by ScoutConfigEnv\n return os.environ.get(\"HEROKU_SLUG_COMMIT\", \"\")\n\n def has_config(self, key):\n return key in self.defaults\n\n def value(self, key):\n return self.defaults[key]\n\n\n# Always returns None to any key\nclass ScoutConfigNull(object):\n \"\"\"\n Always answers that a key is present, but the value is None\n\n Used as the last step of the layered configuration.\n \"\"\"\n\n def name(self):\n return \"Null\"\n\n def has_config(self, key):\n return True\n\n def value(self, key):\n return None\n\n\ndef convert_to_bool(value):\n if isinstance(value, bool):\n return value\n if isinstance(value, string_type):\n return value.lower() in (\"yes\", \"true\", \"t\", \"1\")\n # Unknown type - default to false?\n return False\n\n\ndef convert_to_list(value):\n if isinstance(value, list):\n return value\n if isinstance(value, tuple):\n return list(value)\n if isinstance(value, string_type):\n # Split on commas\n return [item.strip() for item in value.split(\",\") if item]\n # Unknown type - default to empty?\n return []\n\n\nCONVERSIONS = {\n \"core_agent_download\": convert_to_bool,\n \"core_agent_launch\": convert_to_bool,\n \"monitor\": convert_to_bool,\n \"disabled_instruments\": convert_to_list,\n \"ignore\": convert_to_list,\n}\n", "path": "src/scout_apm/core/config.py"}]} | 3,390 | 356 |
gh_patches_debug_51416 | rasdani/github-patches | git_diff | pallets__click-1496 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ClickException message goes to stdout instead of stderr with version 7.1
Thanks a lot for ``click`` - absolutely fantastic project.
I've noticed the following change when upgrading to 7.1, and am not sure if it's intentional or not:
### Expected Behavior
When raising ``click.ClickException``, the corresponding ``Error: <message>`` goes to ``stderr`` using click version 7.0.
Minimal reproducing code:
```python
import click
@click.command()
def run():
raise click.ClickException('exception message')
if __name__ == '__main__':
run()
```
```bash
python <filename>.py 1> stdout 2> stderr
```
### Actual Behavior
With version 7.1, the ``Error: exception message`` ends up in ``stdout`` instead of ``stderr``.
### Environment
* Python version: Python 3.7.5
* Click version: 7.1
* OS: Ubuntu 18.04 [through WSL1]
* Shell: GNU bash, version 4.4.20
### Additional comments
As mentioned above I'm not sure if this is an intended change, but I couldn't find any mention in the [Changelog](https://click.palletsprojects.com/en/7.x/changelog/#version-7-1), and [this part](https://click.palletsprojects.com/en/7.x/exceptions/#which-exceptions-exist) of the docs still refers to ``show`` being printed to ``stderr``.
Happy to do some more digging if this happens only on my system.
--- END ISSUE ---
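One way to observe which stream the message lands on is click's own test runner; the sketch below assumes ``mix_stderr=False`` as available in click 7.x, and the printed values are illustrative:

```python
import click
from click.testing import CliRunner


@click.command()
def run():
    raise click.ClickException("exception message")


runner = CliRunner(mix_stderr=False)
result = runner.invoke(run)

# With the streams kept separate, result.output holds stdout only.
# On click 7.0 the error text appears in result.stderr; on 7.1 (before the
# fix) it shows up in stdout instead.
print("stdout:", repr(result.output))
print("stderr:", repr(result.stderr))
```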
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/click/exceptions.py`
Content:
```
1 from ._compat import filename_to_ui
2 from ._compat import get_text_stderr
3 from ._compat import PY2
4 from .utils import echo
5
6
7 def _join_param_hints(param_hint):
8 if isinstance(param_hint, (tuple, list)):
9 return " / ".join(repr(x) for x in param_hint)
10 return param_hint
11
12
13 class ClickException(Exception):
14 """An exception that Click can handle and show to the user."""
15
16 #: The exit code for this exception
17 exit_code = 1
18
19 def __init__(self, message):
20 ctor_msg = message
21 if PY2:
22 if ctor_msg is not None:
23 ctor_msg = ctor_msg.encode("utf-8")
24 Exception.__init__(self, ctor_msg)
25 self.message = message
26
27 def format_message(self):
28 return self.message
29
30 def __str__(self):
31 return self.message
32
33 if PY2:
34 __unicode__ = __str__
35
36 def __str__(self):
37 return self.message.encode("utf-8")
38
39 def show(self, file=None):
40 if file is None:
41 file = get_text_stderr()
42 echo("Error: {}".format(self.format_message(), file=file))
43
44
45 class UsageError(ClickException):
46 """An internal exception that signals a usage error. This typically
47 aborts any further handling.
48
49 :param message: the error message to display.
50 :param ctx: optionally the context that caused this error. Click will
51 fill in the context automatically in some situations.
52 """
53
54 exit_code = 2
55
56 def __init__(self, message, ctx=None):
57 ClickException.__init__(self, message)
58 self.ctx = ctx
59 self.cmd = self.ctx.command if self.ctx else None
60
61 def show(self, file=None):
62 if file is None:
63 file = get_text_stderr()
64 color = None
65 hint = ""
66 if self.cmd is not None and self.cmd.get_help_option(self.ctx) is not None:
67 hint = "Try '{} {}' for help.\n".format(
68 self.ctx.command_path, self.ctx.help_option_names[0]
69 )
70 if self.ctx is not None:
71 color = self.ctx.color
72 echo("{}\n{}".format(self.ctx.get_usage(), hint), file=file, color=color)
73 echo("Error: {}".format(self.format_message()), file=file, color=color)
74
75
76 class BadParameter(UsageError):
77 """An exception that formats out a standardized error message for a
78 bad parameter. This is useful when thrown from a callback or type as
79 Click will attach contextual information to it (for instance, which
80 parameter it is).
81
82 .. versionadded:: 2.0
83
84 :param param: the parameter object that caused this error. This can
85 be left out, and Click will attach this info itself
86 if possible.
87 :param param_hint: a string that shows up as parameter name. This
88 can be used as alternative to `param` in cases
89 where custom validation should happen. If it is
90 a string it's used as such, if it's a list then
91 each item is quoted and separated.
92 """
93
94 def __init__(self, message, ctx=None, param=None, param_hint=None):
95 UsageError.__init__(self, message, ctx)
96 self.param = param
97 self.param_hint = param_hint
98
99 def format_message(self):
100 if self.param_hint is not None:
101 param_hint = self.param_hint
102 elif self.param is not None:
103 param_hint = self.param.get_error_hint(self.ctx)
104 else:
105 return "Invalid value: {}".format(self.message)
106 param_hint = _join_param_hints(param_hint)
107
108 return "Invalid value for {}: {}".format(param_hint, self.message)
109
110
111 class MissingParameter(BadParameter):
112 """Raised if click required an option or argument but it was not
113 provided when invoking the script.
114
115 .. versionadded:: 4.0
116
117 :param param_type: a string that indicates the type of the parameter.
118 The default is to inherit the parameter type from
119 the given `param`. Valid values are ``'parameter'``,
120 ``'option'`` or ``'argument'``.
121 """
122
123 def __init__(
124 self, message=None, ctx=None, param=None, param_hint=None, param_type=None
125 ):
126 BadParameter.__init__(self, message, ctx, param, param_hint)
127 self.param_type = param_type
128
129 def format_message(self):
130 if self.param_hint is not None:
131 param_hint = self.param_hint
132 elif self.param is not None:
133 param_hint = self.param.get_error_hint(self.ctx)
134 else:
135 param_hint = None
136 param_hint = _join_param_hints(param_hint)
137
138 param_type = self.param_type
139 if param_type is None and self.param is not None:
140 param_type = self.param.param_type_name
141
142 msg = self.message
143 if self.param is not None:
144 msg_extra = self.param.type.get_missing_message(self.param)
145 if msg_extra:
146 if msg:
147 msg += ". {}".format(msg_extra)
148 else:
149 msg = msg_extra
150
151 return "Missing {}{}{}{}".format(
152 param_type,
153 " {}".format(param_hint) if param_hint else "",
154 ". " if msg else ".",
155 msg or "",
156 )
157
158 def __str__(self):
159 if self.message is None:
160 param_name = self.param.name if self.param else None
161 return "missing parameter: {}".format(param_name)
162 else:
163 return self.message
164
165 if PY2:
166 __unicode__ = __str__
167
168 def __str__(self):
169 return self.__unicode__().encode("utf-8")
170
171
172 class NoSuchOption(UsageError):
173 """Raised if click attempted to handle an option that does not
174 exist.
175
176 .. versionadded:: 4.0
177 """
178
179 def __init__(self, option_name, message=None, possibilities=None, ctx=None):
180 if message is None:
181 message = "no such option: {}".format(option_name)
182 UsageError.__init__(self, message, ctx)
183 self.option_name = option_name
184 self.possibilities = possibilities
185
186 def format_message(self):
187 bits = [self.message]
188 if self.possibilities:
189 if len(self.possibilities) == 1:
190 bits.append("Did you mean {}?".format(self.possibilities[0]))
191 else:
192 possibilities = sorted(self.possibilities)
193 bits.append("(Possible options: {})".format(", ".join(possibilities)))
194 return " ".join(bits)
195
196
197 class BadOptionUsage(UsageError):
198 """Raised if an option is generally supplied but the use of the option
199 was incorrect. This is for instance raised if the number of arguments
200 for an option is not correct.
201
202 .. versionadded:: 4.0
203
204 :param option_name: the name of the option being used incorrectly.
205 """
206
207 def __init__(self, option_name, message, ctx=None):
208 UsageError.__init__(self, message, ctx)
209 self.option_name = option_name
210
211
212 class BadArgumentUsage(UsageError):
213 """Raised if an argument is generally supplied but the use of the argument
214 was incorrect. This is for instance raised if the number of values
215 for an argument is not correct.
216
217 .. versionadded:: 6.0
218 """
219
220 def __init__(self, message, ctx=None):
221 UsageError.__init__(self, message, ctx)
222
223
224 class FileError(ClickException):
225 """Raised if a file cannot be opened."""
226
227 def __init__(self, filename, hint=None):
228 ui_filename = filename_to_ui(filename)
229 if hint is None:
230 hint = "unknown error"
231 ClickException.__init__(self, hint)
232 self.ui_filename = ui_filename
233 self.filename = filename
234
235 def format_message(self):
236 return "Could not open file {}: {}".format(self.ui_filename, self.message)
237
238
239 class Abort(RuntimeError):
240 """An internal signalling exception that signals Click to abort."""
241
242
243 class Exit(RuntimeError):
244 """An exception that indicates that the application should exit with some
245 status code.
246
247 :param code: the status code to exit with.
248 """
249
250 __slots__ = ("exit_code",)
251
252 def __init__(self, code=0):
253 self.exit_code = code
254
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/click/exceptions.py b/src/click/exceptions.py
--- a/src/click/exceptions.py
+++ b/src/click/exceptions.py
@@ -39,7 +39,7 @@
def show(self, file=None):
if file is None:
file = get_text_stderr()
- echo("Error: {}".format(self.format_message(), file=file))
+ echo("Error: {}".format(self.format_message()), file=file)
class UsageError(ClickException):
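The misplaced parenthesis fixed above is easy to miss because ``str.format`` silently ignores keyword arguments it does not use, so passing ``file=file`` to ``format`` instead of ``echo`` never raised an error; a small sketch of that behavior:

```python
# str.format discards unused keyword arguments, so the buggy call built the
# message normally and echo() simply fell back to its default stream (stdout).
msg = "Error: {}".format("exception message", file=object())
print(msg)  # prints "Error: exception message"; the file= argument is ignored
```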
| {"golden_diff": "diff --git a/src/click/exceptions.py b/src/click/exceptions.py\n--- a/src/click/exceptions.py\n+++ b/src/click/exceptions.py\n@@ -39,7 +39,7 @@\n def show(self, file=None):\n if file is None:\n file = get_text_stderr()\n- echo(\"Error: {}\".format(self.format_message(), file=file))\n+ echo(\"Error: {}\".format(self.format_message()), file=file)\n \n \n class UsageError(ClickException):\n", "issue": "ClickException message goes to stdout instead of stderr with version 7.1\nThanks a lot for ``click`` - absolutely fantastic project. \r\n\r\nI've noticed the following change when upgrading to 7.1, and am not sure if it's intentional or not:\r\n\r\n### Expected Behavior\r\n\r\nWhen raising ``click.ClickException``, the corresponding ``Error: <message>`` goes to ``stderr`` using click version 7.0.\r\n\r\nMinimal reproducing code:\r\n\r\n```python\r\nimport click\r\n\r\[email protected]()\r\ndef run():\r\n raise click.ClickException('exception message')\r\n\r\nif __name__ == '__main__':\r\n run()\r\n```\r\n```bash\r\npython <filename>.py 1> stdout 2> stderr\r\n```\r\n\r\n### Actual Behavior\r\n\r\nWith version 7.1, the ``Error: exception message`` ends up in ``stdout`` instead of ``stderr``.\r\n\r\n### Environment\r\n\r\n* Python version: Python 3.7.5\r\n* Click version: 7.1\r\n* OS: Ubuntu 18.04 [through WSL1]\r\n* Shell: GNU bash, version 4.4.20\r\n\r\n### Additional comments\r\n\r\nAs mentioned above I'm not sure if this is an intended change, but I couldn't find any mention on the [Changelog](https://click.palletsprojects.com/en/7.x/changelog/#version-7-1), and [this part](https://click.palletsprojects.com/en/7.x/exceptions/#which-exceptions-exist) of the docs still referes to ``show`` being printed to ``stderr``.\r\n\r\nHappy to do some more digging if this happens only on my system.\r\n\n", "before_files": [{"content": "from ._compat import filename_to_ui\nfrom ._compat import get_text_stderr\nfrom ._compat import PY2\nfrom .utils import echo\n\n\ndef _join_param_hints(param_hint):\n if isinstance(param_hint, (tuple, list)):\n return \" / \".join(repr(x) for x in param_hint)\n return param_hint\n\n\nclass ClickException(Exception):\n \"\"\"An exception that Click can handle and show to the user.\"\"\"\n\n #: The exit code for this exception\n exit_code = 1\n\n def __init__(self, message):\n ctor_msg = message\n if PY2:\n if ctor_msg is not None:\n ctor_msg = ctor_msg.encode(\"utf-8\")\n Exception.__init__(self, ctor_msg)\n self.message = message\n\n def format_message(self):\n return self.message\n\n def __str__(self):\n return self.message\n\n if PY2:\n __unicode__ = __str__\n\n def __str__(self):\n return self.message.encode(\"utf-8\")\n\n def show(self, file=None):\n if file is None:\n file = get_text_stderr()\n echo(\"Error: {}\".format(self.format_message(), file=file))\n\n\nclass UsageError(ClickException):\n \"\"\"An internal exception that signals a usage error. This typically\n aborts any further handling.\n\n :param message: the error message to display.\n :param ctx: optionally the context that caused this error. 
Click will\n fill in the context automatically in some situations.\n \"\"\"\n\n exit_code = 2\n\n def __init__(self, message, ctx=None):\n ClickException.__init__(self, message)\n self.ctx = ctx\n self.cmd = self.ctx.command if self.ctx else None\n\n def show(self, file=None):\n if file is None:\n file = get_text_stderr()\n color = None\n hint = \"\"\n if self.cmd is not None and self.cmd.get_help_option(self.ctx) is not None:\n hint = \"Try '{} {}' for help.\\n\".format(\n self.ctx.command_path, self.ctx.help_option_names[0]\n )\n if self.ctx is not None:\n color = self.ctx.color\n echo(\"{}\\n{}\".format(self.ctx.get_usage(), hint), file=file, color=color)\n echo(\"Error: {}\".format(self.format_message()), file=file, color=color)\n\n\nclass BadParameter(UsageError):\n \"\"\"An exception that formats out a standardized error message for a\n bad parameter. This is useful when thrown from a callback or type as\n Click will attach contextual information to it (for instance, which\n parameter it is).\n\n .. versionadded:: 2.0\n\n :param param: the parameter object that caused this error. This can\n be left out, and Click will attach this info itself\n if possible.\n :param param_hint: a string that shows up as parameter name. This\n can be used as alternative to `param` in cases\n where custom validation should happen. If it is\n a string it's used as such, if it's a list then\n each item is quoted and separated.\n \"\"\"\n\n def __init__(self, message, ctx=None, param=None, param_hint=None):\n UsageError.__init__(self, message, ctx)\n self.param = param\n self.param_hint = param_hint\n\n def format_message(self):\n if self.param_hint is not None:\n param_hint = self.param_hint\n elif self.param is not None:\n param_hint = self.param.get_error_hint(self.ctx)\n else:\n return \"Invalid value: {}\".format(self.message)\n param_hint = _join_param_hints(param_hint)\n\n return \"Invalid value for {}: {}\".format(param_hint, self.message)\n\n\nclass MissingParameter(BadParameter):\n \"\"\"Raised if click required an option or argument but it was not\n provided when invoking the script.\n\n .. versionadded:: 4.0\n\n :param param_type: a string that indicates the type of the parameter.\n The default is to inherit the parameter type from\n the given `param`. Valid values are ``'parameter'``,\n ``'option'`` or ``'argument'``.\n \"\"\"\n\n def __init__(\n self, message=None, ctx=None, param=None, param_hint=None, param_type=None\n ):\n BadParameter.__init__(self, message, ctx, param, param_hint)\n self.param_type = param_type\n\n def format_message(self):\n if self.param_hint is not None:\n param_hint = self.param_hint\n elif self.param is not None:\n param_hint = self.param.get_error_hint(self.ctx)\n else:\n param_hint = None\n param_hint = _join_param_hints(param_hint)\n\n param_type = self.param_type\n if param_type is None and self.param is not None:\n param_type = self.param.param_type_name\n\n msg = self.message\n if self.param is not None:\n msg_extra = self.param.type.get_missing_message(self.param)\n if msg_extra:\n if msg:\n msg += \". {}\".format(msg_extra)\n else:\n msg = msg_extra\n\n return \"Missing {}{}{}{}\".format(\n param_type,\n \" {}\".format(param_hint) if param_hint else \"\",\n \". 
\" if msg else \".\",\n msg or \"\",\n )\n\n def __str__(self):\n if self.message is None:\n param_name = self.param.name if self.param else None\n return \"missing parameter: {}\".format(param_name)\n else:\n return self.message\n\n if PY2:\n __unicode__ = __str__\n\n def __str__(self):\n return self.__unicode__().encode(\"utf-8\")\n\n\nclass NoSuchOption(UsageError):\n \"\"\"Raised if click attempted to handle an option that does not\n exist.\n\n .. versionadded:: 4.0\n \"\"\"\n\n def __init__(self, option_name, message=None, possibilities=None, ctx=None):\n if message is None:\n message = \"no such option: {}\".format(option_name)\n UsageError.__init__(self, message, ctx)\n self.option_name = option_name\n self.possibilities = possibilities\n\n def format_message(self):\n bits = [self.message]\n if self.possibilities:\n if len(self.possibilities) == 1:\n bits.append(\"Did you mean {}?\".format(self.possibilities[0]))\n else:\n possibilities = sorted(self.possibilities)\n bits.append(\"(Possible options: {})\".format(\", \".join(possibilities)))\n return \" \".join(bits)\n\n\nclass BadOptionUsage(UsageError):\n \"\"\"Raised if an option is generally supplied but the use of the option\n was incorrect. This is for instance raised if the number of arguments\n for an option is not correct.\n\n .. versionadded:: 4.0\n\n :param option_name: the name of the option being used incorrectly.\n \"\"\"\n\n def __init__(self, option_name, message, ctx=None):\n UsageError.__init__(self, message, ctx)\n self.option_name = option_name\n\n\nclass BadArgumentUsage(UsageError):\n \"\"\"Raised if an argument is generally supplied but the use of the argument\n was incorrect. This is for instance raised if the number of values\n for an argument is not correct.\n\n .. versionadded:: 6.0\n \"\"\"\n\n def __init__(self, message, ctx=None):\n UsageError.__init__(self, message, ctx)\n\n\nclass FileError(ClickException):\n \"\"\"Raised if a file cannot be opened.\"\"\"\n\n def __init__(self, filename, hint=None):\n ui_filename = filename_to_ui(filename)\n if hint is None:\n hint = \"unknown error\"\n ClickException.__init__(self, hint)\n self.ui_filename = ui_filename\n self.filename = filename\n\n def format_message(self):\n return \"Could not open file {}: {}\".format(self.ui_filename, self.message)\n\n\nclass Abort(RuntimeError):\n \"\"\"An internal signalling exception that signals Click to abort.\"\"\"\n\n\nclass Exit(RuntimeError):\n \"\"\"An exception that indicates that the application should exit with some\n status code.\n\n :param code: the status code to exit with.\n \"\"\"\n\n __slots__ = (\"exit_code\",)\n\n def __init__(self, code=0):\n self.exit_code = code\n", "path": "src/click/exceptions.py"}], "after_files": [{"content": "from ._compat import filename_to_ui\nfrom ._compat import get_text_stderr\nfrom ._compat import PY2\nfrom .utils import echo\n\n\ndef _join_param_hints(param_hint):\n if isinstance(param_hint, (tuple, list)):\n return \" / \".join(repr(x) for x in param_hint)\n return param_hint\n\n\nclass ClickException(Exception):\n \"\"\"An exception that Click can handle and show to the user.\"\"\"\n\n #: The exit code for this exception\n exit_code = 1\n\n def __init__(self, message):\n ctor_msg = message\n if PY2:\n if ctor_msg is not None:\n ctor_msg = ctor_msg.encode(\"utf-8\")\n Exception.__init__(self, ctor_msg)\n self.message = message\n\n def format_message(self):\n return self.message\n\n def __str__(self):\n return self.message\n\n if PY2:\n __unicode__ = __str__\n\n def 
__str__(self):\n return self.message.encode(\"utf-8\")\n\n def show(self, file=None):\n if file is None:\n file = get_text_stderr()\n echo(\"Error: {}\".format(self.format_message()), file=file)\n\n\nclass UsageError(ClickException):\n \"\"\"An internal exception that signals a usage error. This typically\n aborts any further handling.\n\n :param message: the error message to display.\n :param ctx: optionally the context that caused this error. Click will\n fill in the context automatically in some situations.\n \"\"\"\n\n exit_code = 2\n\n def __init__(self, message, ctx=None):\n ClickException.__init__(self, message)\n self.ctx = ctx\n self.cmd = self.ctx.command if self.ctx else None\n\n def show(self, file=None):\n if file is None:\n file = get_text_stderr()\n color = None\n hint = \"\"\n if self.cmd is not None and self.cmd.get_help_option(self.ctx) is not None:\n hint = \"Try '{} {}' for help.\\n\".format(\n self.ctx.command_path, self.ctx.help_option_names[0]\n )\n if self.ctx is not None:\n color = self.ctx.color\n echo(\"{}\\n{}\".format(self.ctx.get_usage(), hint), file=file, color=color)\n echo(\"Error: {}\".format(self.format_message()), file=file, color=color)\n\n\nclass BadParameter(UsageError):\n \"\"\"An exception that formats out a standardized error message for a\n bad parameter. This is useful when thrown from a callback or type as\n Click will attach contextual information to it (for instance, which\n parameter it is).\n\n .. versionadded:: 2.0\n\n :param param: the parameter object that caused this error. This can\n be left out, and Click will attach this info itself\n if possible.\n :param param_hint: a string that shows up as parameter name. This\n can be used as alternative to `param` in cases\n where custom validation should happen. If it is\n a string it's used as such, if it's a list then\n each item is quoted and separated.\n \"\"\"\n\n def __init__(self, message, ctx=None, param=None, param_hint=None):\n UsageError.__init__(self, message, ctx)\n self.param = param\n self.param_hint = param_hint\n\n def format_message(self):\n if self.param_hint is not None:\n param_hint = self.param_hint\n elif self.param is not None:\n param_hint = self.param.get_error_hint(self.ctx)\n else:\n return \"Invalid value: {}\".format(self.message)\n param_hint = _join_param_hints(param_hint)\n\n return \"Invalid value for {}: {}\".format(param_hint, self.message)\n\n\nclass MissingParameter(BadParameter):\n \"\"\"Raised if click required an option or argument but it was not\n provided when invoking the script.\n\n .. versionadded:: 4.0\n\n :param param_type: a string that indicates the type of the parameter.\n The default is to inherit the parameter type from\n the given `param`. Valid values are ``'parameter'``,\n ``'option'`` or ``'argument'``.\n \"\"\"\n\n def __init__(\n self, message=None, ctx=None, param=None, param_hint=None, param_type=None\n ):\n BadParameter.__init__(self, message, ctx, param, param_hint)\n self.param_type = param_type\n\n def format_message(self):\n if self.param_hint is not None:\n param_hint = self.param_hint\n elif self.param is not None:\n param_hint = self.param.get_error_hint(self.ctx)\n else:\n param_hint = None\n param_hint = _join_param_hints(param_hint)\n\n param_type = self.param_type\n if param_type is None and self.param is not None:\n param_type = self.param.param_type_name\n\n msg = self.message\n if self.param is not None:\n msg_extra = self.param.type.get_missing_message(self.param)\n if msg_extra:\n if msg:\n msg += \". 
{}\".format(msg_extra)\n else:\n msg = msg_extra\n\n return \"Missing {}{}{}{}\".format(\n param_type,\n \" {}\".format(param_hint) if param_hint else \"\",\n \". \" if msg else \".\",\n msg or \"\",\n )\n\n def __str__(self):\n if self.message is None:\n param_name = self.param.name if self.param else None\n return \"missing parameter: {}\".format(param_name)\n else:\n return self.message\n\n if PY2:\n __unicode__ = __str__\n\n def __str__(self):\n return self.__unicode__().encode(\"utf-8\")\n\n\nclass NoSuchOption(UsageError):\n \"\"\"Raised if click attempted to handle an option that does not\n exist.\n\n .. versionadded:: 4.0\n \"\"\"\n\n def __init__(self, option_name, message=None, possibilities=None, ctx=None):\n if message is None:\n message = \"no such option: {}\".format(option_name)\n UsageError.__init__(self, message, ctx)\n self.option_name = option_name\n self.possibilities = possibilities\n\n def format_message(self):\n bits = [self.message]\n if self.possibilities:\n if len(self.possibilities) == 1:\n bits.append(\"Did you mean {}?\".format(self.possibilities[0]))\n else:\n possibilities = sorted(self.possibilities)\n bits.append(\"(Possible options: {})\".format(\", \".join(possibilities)))\n return \" \".join(bits)\n\n\nclass BadOptionUsage(UsageError):\n \"\"\"Raised if an option is generally supplied but the use of the option\n was incorrect. This is for instance raised if the number of arguments\n for an option is not correct.\n\n .. versionadded:: 4.0\n\n :param option_name: the name of the option being used incorrectly.\n \"\"\"\n\n def __init__(self, option_name, message, ctx=None):\n UsageError.__init__(self, message, ctx)\n self.option_name = option_name\n\n\nclass BadArgumentUsage(UsageError):\n \"\"\"Raised if an argument is generally supplied but the use of the argument\n was incorrect. This is for instance raised if the number of values\n for an argument is not correct.\n\n .. versionadded:: 6.0\n \"\"\"\n\n def __init__(self, message, ctx=None):\n UsageError.__init__(self, message, ctx)\n\n\nclass FileError(ClickException):\n \"\"\"Raised if a file cannot be opened.\"\"\"\n\n def __init__(self, filename, hint=None):\n ui_filename = filename_to_ui(filename)\n if hint is None:\n hint = \"unknown error\"\n ClickException.__init__(self, hint)\n self.ui_filename = ui_filename\n self.filename = filename\n\n def format_message(self):\n return \"Could not open file {}: {}\".format(self.ui_filename, self.message)\n\n\nclass Abort(RuntimeError):\n \"\"\"An internal signalling exception that signals Click to abort.\"\"\"\n\n\nclass Exit(RuntimeError):\n \"\"\"An exception that indicates that the application should exit with some\n status code.\n\n :param code: the status code to exit with.\n \"\"\"\n\n __slots__ = (\"exit_code\",)\n\n def __init__(self, code=0):\n self.exit_code = code\n", "path": "src/click/exceptions.py"}]} | 3,096 | 107 |
gh_patches_debug_35737 | rasdani/github-patches | git_diff | CTFd__CTFd-899 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot unset country field
**Environment**:
- CTFd Version/Commit: tag 2.0.4
- Operating System: Ubuntu 16.04
- Web Browser and Version: Chrome latest
**What happened?**
I changed my country to "Blank" (`<option></option>`) in settings and hit update; it reported success, but a refresh showed the old country.
**What did you expect to happen?**
My country to be blank upon reload.
**How to reproduce your issue**
Set your country to anything (except blank). Try to change back to blank.
**Any associated stack traces or error logs**
N/A
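(Editorial note, not part of the original report.) In the schema shown below, the `country` field is validated by `validate_country_code`, which rejects any value that `lookup_country_code` cannot resolve. A minimal sketch of that check, under the assumption that an empty selection resolves to `None`:

```python
from marshmallow import ValidationError
from CTFd.utils.countries import lookup_country_code

def validate_country_code(country_code):
    # Assumption: lookup_country_code('') returns None, i.e. a blank value has no match.
    if lookup_country_code(country_code) is None:
        raise ValidationError('Invalid Country')

validate_country_code('')  # under that assumption, a blank country can never pass validation
```

Whether this is the exact path the reporter hit depends on the view layer, which is not shown in this record.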
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/schemas/users.py`
Content:
```
1 from flask import session
2 from sqlalchemy.sql.expression import union_all
3 from marshmallow import fields, post_load
4 from marshmallow import validate, ValidationError, pre_load
5 from marshmallow.decorators import validates_schema
6 from marshmallow_sqlalchemy import field_for
7 from CTFd.models import ma, Users
8 from CTFd.utils import get_config
9 from CTFd.utils.validators import unique_email, validate_country_code
10 from CTFd.utils.user import is_admin, get_current_user
11 from CTFd.utils.countries import lookup_country_code
12 from CTFd.utils.crypto import verify_password, hash_password
13 from CTFd.utils.email import check_email_is_whitelisted
14
15
16 class UserSchema(ma.ModelSchema):
17 class Meta:
18 model = Users
19 include_fk = True
20 dump_only = ('id', 'oauth_id', 'created')
21 load_only = ('password',)
22
23 name = field_for(
24 Users,
25 'name',
26 required=True,
27 validate=[
28 validate.Length(min=1, max=128, error='User names must not be empty')
29 ]
30 )
31 email = field_for(
32 Users,
33 'email',
34 validate=[
35 validate.Email('Emails must be a properly formatted email address'),
36 validate.Length(min=1, max=128, error='Emails must not be empty'),
37 ]
38 )
39 website = field_for(
40 Users,
41 'website',
42 validate=validate.URL(
43 error='Websites must be a proper URL starting with http or https',
44 schemes={'http', 'https'}
45 )
46 )
47 country = field_for(
48 Users,
49 'country',
50 validate=[
51 validate_country_code
52 ]
53 )
54 password = field_for(
55 Users,
56 'password',
57 validate=[
58 validate.Length(min=1, error='Passwords must not be empty'),
59 ]
60 )
61
62 @pre_load
63 def validate_name(self, data):
64 name = data.get('name')
65 if name is None:
66 return
67
68 existing_user = Users.query.filter_by(name=name).first()
69 if is_admin():
70 user_id = data.get('id')
71 if user_id:
72 if existing_user and existing_user.id != user_id:
73 raise ValidationError('User name has already been taken', field_names=['name'])
74 else:
75 if existing_user:
76 raise ValidationError('User name has already been taken', field_names=['name'])
77 else:
78 current_user = get_current_user()
79 if name == current_user.name:
80 return data
81 else:
82 name_changes = get_config('name_changes', default=True)
83 if bool(name_changes) is False:
84 raise ValidationError('Name changes are disabled', field_names=['name'])
85 if existing_user:
86 raise ValidationError('User name has already been taken', field_names=['name'])
87
88 @pre_load
89 def validate_email(self, data):
90 email = data.get('email')
91 if email is None:
92 return
93
94 existing_user = Users.query.filter_by(email=email).first()
95
96 if is_admin():
97 user_id = data.get('id')
98 if user_id:
99 if existing_user and existing_user.id != user_id:
100 raise ValidationError('Email address has already been used', field_names=['email'])
101 else:
102 if existing_user:
103 raise ValidationError('Email address has already been used', field_names=['email'])
104 else:
105 current_user = get_current_user()
106 if email == current_user.email:
107 return data
108 else:
109 if existing_user:
110 raise ValidationError('Email address has already been used', field_names=['email'])
111 if check_email_is_whitelisted(email) is False:
112 raise ValidationError(
113 "Only email addresses under {domains} may register".format(
114 domains=get_config('domain_whitelist')
115 ),
116 field_names=['email']
117 )
118 if get_config('verify_emails'):
119 current_user.verified = False
120
121 @pre_load
122 def validate_password_confirmation(self, data):
123 password = data.get('password')
124 confirm = data.get('confirm')
125 target_user = get_current_user()
126 user_id = data.get('id')
127
128 if is_admin():
129 pass
130 else:
131 if password and (confirm is None):
132 raise ValidationError('Please confirm your current password', field_names=['confirm'])
133
134 if password and confirm:
135 test = verify_password(plaintext=confirm, ciphertext=target_user.password)
136 if test is True:
137 return data
138 else:
139 raise ValidationError('Your previous password is incorrect', field_names=['confirm'])
140
141 views = {
142 'user': [
143 'website',
144 'name',
145 'country',
146 'affiliation',
147 'bracket',
148 'id',
149 'oauth_id',
150 ],
151 'self': [
152 'website',
153 'name',
154 'email',
155 'country',
156 'affiliation',
157 'bracket',
158 'id',
159 'oauth_id',
160 'password'
161 ],
162 'admin': [
163 'website',
164 'name',
165 'created',
166 'country',
167 'banned',
168 'email',
169 'affiliation',
170 'secret',
171 'bracket',
172 'hidden',
173 'id',
174 'oauth_id',
175 'password',
176 'type',
177 'verified'
178 ]
179 }
180
181 def __init__(self, view=None, *args, **kwargs):
182 if view:
183 if type(view) == str:
184 kwargs['only'] = self.views[view]
185 elif type(view) == list:
186 kwargs['only'] = view
187
188 super(UserSchema, self).__init__(*args, **kwargs)
189
```
Path: `CTFd/utils/validators/__init__.py`
Content:
```
1 from flask import session
2 from CTFd.utils.user import is_admin, get_current_team, get_current_user
3 from CTFd.models import Teams, Users
4 from CTFd.utils.countries import lookup_country_code
5 from six.moves.urllib.parse import urlparse, urljoin, quote, unquote
6 from flask import request
7 from marshmallow import ValidationError
8 import re
9
10
11 def is_safe_url(target):
12 ref_url = urlparse(request.host_url)
13 test_url = urlparse(urljoin(request.host_url, target))
14 return test_url.scheme in ('http', 'https') and ref_url.netloc == test_url.netloc
15
16
17 def validate_url(url):
18 return urlparse(url).scheme.startswith('http')
19
20
21 def validate_email(email):
22 return bool(re.match(r"(^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$)", email))
23
24
25 def unique_email(email, model=Users):
26 obj = model.query.filter_by(email=email).first()
27 if is_admin():
28 if obj:
29 raise ValidationError('Email address has already been used')
30 if obj and obj.id != get_current_user().id:
31 raise ValidationError('Email address has already been used')
32
33
34 def validate_country_code(country_code):
35 if lookup_country_code(country_code) is None:
36 raise ValidationError('Invalid Country')
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/CTFd/schemas/users.py b/CTFd/schemas/users.py
--- a/CTFd/schemas/users.py
+++ b/CTFd/schemas/users.py
@@ -39,10 +39,13 @@
website = field_for(
Users,
'website',
- validate=validate.URL(
- error='Websites must be a proper URL starting with http or https',
- schemes={'http', 'https'}
- )
+ validate=[
+ # This is a dirty hack to let website accept empty strings so you can remove your website
+ lambda website: validate.URL(
+ error='Websites must be a proper URL starting with http or https',
+ schemes={'http', 'https'}
+ )(website) if website else True
+ ]
)
country = field_for(
Users,
@@ -54,9 +57,6 @@
password = field_for(
Users,
'password',
- validate=[
- validate.Length(min=1, error='Passwords must not be empty'),
- ]
)
@pre_load
@@ -123,12 +123,11 @@
password = data.get('password')
confirm = data.get('confirm')
target_user = get_current_user()
- user_id = data.get('id')
if is_admin():
pass
else:
- if password and (confirm is None):
+ if password and (bool(confirm) is False):
raise ValidationError('Please confirm your current password', field_names=['confirm'])
if password and confirm:
@@ -137,6 +136,9 @@
return data
else:
raise ValidationError('Your previous password is incorrect', field_names=['confirm'])
+ else:
+ data.pop('password', None)
+ data.pop('confirm', None)
views = {
'user': [
diff --git a/CTFd/utils/validators/__init__.py b/CTFd/utils/validators/__init__.py
--- a/CTFd/utils/validators/__init__.py
+++ b/CTFd/utils/validators/__init__.py
@@ -32,5 +32,7 @@
def validate_country_code(country_code):
+ if country_code.strip() == "":
+ return
if lookup_country_code(country_code) is None:
raise ValidationError('Invalid Country')
| {"golden_diff": "diff --git a/CTFd/schemas/users.py b/CTFd/schemas/users.py\n--- a/CTFd/schemas/users.py\n+++ b/CTFd/schemas/users.py\n@@ -39,10 +39,13 @@\n website = field_for(\n Users,\n 'website',\n- validate=validate.URL(\n- error='Websites must be a proper URL starting with http or https',\n- schemes={'http', 'https'}\n- )\n+ validate=[\n+ # This is a dirty hack to let website accept empty strings so you can remove your website\n+ lambda website: validate.URL(\n+ error='Websites must be a proper URL starting with http or https',\n+ schemes={'http', 'https'}\n+ )(website) if website else True\n+ ]\n )\n country = field_for(\n Users,\n@@ -54,9 +57,6 @@\n password = field_for(\n Users,\n 'password',\n- validate=[\n- validate.Length(min=1, error='Passwords must not be empty'),\n- ]\n )\n \n @pre_load\n@@ -123,12 +123,11 @@\n password = data.get('password')\n confirm = data.get('confirm')\n target_user = get_current_user()\n- user_id = data.get('id')\n \n if is_admin():\n pass\n else:\n- if password and (confirm is None):\n+ if password and (bool(confirm) is False):\n raise ValidationError('Please confirm your current password', field_names=['confirm'])\n \n if password and confirm:\n@@ -137,6 +136,9 @@\n return data\n else:\n raise ValidationError('Your previous password is incorrect', field_names=['confirm'])\n+ else:\n+ data.pop('password', None)\n+ data.pop('confirm', None)\n \n views = {\n 'user': [\ndiff --git a/CTFd/utils/validators/__init__.py b/CTFd/utils/validators/__init__.py\n--- a/CTFd/utils/validators/__init__.py\n+++ b/CTFd/utils/validators/__init__.py\n@@ -32,5 +32,7 @@\n \n \n def validate_country_code(country_code):\n+ if country_code.strip() == \"\":\n+ return\n if lookup_country_code(country_code) is None:\n raise ValidationError('Invalid Country')\n", "issue": "Cannot unset country field\n**Environment**:\r\n\r\n - CTFd Version/Commit: tag 2.0.4\r\n - Operating System: Ubuntu 16.04\r\n - Web Browser and Version: Chrome latest\r\n\r\n**What happened?**\r\nI changed my country to \"Blank\" (`<option></option>`) in settings, hit update, it said success, but refresh showed old country.\r\n\r\n**What did you expect to happen?**\r\nMy country to be blank upon reload.\r\n\r\n**How to reproduce your issue**\r\nSet your country to anything (except blank). 
Try to change back to blank.\r\n\r\n**Any associated stack traces or error logs**\r\nN/A\r\n\n", "before_files": [{"content": "from flask import session\nfrom sqlalchemy.sql.expression import union_all\nfrom marshmallow import fields, post_load\nfrom marshmallow import validate, ValidationError, pre_load\nfrom marshmallow.decorators import validates_schema\nfrom marshmallow_sqlalchemy import field_for\nfrom CTFd.models import ma, Users\nfrom CTFd.utils import get_config\nfrom CTFd.utils.validators import unique_email, validate_country_code\nfrom CTFd.utils.user import is_admin, get_current_user\nfrom CTFd.utils.countries import lookup_country_code\nfrom CTFd.utils.crypto import verify_password, hash_password\nfrom CTFd.utils.email import check_email_is_whitelisted\n\n\nclass UserSchema(ma.ModelSchema):\n class Meta:\n model = Users\n include_fk = True\n dump_only = ('id', 'oauth_id', 'created')\n load_only = ('password',)\n\n name = field_for(\n Users,\n 'name',\n required=True,\n validate=[\n validate.Length(min=1, max=128, error='User names must not be empty')\n ]\n )\n email = field_for(\n Users,\n 'email',\n validate=[\n validate.Email('Emails must be a properly formatted email address'),\n validate.Length(min=1, max=128, error='Emails must not be empty'),\n ]\n )\n website = field_for(\n Users,\n 'website',\n validate=validate.URL(\n error='Websites must be a proper URL starting with http or https',\n schemes={'http', 'https'}\n )\n )\n country = field_for(\n Users,\n 'country',\n validate=[\n validate_country_code\n ]\n )\n password = field_for(\n Users,\n 'password',\n validate=[\n validate.Length(min=1, error='Passwords must not be empty'),\n ]\n )\n\n @pre_load\n def validate_name(self, data):\n name = data.get('name')\n if name is None:\n return\n\n existing_user = Users.query.filter_by(name=name).first()\n if is_admin():\n user_id = data.get('id')\n if user_id:\n if existing_user and existing_user.id != user_id:\n raise ValidationError('User name has already been taken', field_names=['name'])\n else:\n if existing_user:\n raise ValidationError('User name has already been taken', field_names=['name'])\n else:\n current_user = get_current_user()\n if name == current_user.name:\n return data\n else:\n name_changes = get_config('name_changes', default=True)\n if bool(name_changes) is False:\n raise ValidationError('Name changes are disabled', field_names=['name'])\n if existing_user:\n raise ValidationError('User name has already been taken', field_names=['name'])\n\n @pre_load\n def validate_email(self, data):\n email = data.get('email')\n if email is None:\n return\n\n existing_user = Users.query.filter_by(email=email).first()\n\n if is_admin():\n user_id = data.get('id')\n if user_id:\n if existing_user and existing_user.id != user_id:\n raise ValidationError('Email address has already been used', field_names=['email'])\n else:\n if existing_user:\n raise ValidationError('Email address has already been used', field_names=['email'])\n else:\n current_user = get_current_user()\n if email == current_user.email:\n return data\n else:\n if existing_user:\n raise ValidationError('Email address has already been used', field_names=['email'])\n if check_email_is_whitelisted(email) is False:\n raise ValidationError(\n \"Only email addresses under {domains} may register\".format(\n domains=get_config('domain_whitelist')\n ),\n field_names=['email']\n )\n if get_config('verify_emails'):\n current_user.verified = False\n\n @pre_load\n def validate_password_confirmation(self, data):\n password 
= data.get('password')\n confirm = data.get('confirm')\n target_user = get_current_user()\n user_id = data.get('id')\n\n if is_admin():\n pass\n else:\n if password and (confirm is None):\n raise ValidationError('Please confirm your current password', field_names=['confirm'])\n\n if password and confirm:\n test = verify_password(plaintext=confirm, ciphertext=target_user.password)\n if test is True:\n return data\n else:\n raise ValidationError('Your previous password is incorrect', field_names=['confirm'])\n\n views = {\n 'user': [\n 'website',\n 'name',\n 'country',\n 'affiliation',\n 'bracket',\n 'id',\n 'oauth_id',\n ],\n 'self': [\n 'website',\n 'name',\n 'email',\n 'country',\n 'affiliation',\n 'bracket',\n 'id',\n 'oauth_id',\n 'password'\n ],\n 'admin': [\n 'website',\n 'name',\n 'created',\n 'country',\n 'banned',\n 'email',\n 'affiliation',\n 'secret',\n 'bracket',\n 'hidden',\n 'id',\n 'oauth_id',\n 'password',\n 'type',\n 'verified'\n ]\n }\n\n def __init__(self, view=None, *args, **kwargs):\n if view:\n if type(view) == str:\n kwargs['only'] = self.views[view]\n elif type(view) == list:\n kwargs['only'] = view\n\n super(UserSchema, self).__init__(*args, **kwargs)\n", "path": "CTFd/schemas/users.py"}, {"content": "from flask import session\nfrom CTFd.utils.user import is_admin, get_current_team, get_current_user\nfrom CTFd.models import Teams, Users\nfrom CTFd.utils.countries import lookup_country_code\nfrom six.moves.urllib.parse import urlparse, urljoin, quote, unquote\nfrom flask import request\nfrom marshmallow import ValidationError\nimport re\n\n\ndef is_safe_url(target):\n ref_url = urlparse(request.host_url)\n test_url = urlparse(urljoin(request.host_url, target))\n return test_url.scheme in ('http', 'https') and ref_url.netloc == test_url.netloc\n\n\ndef validate_url(url):\n return urlparse(url).scheme.startswith('http')\n\n\ndef validate_email(email):\n return bool(re.match(r\"(^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\\.[a-zA-Z0-9-.]+$)\", email))\n\n\ndef unique_email(email, model=Users):\n obj = model.query.filter_by(email=email).first()\n if is_admin():\n if obj:\n raise ValidationError('Email address has already been used')\n if obj and obj.id != get_current_user().id:\n raise ValidationError('Email address has already been used')\n\n\ndef validate_country_code(country_code):\n if lookup_country_code(country_code) is None:\n raise ValidationError('Invalid Country')\n", "path": "CTFd/utils/validators/__init__.py"}], "after_files": [{"content": "from flask import session\nfrom sqlalchemy.sql.expression import union_all\nfrom marshmallow import fields, post_load\nfrom marshmallow import validate, ValidationError, pre_load\nfrom marshmallow.decorators import validates_schema\nfrom marshmallow_sqlalchemy import field_for\nfrom CTFd.models import ma, Users\nfrom CTFd.utils import get_config\nfrom CTFd.utils.validators import unique_email, validate_country_code\nfrom CTFd.utils.user import is_admin, get_current_user\nfrom CTFd.utils.countries import lookup_country_code\nfrom CTFd.utils.crypto import verify_password, hash_password\nfrom CTFd.utils.email import check_email_is_whitelisted\n\n\nclass UserSchema(ma.ModelSchema):\n class Meta:\n model = Users\n include_fk = True\n dump_only = ('id', 'oauth_id', 'created')\n load_only = ('password',)\n\n name = field_for(\n Users,\n 'name',\n required=True,\n validate=[\n validate.Length(min=1, max=128, error='User names must not be empty')\n ]\n )\n email = field_for(\n Users,\n 'email',\n validate=[\n validate.Email('Emails must be a 
properly formatted email address'),\n validate.Length(min=1, max=128, error='Emails must not be empty'),\n ]\n )\n website = field_for(\n Users,\n 'website',\n validate=[\n # This is a dirty hack to let website accept empty strings so you can remove your website\n lambda website: validate.URL(\n error='Websites must be a proper URL starting with http or https',\n schemes={'http', 'https'}\n )(website) if website else True\n ]\n )\n country = field_for(\n Users,\n 'country',\n validate=[\n validate_country_code\n ]\n )\n password = field_for(\n Users,\n 'password',\n )\n\n @pre_load\n def validate_name(self, data):\n name = data.get('name')\n if name is None:\n return\n\n existing_user = Users.query.filter_by(name=name).first()\n if is_admin():\n user_id = data.get('id')\n if user_id:\n if existing_user and existing_user.id != user_id:\n raise ValidationError('User name has already been taken', field_names=['name'])\n else:\n if existing_user:\n raise ValidationError('User name has already been taken', field_names=['name'])\n else:\n current_user = get_current_user()\n if name == current_user.name:\n return data\n else:\n name_changes = get_config('name_changes', default=True)\n if bool(name_changes) is False:\n raise ValidationError('Name changes are disabled', field_names=['name'])\n if existing_user:\n raise ValidationError('User name has already been taken', field_names=['name'])\n\n @pre_load\n def validate_email(self, data):\n email = data.get('email')\n if email is None:\n return\n\n existing_user = Users.query.filter_by(email=email).first()\n\n if is_admin():\n user_id = data.get('id')\n if user_id:\n if existing_user and existing_user.id != user_id:\n raise ValidationError('Email address has already been used', field_names=['email'])\n else:\n if existing_user:\n raise ValidationError('Email address has already been used', field_names=['email'])\n else:\n current_user = get_current_user()\n if email == current_user.email:\n return data\n else:\n if existing_user:\n raise ValidationError('Email address has already been used', field_names=['email'])\n if check_email_is_whitelisted(email) is False:\n raise ValidationError(\n \"Only email addresses under {domains} may register\".format(\n domains=get_config('domain_whitelist')\n ),\n field_names=['email']\n )\n if get_config('verify_emails'):\n current_user.verified = False\n\n @pre_load\n def validate_password_confirmation(self, data):\n password = data.get('password')\n confirm = data.get('confirm')\n target_user = get_current_user()\n\n if is_admin():\n pass\n else:\n if password and (bool(confirm) is False):\n raise ValidationError('Please confirm your current password', field_names=['confirm'])\n\n if password and confirm:\n test = verify_password(plaintext=confirm, ciphertext=target_user.password)\n if test is True:\n return data\n else:\n raise ValidationError('Your previous password is incorrect', field_names=['confirm'])\n else:\n data.pop('password', None)\n data.pop('confirm', None)\n\n views = {\n 'user': [\n 'website',\n 'name',\n 'country',\n 'affiliation',\n 'bracket',\n 'id',\n 'oauth_id',\n ],\n 'self': [\n 'website',\n 'name',\n 'email',\n 'country',\n 'affiliation',\n 'bracket',\n 'id',\n 'oauth_id',\n 'password'\n ],\n 'admin': [\n 'website',\n 'name',\n 'created',\n 'country',\n 'banned',\n 'email',\n 'affiliation',\n 'secret',\n 'bracket',\n 'hidden',\n 'id',\n 'oauth_id',\n 'password',\n 'type',\n 'verified'\n ]\n }\n\n def __init__(self, view=None, *args, **kwargs):\n if view:\n if type(view) == str:\n 
kwargs['only'] = self.views[view]\n elif type(view) == list:\n kwargs['only'] = view\n\n super(UserSchema, self).__init__(*args, **kwargs)\n", "path": "CTFd/schemas/users.py"}, {"content": "from flask import session\nfrom CTFd.utils.user import is_admin, get_current_team, get_current_user\nfrom CTFd.models import Teams, Users\nfrom CTFd.utils.countries import lookup_country_code\nfrom six.moves.urllib.parse import urlparse, urljoin, quote, unquote\nfrom flask import request\nfrom marshmallow import ValidationError\nimport re\n\n\ndef is_safe_url(target):\n ref_url = urlparse(request.host_url)\n test_url = urlparse(urljoin(request.host_url, target))\n return test_url.scheme in ('http', 'https') and ref_url.netloc == test_url.netloc\n\n\ndef validate_url(url):\n return urlparse(url).scheme.startswith('http')\n\n\ndef validate_email(email):\n return bool(re.match(r\"(^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\\.[a-zA-Z0-9-.]+$)\", email))\n\n\ndef unique_email(email, model=Users):\n obj = model.query.filter_by(email=email).first()\n if is_admin():\n if obj:\n raise ValidationError('Email address has already been used')\n if obj and obj.id != get_current_user().id:\n raise ValidationError('Email address has already been used')\n\n\ndef validate_country_code(country_code):\n if country_code.strip() == \"\":\n return\n if lookup_country_code(country_code) is None:\n raise ValidationError('Invalid Country')\n", "path": "CTFd/utils/validators/__init__.py"}]} | 2,415 | 519 |
gh_patches_debug_526 | rasdani/github-patches | git_diff | Parsl__parsl-2302 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove parsl container bits
This issue is to remind us to remove Parsl container support and update the docs as soon as the funcX executor is integrated-- we should switch to recommending container support through it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/app1/app1.py`
Content:
```
1
2 def predict(list_items):
3 """Returns the double of the items"""
4 return [i*2 for i in list_items]
5
```
Path: `docker/app2/app2.py`
Content:
```
1
2 def predict(list_items):
3 """Returns items+10"""
4 return [i+10 for i in list_items]
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docker/app1/app1.py b/docker/app1/app1.py
deleted file mode 100644
--- a/docker/app1/app1.py
+++ /dev/null
@@ -1,4 +0,0 @@
-
-def predict(list_items):
- """Returns the double of the items"""
- return [i*2 for i in list_items]
diff --git a/docker/app2/app2.py b/docker/app2/app2.py
deleted file mode 100644
--- a/docker/app2/app2.py
+++ /dev/null
@@ -1,4 +0,0 @@
-
-def predict(list_items):
- """Returns items+10"""
- return [i+10 for i in list_items]
| {"golden_diff": "diff --git a/docker/app1/app1.py b/docker/app1/app1.py\ndeleted file mode 100644\n--- a/docker/app1/app1.py\n+++ /dev/null\n@@ -1,4 +0,0 @@\n-\n-def predict(list_items):\n- \"\"\"Returns the double of the items\"\"\"\n- return [i*2 for i in list_items]\ndiff --git a/docker/app2/app2.py b/docker/app2/app2.py\ndeleted file mode 100644\n--- a/docker/app2/app2.py\n+++ /dev/null\n@@ -1,4 +0,0 @@\n-\n-def predict(list_items):\n- \"\"\"Returns items+10\"\"\"\n- return [i+10 for i in list_items]\n", "issue": "Remove parsl container bits\nThis issue is to remind us to remove Parsl container support and update the docs as soon as the funcX executor is integrated-- we should switch to recommending container support through it.\n", "before_files": [{"content": "\ndef predict(list_items):\n \"\"\"Returns the double of the items\"\"\"\n return [i*2 for i in list_items]\n", "path": "docker/app1/app1.py"}, {"content": "\ndef predict(list_items):\n \"\"\"Returns items+10\"\"\"\n return [i+10 for i in list_items]\n", "path": "docker/app2/app2.py"}], "after_files": [{"content": null, "path": "docker/app1/app1.py"}, {"content": null, "path": "docker/app2/app2.py"}]} | 383 | 164 |
gh_patches_debug_38871 | rasdani/github-patches | git_diff | ansible-collections__community.general-4360 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ldap_entry: recursive deletion
### Summary
I would like to delete a whole branch of my ldap tree with `ldap_entry` but it seems recursive deletions are not supported.
I suggest handling recursive deletions.
### Issue Type
Feature Idea
### Component Name
ldap_entry
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: delete all the users
ldap_entry:
server_uri: "{{ ldap_api_url }}"
bind_dn: "{{ ldap_bind_id }}"
bind_pw: "{{ ldap_admin_password }}"
dn: "{{ ldap_users_base }}"
state: absent
```
```python
The full traceback is:
Traceback (most recent call last):
File "master:~/.ansible/collections/ansible_collections/community/general/plugins/modules/ldap_entry.py", line 229, in main
File "master:~/.ansible/collections/ansible_collections/community/general/plugins/modules/ldap_entry.py", line 166, in _delete
File "/usr/lib/python3/dist-packages/ldap/ldapobject.py", line 560, in delete_s
return self.delete_ext_s(dn,None,None)
File "/usr/lib/python3/dist-packages/ldap/ldapobject.py", line 553, in delete_ext_s
resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout)
File "/usr/lib/python3/dist-packages/ldap/ldapobject.py", line 748, in result3
resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4(
File "/usr/lib/python3/dist-packages/ldap/ldapobject.py", line 758, in result4
ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)
File "/usr/lib/python3/dist-packages/ldap/ldapobject.py", line 331, in _ldap_call
reraise(exc_type, exc_value, exc_traceback)
File "/usr/lib/python3/dist-packages/ldap/compat.py", line 44, in reraise
raise exc_value
File "/usr/lib/python3/dist-packages/ldap/ldapobject.py", line 315, in _ldap_call
result = func(*args,**kwargs)
ldap.NOT_ALLOWED_ON_NONLEAF: {'desc': 'Operation not allowed on non-leaf', 'info': 'subordinate objects must be deleted first'}
fatal: [brumaire.yaal.coop]: FAILED! => {
"changed": false,
"details": "{'desc': 'Operation not allowed on non-leaf', 'info': 'subordinate objects must be deleted first'}",
"invocation": {
"module_args": {
"attributes": {
"objectClass": null
},
"bind_dn": "cn=admin,dc=mydomain,dc=tld",
"bind_pw": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"dn": "ou=Users,dc=mydomain,dc=tld",
"objectClass": null,
"params": null,
"server_uri": "ldapi:///",
"start_tls": false,
"state": "absent",
"validate_certs": true
}
},
"msg": "Entry action failed."
```
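(Illustrative sketch, not part of the original report.) One way a recursive delete can be expressed with python-ldap is to enumerate the subtree and remove the deepest entries first, so that no non-leaf deletion is ever attempted. A minimal sketch, assuming an already-bound `connection`:

```python
import ldap

def delete_subtree(connection, base_dn):
    # List every entry under (and including) base_dn; attribute values are not needed.
    entries = connection.search_s(base_dn, ldap.SCOPE_SUBTREE, attrlist=['dn'])
    # A child's DN always ends with its parent's DN, so longer DNs are deeper:
    # deleting in descending DN length removes leaves before their parents.
    for dn, _attrs in sorted(entries, key=lambda entry: len(entry[0]), reverse=True):
        connection.delete_s(dn)
```

Servers that support the subtree-delete control (OID 1.2.840.113556.1.4.805) can instead perform this in a single `delete_ext_s` call with that control attached.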
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/modules/net_tools/ldap/ldap_entry.py`
Content:
```
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # Copyright: (c) 2016, Peter Sagerson <[email protected]>
5 # Copyright: (c) 2016, Jiri Tyr <[email protected]>
6 #
7 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
8
9 from __future__ import absolute_import, division, print_function
10 __metaclass__ = type
11
12
13 DOCUMENTATION = '''
14 ---
15 module: ldap_entry
16 short_description: Add or remove LDAP entries.
17 description:
18 - Add or remove LDAP entries. This module only asserts the existence or
19 non-existence of an LDAP entry, not its attributes. To assert the
20 attribute values of an entry, see M(community.general.ldap_attrs).
21 notes:
22 - The default authentication settings will attempt to use a SASL EXTERNAL
23 bind over a UNIX domain socket. This works well with the default Ubuntu
24 install for example, which includes a cn=peercred,cn=external,cn=auth ACL
25 rule allowing root to modify the server configuration. If you need to use
26 a simple bind to access your server, pass the credentials in I(bind_dn)
27 and I(bind_pw).
28 author:
29 - Jiri Tyr (@jtyr)
30 requirements:
31 - python-ldap
32 options:
33 attributes:
34 description:
35 - If I(state=present), attributes necessary to create an entry. Existing
36 entries are never modified. To assert specific attribute values on an
37 existing entry, use M(community.general.ldap_attrs) module instead.
38 type: dict
39 objectClass:
40 description:
41 - If I(state=present), value or list of values to use when creating
42 the entry. It can either be a string or an actual list of
43 strings.
44 type: list
45 elements: str
46 state:
47 description:
48 - The target state of the entry.
49 choices: [present, absent]
50 default: present
51 type: str
52 extends_documentation_fragment:
53 - community.general.ldap.documentation
54
55 '''
56
57
58 EXAMPLES = """
59 - name: Make sure we have a parent entry for users
60 community.general.ldap_entry:
61 dn: ou=users,dc=example,dc=com
62 objectClass: organizationalUnit
63
64 - name: Make sure we have an admin user
65 community.general.ldap_entry:
66 dn: cn=admin,dc=example,dc=com
67 objectClass:
68 - simpleSecurityObject
69 - organizationalRole
70 attributes:
71 description: An LDAP administrator
72 userPassword: "{SSHA}tabyipcHzhwESzRaGA7oQ/SDoBZQOGND"
73
74 - name: Get rid of an old entry
75 community.general.ldap_entry:
76 dn: ou=stuff,dc=example,dc=com
77 state: absent
78 server_uri: ldap://localhost/
79 bind_dn: cn=admin,dc=example,dc=com
80 bind_pw: password
81
82 #
83 # The same as in the previous example but with the authentication details
84 # stored in the ldap_auth variable:
85 #
86 # ldap_auth:
87 # server_uri: ldap://localhost/
88 # bind_dn: cn=admin,dc=example,dc=com
89 # bind_pw: password
90 #
91 # In the example below, 'args' is a task keyword, passed at the same level as the module
92 - name: Get rid of an old entry
93 community.general.ldap_entry:
94 dn: ou=stuff,dc=example,dc=com
95 state: absent
96 args: "{{ ldap_auth }}"
97 """
98
99
100 RETURN = """
101 # Default return values
102 """
103
104 import traceback
105
106 from ansible.module_utils.basic import AnsibleModule, missing_required_lib
107 from ansible.module_utils.common.text.converters import to_native, to_bytes
108 from ansible_collections.community.general.plugins.module_utils.ldap import LdapGeneric, gen_specs
109
110 LDAP_IMP_ERR = None
111 try:
112 import ldap.modlist
113
114 HAS_LDAP = True
115 except ImportError:
116 LDAP_IMP_ERR = traceback.format_exc()
117 HAS_LDAP = False
118
119
120 class LdapEntry(LdapGeneric):
121 def __init__(self, module):
122 LdapGeneric.__init__(self, module)
123
124 # Shortcuts
125 self.state = self.module.params['state']
126
127 # Add the objectClass into the list of attributes
128 self.module.params['attributes']['objectClass'] = (
129 self.module.params['objectClass'])
130
131 # Load attributes
132 if self.state == 'present':
133 self.attrs = self._load_attrs()
134
135 def _load_attrs(self):
136 """ Turn attribute's value to array. """
137 attrs = {}
138
139 for name, value in self.module.params['attributes'].items():
140 if isinstance(value, list):
141 attrs[name] = list(map(to_bytes, value))
142 else:
143 attrs[name] = [to_bytes(value)]
144
145 return attrs
146
147 def add(self):
148 """ If self.dn does not exist, returns a callable that will add it. """
149 def _add():
150 self.connection.add_s(self.dn, modlist)
151
152 if not self._is_entry_present():
153 modlist = ldap.modlist.addModlist(self.attrs)
154 action = _add
155 else:
156 action = None
157
158 return action
159
160 def delete(self):
161 """ If self.dn exists, returns a callable that will delete it. """
162 def _delete():
163 self.connection.delete_s(self.dn)
164
165 if self._is_entry_present():
166 action = _delete
167 else:
168 action = None
169
170 return action
171
172 def _is_entry_present(self):
173 try:
174 self.connection.search_s(self.dn, ldap.SCOPE_BASE)
175 except ldap.NO_SUCH_OBJECT:
176 is_present = False
177 else:
178 is_present = True
179
180 return is_present
181
182
183 def main():
184 module = AnsibleModule(
185 argument_spec=gen_specs(
186 attributes=dict(default={}, type='dict'),
187 objectClass=dict(type='list', elements='str'),
188 state=dict(default='present', choices=['present', 'absent']),
189 ),
190 required_if=[('state', 'present', ['objectClass'])],
191 supports_check_mode=True,
192 )
193
194 if not HAS_LDAP:
195 module.fail_json(msg=missing_required_lib('python-ldap'),
196 exception=LDAP_IMP_ERR)
197
198 state = module.params['state']
199
200 # Instantiate the LdapEntry object
201 ldap = LdapEntry(module)
202
203 # Get the action function
204 if state == 'present':
205 action = ldap.add()
206 elif state == 'absent':
207 action = ldap.delete()
208
209 # Perform the action
210 if action is not None and not module.check_mode:
211 try:
212 action()
213 except Exception as e:
214 module.fail_json(msg="Entry action failed.", details=to_native(e), exception=traceback.format_exc())
215
216 module.exit_json(changed=(action is not None))
217
218
219 if __name__ == '__main__':
220 main()
221
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugins/modules/net_tools/ldap/ldap_entry.py b/plugins/modules/net_tools/ldap/ldap_entry.py
--- a/plugins/modules/net_tools/ldap/ldap_entry.py
+++ b/plugins/modules/net_tools/ldap/ldap_entry.py
@@ -49,6 +49,13 @@
choices: [present, absent]
default: present
type: str
+ recursive:
+ description:
+ - If I(state=delete), a flag indicating whether a single entry or the
+ whole branch must be deleted.
+ type: bool
+ default: false
+ version_added: 4.6.0
extends_documentation_fragment:
- community.general.ldap.documentation
@@ -110,6 +117,7 @@
LDAP_IMP_ERR = None
try:
import ldap.modlist
+ import ldap.controls
HAS_LDAP = True
except ImportError:
@@ -123,6 +131,7 @@
# Shortcuts
self.state = self.module.params['state']
+ self.recursive = self.module.params['recursive']
# Add the objectClass into the list of attributes
self.module.params['attributes']['objectClass'] = (
@@ -158,12 +167,29 @@
return action
def delete(self):
- """ If self.dn exists, returns a callable that will delete it. """
+ """ If self.dn exists, returns a callable that will delete either
+ the item itself if the recursive option is not set or the whole branch
+ if it is. """
def _delete():
self.connection.delete_s(self.dn)
+ def _delete_recursive():
+ """ Attempt recurive deletion using the subtree-delete control.
+ If that fails, do it manually. """
+ try:
+ subtree_delete = ldap.controls.ValueLessRequestControl('1.2.840.113556.1.4.805')
+ self.connection.delete_ext_s(self.dn, serverctrls=[subtree_delete])
+ except ldap.NOT_ALLOWED_ON_NONLEAF:
+ search = self.connection.search_s(self.dn, ldap.SCOPE_SUBTREE, attrlist=('dn',))
+ search.reverse()
+ for entry in search:
+ self.connection.delete_s(entry[0])
+
if self._is_entry_present():
- action = _delete
+ if self.recursive:
+ action = _delete_recursive
+ else:
+ action = _delete
else:
action = None
@@ -186,6 +212,7 @@
attributes=dict(default={}, type='dict'),
objectClass=dict(type='list', elements='str'),
state=dict(default='present', choices=['present', 'absent']),
+ recursive=dict(default=False, type='bool'),
),
required_if=[('state', 'present', ['objectClass'])],
supports_check_mode=True,
| {"golden_diff": "diff --git a/plugins/modules/net_tools/ldap/ldap_entry.py b/plugins/modules/net_tools/ldap/ldap_entry.py\n--- a/plugins/modules/net_tools/ldap/ldap_entry.py\n+++ b/plugins/modules/net_tools/ldap/ldap_entry.py\n@@ -49,6 +49,13 @@\n choices: [present, absent]\n default: present\n type: str\n+ recursive:\n+ description:\n+ - If I(state=delete), a flag indicating whether a single entry or the\n+ whole branch must be deleted.\n+ type: bool\n+ default: false\n+ version_added: 4.6.0\n extends_documentation_fragment:\n - community.general.ldap.documentation\n \n@@ -110,6 +117,7 @@\n LDAP_IMP_ERR = None\n try:\n import ldap.modlist\n+ import ldap.controls\n \n HAS_LDAP = True\n except ImportError:\n@@ -123,6 +131,7 @@\n \n # Shortcuts\n self.state = self.module.params['state']\n+ self.recursive = self.module.params['recursive']\n \n # Add the objectClass into the list of attributes\n self.module.params['attributes']['objectClass'] = (\n@@ -158,12 +167,29 @@\n return action\n \n def delete(self):\n- \"\"\" If self.dn exists, returns a callable that will delete it. \"\"\"\n+ \"\"\" If self.dn exists, returns a callable that will delete either\n+ the item itself if the recursive option is not set or the whole branch\n+ if it is. \"\"\"\n def _delete():\n self.connection.delete_s(self.dn)\n \n+ def _delete_recursive():\n+ \"\"\" Attempt recurive deletion using the subtree-delete control.\n+ If that fails, do it manually. \"\"\"\n+ try:\n+ subtree_delete = ldap.controls.ValueLessRequestControl('1.2.840.113556.1.4.805')\n+ self.connection.delete_ext_s(self.dn, serverctrls=[subtree_delete])\n+ except ldap.NOT_ALLOWED_ON_NONLEAF:\n+ search = self.connection.search_s(self.dn, ldap.SCOPE_SUBTREE, attrlist=('dn',))\n+ search.reverse()\n+ for entry in search:\n+ self.connection.delete_s(entry[0])\n+\n if self._is_entry_present():\n- action = _delete\n+ if self.recursive:\n+ action = _delete_recursive\n+ else:\n+ action = _delete\n else:\n action = None\n \n@@ -186,6 +212,7 @@\n attributes=dict(default={}, type='dict'),\n objectClass=dict(type='list', elements='str'),\n state=dict(default='present', choices=['present', 'absent']),\n+ recursive=dict(default=False, type='bool'),\n ),\n required_if=[('state', 'present', ['objectClass'])],\n supports_check_mode=True,\n", "issue": "ldap_entry: recursive deletion\n### Summary\n\nI would like to delete a whole branch of my ldap tree with `ldap_entry` but it seems recursive deletions are not supported.\r\n\r\nI suggest handling recursive deletions.\n\n### Issue Type\n\nFeature Idea\n\n### Component Name\n\nldap_entry\n\n### Additional Information\n\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml (paste below)\r\n- name: delete all the users\r\n ldap_entry:\r\n server_uri: \"{{ ldap_api_url }}\"\r\n bind_dn: \"{{ ldap_bind_id }}\"\r\n bind_pw: \"{{ ldap_admin_password }}\"\r\n dn: \"{{ ldap_users_base }}\"\r\n state: absent\r\n```\r\n```python\r\nThe full traceback is:\r\nTraceback (most recent call last):\r\n File \"master:~/.ansible/collections/ansible_collections/community/general/plugins/modules/ldap_entry.py\", line 229, in main\r\n File \"master:~/.ansible/collections/ansible_collections/community/general/plugins/modules/ldap_entry.py\", line 166, in _delete\r\n File \"/usr/lib/python3/dist-packages/ldap/ldapobject.py\", line 560, in delete_s\r\n return self.delete_ext_s(dn,None,None)\r\n File \"/usr/lib/python3/dist-packages/ldap/ldapobject.py\", line 553, in delete_ext_s\r\n resp_type, resp_data, resp_msgid, 
resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout)\r\n File \"/usr/lib/python3/dist-packages/ldap/ldapobject.py\", line 748, in result3\r\n resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4(\r\n File \"/usr/lib/python3/dist-packages/ldap/ldapobject.py\", line 758, in result4\r\n ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)\r\n File \"/usr/lib/python3/dist-packages/ldap/ldapobject.py\", line 331, in _ldap_call\r\n reraise(exc_type, exc_value, exc_traceback)\r\n File \"/usr/lib/python3/dist-packages/ldap/compat.py\", line 44, in reraise\r\n raise exc_value\r\n File \"/usr/lib/python3/dist-packages/ldap/ldapobject.py\", line 315, in _ldap_call\r\n result = func(*args,**kwargs)\r\nldap.NOT_ALLOWED_ON_NONLEAF: {'desc': 'Operation not allowed on non-leaf', 'info': 'subordinate objects must be deleted first'}\r\nfatal: [brumaire.yaal.coop]: FAILED! => {\r\n \"changed\": false,\r\n \"details\": \"{'desc': 'Operation not allowed on non-leaf', 'info': 'subordinate objects must be deleted first'}\",\r\n \"invocation\": {\r\n \"module_args\": {\r\n \"attributes\": {\r\n \"objectClass\": null\r\n },\r\n \"bind_dn\": \"cn=admin,dc=mydomain,dc=tld\",\r\n \"bind_pw\": \"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\",\r\n \"dn\": \"ou=Users,dc=mydomain,dc=tld\",\r\n \"objectClass\": null,\r\n \"params\": null,\r\n \"server_uri\": \"ldapi:///\",\r\n \"start_tls\": false,\r\n \"state\": \"absent\",\r\n \"validate_certs\": true\r\n }\r\n },\r\n \"msg\": \"Entry action failed.\"\r\n```\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright: (c) 2016, Peter Sagerson <[email protected]>\n# Copyright: (c) 2016, Jiri Tyr <[email protected]>\n#\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = '''\n---\nmodule: ldap_entry\nshort_description: Add or remove LDAP entries.\ndescription:\n - Add or remove LDAP entries. This module only asserts the existence or\n non-existence of an LDAP entry, not its attributes. To assert the\n attribute values of an entry, see M(community.general.ldap_attrs).\nnotes:\n - The default authentication settings will attempt to use a SASL EXTERNAL\n bind over a UNIX domain socket. This works well with the default Ubuntu\n install for example, which includes a cn=peercred,cn=external,cn=auth ACL\n rule allowing root to modify the server configuration. If you need to use\n a simple bind to access your server, pass the credentials in I(bind_dn)\n and I(bind_pw).\nauthor:\n - Jiri Tyr (@jtyr)\nrequirements:\n - python-ldap\noptions:\n attributes:\n description:\n - If I(state=present), attributes necessary to create an entry. Existing\n entries are never modified. To assert specific attribute values on an\n existing entry, use M(community.general.ldap_attrs) module instead.\n type: dict\n objectClass:\n description:\n - If I(state=present), value or list of values to use when creating\n the entry. 
It can either be a string or an actual list of\n strings.\n type: list\n elements: str\n state:\n description:\n - The target state of the entry.\n choices: [present, absent]\n default: present\n type: str\nextends_documentation_fragment:\n- community.general.ldap.documentation\n\n'''\n\n\nEXAMPLES = \"\"\"\n- name: Make sure we have a parent entry for users\n community.general.ldap_entry:\n dn: ou=users,dc=example,dc=com\n objectClass: organizationalUnit\n\n- name: Make sure we have an admin user\n community.general.ldap_entry:\n dn: cn=admin,dc=example,dc=com\n objectClass:\n - simpleSecurityObject\n - organizationalRole\n attributes:\n description: An LDAP administrator\n userPassword: \"{SSHA}tabyipcHzhwESzRaGA7oQ/SDoBZQOGND\"\n\n- name: Get rid of an old entry\n community.general.ldap_entry:\n dn: ou=stuff,dc=example,dc=com\n state: absent\n server_uri: ldap://localhost/\n bind_dn: cn=admin,dc=example,dc=com\n bind_pw: password\n\n#\n# The same as in the previous example but with the authentication details\n# stored in the ldap_auth variable:\n#\n# ldap_auth:\n# server_uri: ldap://localhost/\n# bind_dn: cn=admin,dc=example,dc=com\n# bind_pw: password\n#\n# In the example below, 'args' is a task keyword, passed at the same level as the module\n- name: Get rid of an old entry\n community.general.ldap_entry:\n dn: ou=stuff,dc=example,dc=com\n state: absent\n args: \"{{ ldap_auth }}\"\n\"\"\"\n\n\nRETURN = \"\"\"\n# Default return values\n\"\"\"\n\nimport traceback\n\nfrom ansible.module_utils.basic import AnsibleModule, missing_required_lib\nfrom ansible.module_utils.common.text.converters import to_native, to_bytes\nfrom ansible_collections.community.general.plugins.module_utils.ldap import LdapGeneric, gen_specs\n\nLDAP_IMP_ERR = None\ntry:\n import ldap.modlist\n\n HAS_LDAP = True\nexcept ImportError:\n LDAP_IMP_ERR = traceback.format_exc()\n HAS_LDAP = False\n\n\nclass LdapEntry(LdapGeneric):\n def __init__(self, module):\n LdapGeneric.__init__(self, module)\n\n # Shortcuts\n self.state = self.module.params['state']\n\n # Add the objectClass into the list of attributes\n self.module.params['attributes']['objectClass'] = (\n self.module.params['objectClass'])\n\n # Load attributes\n if self.state == 'present':\n self.attrs = self._load_attrs()\n\n def _load_attrs(self):\n \"\"\" Turn attribute's value to array. \"\"\"\n attrs = {}\n\n for name, value in self.module.params['attributes'].items():\n if isinstance(value, list):\n attrs[name] = list(map(to_bytes, value))\n else:\n attrs[name] = [to_bytes(value)]\n\n return attrs\n\n def add(self):\n \"\"\" If self.dn does not exist, returns a callable that will add it. \"\"\"\n def _add():\n self.connection.add_s(self.dn, modlist)\n\n if not self._is_entry_present():\n modlist = ldap.modlist.addModlist(self.attrs)\n action = _add\n else:\n action = None\n\n return action\n\n def delete(self):\n \"\"\" If self.dn exists, returns a callable that will delete it. 
\"\"\"\n def _delete():\n self.connection.delete_s(self.dn)\n\n if self._is_entry_present():\n action = _delete\n else:\n action = None\n\n return action\n\n def _is_entry_present(self):\n try:\n self.connection.search_s(self.dn, ldap.SCOPE_BASE)\n except ldap.NO_SUCH_OBJECT:\n is_present = False\n else:\n is_present = True\n\n return is_present\n\n\ndef main():\n module = AnsibleModule(\n argument_spec=gen_specs(\n attributes=dict(default={}, type='dict'),\n objectClass=dict(type='list', elements='str'),\n state=dict(default='present', choices=['present', 'absent']),\n ),\n required_if=[('state', 'present', ['objectClass'])],\n supports_check_mode=True,\n )\n\n if not HAS_LDAP:\n module.fail_json(msg=missing_required_lib('python-ldap'),\n exception=LDAP_IMP_ERR)\n\n state = module.params['state']\n\n # Instantiate the LdapEntry object\n ldap = LdapEntry(module)\n\n # Get the action function\n if state == 'present':\n action = ldap.add()\n elif state == 'absent':\n action = ldap.delete()\n\n # Perform the action\n if action is not None and not module.check_mode:\n try:\n action()\n except Exception as e:\n module.fail_json(msg=\"Entry action failed.\", details=to_native(e), exception=traceback.format_exc())\n\n module.exit_json(changed=(action is not None))\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/net_tools/ldap/ldap_entry.py"}], "after_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright: (c) 2016, Peter Sagerson <[email protected]>\n# Copyright: (c) 2016, Jiri Tyr <[email protected]>\n#\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = '''\n---\nmodule: ldap_entry\nshort_description: Add or remove LDAP entries.\ndescription:\n - Add or remove LDAP entries. This module only asserts the existence or\n non-existence of an LDAP entry, not its attributes. To assert the\n attribute values of an entry, see M(community.general.ldap_attrs).\nnotes:\n - The default authentication settings will attempt to use a SASL EXTERNAL\n bind over a UNIX domain socket. This works well with the default Ubuntu\n install for example, which includes a cn=peercred,cn=external,cn=auth ACL\n rule allowing root to modify the server configuration. If you need to use\n a simple bind to access your server, pass the credentials in I(bind_dn)\n and I(bind_pw).\nauthor:\n - Jiri Tyr (@jtyr)\nrequirements:\n - python-ldap\noptions:\n attributes:\n description:\n - If I(state=present), attributes necessary to create an entry. Existing\n entries are never modified. To assert specific attribute values on an\n existing entry, use M(community.general.ldap_attrs) module instead.\n type: dict\n objectClass:\n description:\n - If I(state=present), value or list of values to use when creating\n the entry. 
It can either be a string or an actual list of\n strings.\n type: list\n elements: str\n state:\n description:\n - The target state of the entry.\n choices: [present, absent]\n default: present\n type: str\n recursive:\n description:\n - If I(state=delete), a flag indicating whether a single entry or the\n whole branch must be deleted.\n type: bool\n default: false\n version_added: 4.6.0\nextends_documentation_fragment:\n- community.general.ldap.documentation\n\n'''\n\n\nEXAMPLES = \"\"\"\n- name: Make sure we have a parent entry for users\n community.general.ldap_entry:\n dn: ou=users,dc=example,dc=com\n objectClass: organizationalUnit\n\n- name: Make sure we have an admin user\n community.general.ldap_entry:\n dn: cn=admin,dc=example,dc=com\n objectClass:\n - simpleSecurityObject\n - organizationalRole\n attributes:\n description: An LDAP administrator\n userPassword: \"{SSHA}tabyipcHzhwESzRaGA7oQ/SDoBZQOGND\"\n\n- name: Get rid of an old entry\n community.general.ldap_entry:\n dn: ou=stuff,dc=example,dc=com\n state: absent\n server_uri: ldap://localhost/\n bind_dn: cn=admin,dc=example,dc=com\n bind_pw: password\n\n#\n# The same as in the previous example but with the authentication details\n# stored in the ldap_auth variable:\n#\n# ldap_auth:\n# server_uri: ldap://localhost/\n# bind_dn: cn=admin,dc=example,dc=com\n# bind_pw: password\n#\n# In the example below, 'args' is a task keyword, passed at the same level as the module\n- name: Get rid of an old entry\n community.general.ldap_entry:\n dn: ou=stuff,dc=example,dc=com\n state: absent\n args: \"{{ ldap_auth }}\"\n\"\"\"\n\n\nRETURN = \"\"\"\n# Default return values\n\"\"\"\n\nimport traceback\n\nfrom ansible.module_utils.basic import AnsibleModule, missing_required_lib\nfrom ansible.module_utils.common.text.converters import to_native, to_bytes\nfrom ansible_collections.community.general.plugins.module_utils.ldap import LdapGeneric, gen_specs\n\nLDAP_IMP_ERR = None\ntry:\n import ldap.modlist\n import ldap.controls\n\n HAS_LDAP = True\nexcept ImportError:\n LDAP_IMP_ERR = traceback.format_exc()\n HAS_LDAP = False\n\n\nclass LdapEntry(LdapGeneric):\n def __init__(self, module):\n LdapGeneric.__init__(self, module)\n\n # Shortcuts\n self.state = self.module.params['state']\n self.recursive = self.module.params['recursive']\n\n # Add the objectClass into the list of attributes\n self.module.params['attributes']['objectClass'] = (\n self.module.params['objectClass'])\n\n # Load attributes\n if self.state == 'present':\n self.attrs = self._load_attrs()\n\n def _load_attrs(self):\n \"\"\" Turn attribute's value to array. \"\"\"\n attrs = {}\n\n for name, value in self.module.params['attributes'].items():\n if isinstance(value, list):\n attrs[name] = list(map(to_bytes, value))\n else:\n attrs[name] = [to_bytes(value)]\n\n return attrs\n\n def add(self):\n \"\"\" If self.dn does not exist, returns a callable that will add it. \"\"\"\n def _add():\n self.connection.add_s(self.dn, modlist)\n\n if not self._is_entry_present():\n modlist = ldap.modlist.addModlist(self.attrs)\n action = _add\n else:\n action = None\n\n return action\n\n def delete(self):\n \"\"\" If self.dn exists, returns a callable that will delete either\n the item itself if the recursive option is not set or the whole branch\n if it is. \"\"\"\n def _delete():\n self.connection.delete_s(self.dn)\n\n def _delete_recursive():\n \"\"\" Attempt recurive deletion using the subtree-delete control.\n If that fails, do it manually. 
\"\"\"\n try:\n subtree_delete = ldap.controls.ValueLessRequestControl('1.2.840.113556.1.4.805')\n self.connection.delete_ext_s(self.dn, serverctrls=[subtree_delete])\n except ldap.NOT_ALLOWED_ON_NONLEAF:\n search = self.connection.search_s(self.dn, ldap.SCOPE_SUBTREE, attrlist=('dn',))\n search.reverse()\n for entry in search:\n self.connection.delete_s(entry[0])\n\n if self._is_entry_present():\n if self.recursive:\n action = _delete_recursive\n else:\n action = _delete\n else:\n action = None\n\n return action\n\n def _is_entry_present(self):\n try:\n self.connection.search_s(self.dn, ldap.SCOPE_BASE)\n except ldap.NO_SUCH_OBJECT:\n is_present = False\n else:\n is_present = True\n\n return is_present\n\n\ndef main():\n module = AnsibleModule(\n argument_spec=gen_specs(\n attributes=dict(default={}, type='dict'),\n objectClass=dict(type='list', elements='str'),\n state=dict(default='present', choices=['present', 'absent']),\n recursive=dict(default=False, type='bool'),\n ),\n required_if=[('state', 'present', ['objectClass'])],\n supports_check_mode=True,\n )\n\n if not HAS_LDAP:\n module.fail_json(msg=missing_required_lib('python-ldap'),\n exception=LDAP_IMP_ERR)\n\n state = module.params['state']\n\n # Instantiate the LdapEntry object\n ldap = LdapEntry(module)\n\n # Get the action function\n if state == 'present':\n action = ldap.add()\n elif state == 'absent':\n action = ldap.delete()\n\n # Perform the action\n if action is not None and not module.check_mode:\n try:\n action()\n except Exception as e:\n module.fail_json(msg=\"Entry action failed.\", details=to_native(e), exception=traceback.format_exc())\n\n module.exit_json(changed=(action is not None))\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/net_tools/ldap/ldap_entry.py"}]} | 3,104 | 644 |
gh_patches_debug_24047 | rasdani/github-patches | git_diff | pyinstaller__pyinstaller-8192 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
site-packages\django\contrib\sites has a management.py file which conflicts with the management directory, causing a tree lookup failure in _pyi_pkgutil_iter_modules because the directory name and the file name are the same
When I execute an exe generated by PyInstaller, it fails because django.contrib.sites has a management.py file, which conflicts with the package layout that by default expects management to be a directory.
Technically there is no management folder; find_commands, whose iterator is overridden by _pyi_pkgutil_iter_modules, expects a tree node that is a dict, but it finds the management.py file instead, so it fails with a str error.
Any app that has both a management folder and a management.py will hit this issue. We can rename custom apps to avoid the file name clashing with the management folder, but how do we deal with standard Django apps like django.contrib.sites?
here is the stack trace
-> for pkg_name_part in pkg_prefix.parts:
(Pdb) pkg_prefix.parts
('django', 'contrib', 'sites', 'management', 'commands') ====> 'management' here is expected to be a dict; since there is no dict, the TOC instead contains the management.py file, so the lookup returns a str rather than a dict and the next .get() call fails.
(Pdb) c
Traceback (most recent call last):
File "manage.py", line 19, in <module>
execute_from_command_line(sys.argv)
File "django\core\management\__init__.py", line 442, in execute_from_command_line
File "django\core\management\__init__.py", line 424, in execute
File "django\core\management\__init__.py", line 222, in main_help_text
File "django\core\management\__init__.py", line 78, in get_commands
File "django\core\management\__init__.py", line 35, in find_commands
File "django\core\management\__init__.py", line 35, in <listcomp>
File "C:\PythonWebCode\testone\Lib\site-packages\PyInstaller\hooks\rthooks\pyi_rth_pkgutil.py", line 102, in _pyi_pkgutil_iter_modules
for pkg_name_part in pkg_prefix.parts:
AttributeError: 'str' object has no attribute 'get'
How can we address this issue ?
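To make the failure mode concrete, here is a minimal sketch that assumes a simplified TOC tree shaped the way _pyi_pkgutil_iter_modules expects it (package nodes are dicts, module leaves are empty strings); the tree literal below is hypothetical:

```python
# Hypothetical, simplified toc_tree: 'management' is a module leaf (str) because
# django/contrib/sites/management.py exists, not a package node (dict).
toc_tree = {'django': {'contrib': {'sites': {'management': ''}}}}

node = toc_tree
try:
    for part in ('django', 'contrib', 'sites', 'management', 'commands'):
        node = node.get(part)   # the rthook only guards against None here
except AttributeError as exc:
    print(exc)                  # 'str' object has no attribute 'get'
```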
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PyInstaller/utils/hooks/django.py`
Content:
```
1 # ----------------------------------------------------------------------------
2 # Copyright (c) 2005-2023, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License (version 2
5 # or later) with exception for distributing the bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
10 # ----------------------------------------------------------------------------
11 import os
12
13 from PyInstaller import isolated
14
15
16 @isolated.decorate
17 def django_dottedstring_imports(django_root_dir):
18 """
19 An isolated helper that returns list of all Django dependencies, parsed from the `mysite.settings` module.
20
21 NOTE: With newer version of Django this is most likely the part of PyInstaller that will be broken.
22
23 Tested with Django 2.2
24 """
25
26 import sys
27 import os
28
29 import PyInstaller.utils.misc
30 from PyInstaller.utils import hooks as hookutils
31
32 # Extra search paths to add to sys.path:
33 # - parent directory of the django_root_dir
34 # - django_root_dir itself; often, Django users do not specify absolute imports in the settings module.
35 search_paths = [
36 PyInstaller.utils.misc.get_path_to_toplevel_modules(django_root_dir),
37 django_root_dir,
38 ]
39 sys.path += search_paths
40
41 # Set the path to project's settings module
42 default_settings_module = os.path.basename(django_root_dir) + '.settings'
43 settings_module = os.environ.get('DJANGO_SETTINGS_MODULE', default_settings_module)
44 os.environ['DJANGO_SETTINGS_MODULE'] = settings_module
45
46 # Calling django.setup() avoids the exception AppRegistryNotReady() and also reads the user settings
47 # from DJANGO_SETTINGS_MODULE.
48 # https://stackoverflow.com/questions/24793351/django-appregistrynotready
49 import django # noqa: E402
50
51 django.setup()
52
53 # This allows to access all django settings even from the settings.py module.
54 from django.conf import settings # noqa: E402
55
56 hiddenimports = list(settings.INSTALLED_APPS)
57
58 # Do not fail script when settings does not have such attributes.
59 if hasattr(settings, 'TEMPLATE_CONTEXT_PROCESSORS'):
60 hiddenimports += list(settings.TEMPLATE_CONTEXT_PROCESSORS)
61
62 if hasattr(settings, 'TEMPLATE_LOADERS'):
63 hiddenimports += list(settings.TEMPLATE_LOADERS)
64
65 hiddenimports += [settings.ROOT_URLCONF]
66
67 def _remove_class(class_name):
68 return '.'.join(class_name.split('.')[0:-1])
69
70 #-- Changes in Django 1.7.
71
72 # Remove class names and keep just modules.
73 if hasattr(settings, 'AUTHENTICATION_BACKENDS'):
74 for cl in settings.AUTHENTICATION_BACKENDS:
75 cl = _remove_class(cl)
76 hiddenimports.append(cl)
77 if hasattr(settings, 'DEFAULT_FILE_STORAGE'):
78 cl = _remove_class(settings.DEFAULT_FILE_STORAGE)
79 hiddenimports.append(cl)
80 if hasattr(settings, 'FILE_UPLOAD_HANDLERS'):
81 for cl in settings.FILE_UPLOAD_HANDLERS:
82 cl = _remove_class(cl)
83 hiddenimports.append(cl)
84 if hasattr(settings, 'MIDDLEWARE_CLASSES'):
85 for cl in settings.MIDDLEWARE_CLASSES:
86 cl = _remove_class(cl)
87 hiddenimports.append(cl)
88 # Templates is a dict:
89 if hasattr(settings, 'TEMPLATES'):
90 for templ in settings.TEMPLATES:
91 backend = _remove_class(templ['BACKEND'])
92 hiddenimports += backend
93 # Include context_processors.
94 if hasattr(templ, 'OPTIONS'):
95 if hasattr(templ['OPTIONS'], 'context_processors'):
96 # Context processors are functions - strip last word.
97 mods = templ['OPTIONS']['context_processors']
98 mods = [_remove_class(x) for x in mods]
99 hiddenimports += mods
100 # Include database backends - it is a dict.
101 for v in settings.DATABASES.values():
102 hiddenimports.append(v['ENGINE'])
103
104 # Add templatetags and context processors for each installed app.
105 for app in settings.INSTALLED_APPS:
106 app_templatetag_module = app + '.templatetags'
107 app_ctx_proc_module = app + '.context_processors'
108 hiddenimports.append(app_templatetag_module)
109 hiddenimports += hookutils.collect_submodules(app_templatetag_module)
110 hiddenimports.append(app_ctx_proc_module)
111
112 # Deduplicate imports.
113 hiddenimports = list(set(hiddenimports))
114
115 # Return the hidden imports
116 return hiddenimports
117
118
119 def django_find_root_dir():
120 """
121 Return path to directory (top-level Python package) that contains main django files. Return None if no directory
122 was detected.
123
124 Main Django project directory contain files like '__init__.py', 'settings.py' and 'url.py'.
125
126 In Django 1.4+ the script 'manage.py' is not in the directory with 'settings.py' but usually one level up. We
127 need to detect this special case too.
128 """
129 # 'PyInstaller.config' cannot be imported as other top-level modules.
130 from PyInstaller.config import CONF
131
132 # Get the directory with manage.py. Manage.py is supplied to PyInstaller as the first main executable script.
133 manage_py = CONF['main_script']
134 manage_dir = os.path.dirname(os.path.abspath(manage_py))
135
136 # Get the Django root directory. The directory that contains settings.py and url.py. It could be the directory
137 # containing manage.py or any of its subdirectories.
138 settings_dir = None
139 files = set(os.listdir(manage_dir))
140 if ('settings.py' in files or 'settings' in files) and 'urls.py' in files:
141 settings_dir = manage_dir
142 else:
143 for f in files:
144 if os.path.isdir(os.path.join(manage_dir, f)):
145 subfiles = os.listdir(os.path.join(manage_dir, f))
146 # Subdirectory contains critical files.
147 if ('settings.py' in subfiles or 'settings' in subfiles) and 'urls.py' in subfiles:
148 settings_dir = os.path.join(manage_dir, f)
149 break # Find the first directory.
150
151 return settings_dir
152
```
Path: `PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2021-2023, PyInstaller Development Team.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: Apache-2.0
10 #-----------------------------------------------------------------------------
11 #
12 # This rthook overrides pkgutil.iter_modules with custom implementation that uses PyInstaller's PyiFrozenImporter to
13 # list sub-modules embedded in the PYZ archive. The non-embedded modules (binary extensions, or .pyc modules in
14 # noarchive build) are handled by original pkgutil iter_modules implementation (and consequently, python's FileFinder).
15 #
16 # The preferred way of adding support for iter_modules would be adding non-standard iter_modules() method to
17 # PyiFrozenImporter itself. However, that seems to work only for path entry finders (for use with sys.path_hooks), while
18 # PyInstaller's PyiFrozenImporter is registered as meta path finders (for use with sys.meta_path). Turning
19 # PyiFrozenImporter into path entry finder, would seemingly require the latter to support on-filesystem resources
20 # (e.g., extension modules) in addition to PYZ-embedded ones.
21 #
22 # Therefore, we instead opt for overriding pkgutil.iter_modules with custom implementation that augments the output of
23 # original implementation with contents of PYZ archive from PyiFrozenImporter's TOC.
24
25
26 def _pyi_rthook():
27 import pathlib
28 import pkgutil
29 import sys
30
31 from pyimod02_importers import PyiFrozenImporter
32 from _pyi_rth_utils import is_macos_app_bundle
33
34 _orig_pkgutil_iter_modules = pkgutil.iter_modules
35
36 def _pyi_pkgutil_iter_modules(path=None, prefix=''):
37 # Use original implementation to discover on-filesystem modules (binary extensions in regular builds, or both
38 # binary extensions and compiled pyc modules in noarchive debug builds).
39 yield from _orig_pkgutil_iter_modules(path, prefix)
40
41 # Find the instance of PyInstaller's PyiFrozenImporter.
42 for importer in pkgutil.iter_importers():
43 if isinstance(importer, PyiFrozenImporter):
44 break
45 else:
46 return
47
48 if path is None:
49 # Search for all top-level packages/modules in the PyiFrozenImporter's prefix tree.
50 for entry_name, entry_data in importer.toc_tree.items():
51 # Package nodes have dict for data, module nodes (leaves) have (empty) strings.
52 is_pkg = isinstance(entry_data, dict)
53 yield pkgutil.ModuleInfo(importer, prefix + entry_name, is_pkg)
54 else:
55 # Fully resolve sys._MEIPASS, in order to avoid path mis-matches when the given search paths also contain
56 # symbolic links and are already fully resolved. See #6537 for an example of such a problem with onefile
57 # build on macOS, where the temporary directory is placed under /var, which is actually a symbolic link
58 # to /private/var.
59 MEIPASS = pathlib.Path(sys._MEIPASS).resolve()
60
61 # For macOS .app bundles, the "true" sys._MEIPASS is `name.app/Contents/Frameworks`, but due to
62 # cross-linking, we must also consider `name.app/Contents/Resources`. See #7884.
63 if is_macos_app_bundle:
64 ALT_MEIPASS = (pathlib.Path(sys._MEIPASS).parent / "Resources").resolve()
65
66 # Process all given paths
67 seen_pkg_prefices = set()
68 for pkg_path in path:
69 # Fully resolve the given path, in case it contains symbolic links.
70 pkg_path = pathlib.Path(pkg_path).resolve()
71
72 # Try to compute package prefix, which is the remainder of the given path, relative to the sys._MEIPASS.
73 pkg_prefix = None
74 try:
75 pkg_prefix = pkg_path.relative_to(MEIPASS)
76 except ValueError: # ValueError: 'a' is not in the subpath of 'b'
77 pass
78
79 # For macOS .app bundle, try the alternative sys._MEIPASS
80 if pkg_prefix is None and is_macos_app_bundle:
81 try:
82 pkg_prefix = pkg_path.relative_to(ALT_MEIPASS)
83 except ValueError:
84 pass
85
86 # Given path is outside of sys._MEIPASS; ignore it.
87 if pkg_prefix is None:
88 continue
89
90 # If we are given multiple paths and they are either duplicated or resolve to the same package prefix,
91 # prevent duplication.
92 if pkg_prefix in seen_pkg_prefices:
93 continue
94 seen_pkg_prefices.add(pkg_prefix)
95
96 # Traverse the PyiFrozenImporter's prefix tree using components of the relative package path, starting
97 # at the tree root. This implicitly handles the case where the given path was actually sys._MEIPASS
98 # itself, as in this case pkg_prefix is pathlib.Path(".") with empty parts tuple.
99 tree_node = importer.toc_tree
100 for pkg_name_part in pkg_prefix.parts:
101 tree_node = tree_node.get(pkg_name_part)
102 if tree_node is None:
103 tree_node = {}
104 break
105
106 # List entries from the target node.
107 for entry_name, entry_data in tree_node.items():
108 is_pkg = isinstance(entry_data, dict)
109 yield pkgutil.ModuleInfo(importer, prefix + entry_name, is_pkg)
110
111 pkgutil.iter_modules = _pyi_pkgutil_iter_modules
112
113
114 _pyi_rthook()
115 del _pyi_rthook
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py b/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py
--- a/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py
+++ b/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py
@@ -99,7 +99,10 @@
tree_node = importer.toc_tree
for pkg_name_part in pkg_prefix.parts:
tree_node = tree_node.get(pkg_name_part)
- if tree_node is None:
+ if not isinstance(tree_node, dict):
+ # This check handles two cases:
+ # a) path does not exist (`tree_node` is `None`)
+ # b) path corresponds to a module instead of a package (`tree_node` is a leaf node (`str`)).
tree_node = {}
break
diff --git a/PyInstaller/utils/hooks/django.py b/PyInstaller/utils/hooks/django.py
--- a/PyInstaller/utils/hooks/django.py
+++ b/PyInstaller/utils/hooks/django.py
@@ -89,7 +89,7 @@
if hasattr(settings, 'TEMPLATES'):
for templ in settings.TEMPLATES:
backend = _remove_class(templ['BACKEND'])
- hiddenimports += backend
+ hiddenimports.append(backend)
# Include context_processors.
if hasattr(templ, 'OPTIONS'):
if hasattr(templ['OPTIONS'], 'context_processors'):
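For reference, a hypothetical standalone sketch of what the guarded traversal in the patch amounts to; the helper name is illustrative and not part of PyInstaller's API:

```python
def resolve_pkg_node(toc_tree, parts):
    """Walk the prefix tree; a missing path or a module leaf (str) means no sub-modules."""
    node = toc_tree
    for part in parts:
        node = node.get(part)
        if not isinstance(node, dict):
            return {}
    return node

# A management.py leaf no longer raises; the lookup simply yields no entries.
print(resolve_pkg_node({'django': {'contrib': {'sites': {'management': ''}}}},
                       ('django', 'contrib', 'sites', 'management', 'commands')))  # {}
```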
| {"golden_diff": "diff --git a/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py b/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py\n--- a/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py\n+++ b/PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py\n@@ -99,7 +99,10 @@\n tree_node = importer.toc_tree\n for pkg_name_part in pkg_prefix.parts:\n tree_node = tree_node.get(pkg_name_part)\n- if tree_node is None:\n+ if not isinstance(tree_node, dict):\n+ # This check handles two cases:\n+ # a) path does not exist (`tree_node` is `None`)\n+ # b) path corresponds to a module instead of a package (`tree_node` is a leaf node (`str`)).\n tree_node = {}\n break\n \ndiff --git a/PyInstaller/utils/hooks/django.py b/PyInstaller/utils/hooks/django.py\n--- a/PyInstaller/utils/hooks/django.py\n+++ b/PyInstaller/utils/hooks/django.py\n@@ -89,7 +89,7 @@\n if hasattr(settings, 'TEMPLATES'):\n for templ in settings.TEMPLATES:\n backend = _remove_class(templ['BACKEND'])\n- hiddenimports += backend\n+ hiddenimports.append(backend)\n # Include context_processors.\n if hasattr(templ, 'OPTIONS'):\n if hasattr(templ['OPTIONS'], 'context_processors'):\n", "issue": "site-packages\\django\\contrib\\sites has managment.py file which is conflicting with managment directory and causing a issue in tree get issue in _pyi_pkgutil_iter_modules as dirretory name and file are same \nWhen I execute a generated exe from pyinstaller, it is failing as django.contrib.sites has mangment.py file which is conflicting with package which by default will have management as directory.\r\n\r\nas technically there is no management folder but when find_commands whose iterator is override by _pyi_pkgutil_iter_modules id expecting a tree node with dict but it has a file with managment.py so it is failing as str error.\r\n\r\nIf any app has management folder and also managment.py it will yield in to this issue. We can rename custom apps to avoid the file name duplication with management folder but how to deal with standard dajngo apps like django.contrib.sites\r\n\r\nhere is the stack trace \r\n\r\n\r\n\r\n-> for pkg_name_part in pkg_prefix.parts:\r\n(Pdb) pkg_prefix.parts\r\n('django', 'contrib', 'sites', 'management', 'commands') ====> management here is expected a dict , as dict is not there it will be there in toc but as a file is there. 
it finds the file as str instead of dict and so is the assert.\r\n(Pdb) c\r\nTraceback (most recent call last):\r\n File \"manage.py\", line 19, in <module>\r\n execute_from_command_line(sys.argv)\r\n File \"django\\core\\management\\__init__.py\", line 442, in execute_from_command_line\r\n File \"django\\core\\management\\__init__.py\", line 424, in execute\r\n File \"django\\core\\management\\__init__.py\", line 222, in main_help_text\r\n File \"django\\core\\management\\__init__.py\", line 78, in get_commands\r\n File \"django\\core\\management\\__init__.py\", line 35, in find_commands\r\n File \"django\\core\\management\\__init__.py\", line 35, in <listcomp>\r\n File \"C:\\PythonWebCode\\testone\\Lib\\site-packages\\PyInstaller\\hooks\\rthooks\\pyi_rth_pkgutil.py\", line 102, in _pyi_pkgutil_iter_modules\r\n for pkg_name_part in pkg_prefix.parts:\r\nAttributeError: 'str' object has no attribute 'get'\r\n\r\nHow can we address this issue ?\r\n\r\n\n", "before_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (c) 2005-2023, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n# ----------------------------------------------------------------------------\nimport os\n\nfrom PyInstaller import isolated\n\n\[email protected]\ndef django_dottedstring_imports(django_root_dir):\n \"\"\"\n An isolated helper that returns list of all Django dependencies, parsed from the `mysite.settings` module.\n\n NOTE: With newer version of Django this is most likely the part of PyInstaller that will be broken.\n\n Tested with Django 2.2\n \"\"\"\n\n import sys\n import os\n\n import PyInstaller.utils.misc\n from PyInstaller.utils import hooks as hookutils\n\n # Extra search paths to add to sys.path:\n # - parent directory of the django_root_dir\n # - django_root_dir itself; often, Django users do not specify absolute imports in the settings module.\n search_paths = [\n PyInstaller.utils.misc.get_path_to_toplevel_modules(django_root_dir),\n django_root_dir,\n ]\n sys.path += search_paths\n\n # Set the path to project's settings module\n default_settings_module = os.path.basename(django_root_dir) + '.settings'\n settings_module = os.environ.get('DJANGO_SETTINGS_MODULE', default_settings_module)\n os.environ['DJANGO_SETTINGS_MODULE'] = settings_module\n\n # Calling django.setup() avoids the exception AppRegistryNotReady() and also reads the user settings\n # from DJANGO_SETTINGS_MODULE.\n # https://stackoverflow.com/questions/24793351/django-appregistrynotready\n import django # noqa: E402\n\n django.setup()\n\n # This allows to access all django settings even from the settings.py module.\n from django.conf import settings # noqa: E402\n\n hiddenimports = list(settings.INSTALLED_APPS)\n\n # Do not fail script when settings does not have such attributes.\n if hasattr(settings, 'TEMPLATE_CONTEXT_PROCESSORS'):\n hiddenimports += list(settings.TEMPLATE_CONTEXT_PROCESSORS)\n\n if hasattr(settings, 'TEMPLATE_LOADERS'):\n hiddenimports += list(settings.TEMPLATE_LOADERS)\n\n hiddenimports += [settings.ROOT_URLCONF]\n\n def _remove_class(class_name):\n return '.'.join(class_name.split('.')[0:-1])\n\n #-- Changes in Django 1.7.\n\n # Remove class names and keep just modules.\n if 
hasattr(settings, 'AUTHENTICATION_BACKENDS'):\n for cl in settings.AUTHENTICATION_BACKENDS:\n cl = _remove_class(cl)\n hiddenimports.append(cl)\n if hasattr(settings, 'DEFAULT_FILE_STORAGE'):\n cl = _remove_class(settings.DEFAULT_FILE_STORAGE)\n hiddenimports.append(cl)\n if hasattr(settings, 'FILE_UPLOAD_HANDLERS'):\n for cl in settings.FILE_UPLOAD_HANDLERS:\n cl = _remove_class(cl)\n hiddenimports.append(cl)\n if hasattr(settings, 'MIDDLEWARE_CLASSES'):\n for cl in settings.MIDDLEWARE_CLASSES:\n cl = _remove_class(cl)\n hiddenimports.append(cl)\n # Templates is a dict:\n if hasattr(settings, 'TEMPLATES'):\n for templ in settings.TEMPLATES:\n backend = _remove_class(templ['BACKEND'])\n hiddenimports += backend\n # Include context_processors.\n if hasattr(templ, 'OPTIONS'):\n if hasattr(templ['OPTIONS'], 'context_processors'):\n # Context processors are functions - strip last word.\n mods = templ['OPTIONS']['context_processors']\n mods = [_remove_class(x) for x in mods]\n hiddenimports += mods\n # Include database backends - it is a dict.\n for v in settings.DATABASES.values():\n hiddenimports.append(v['ENGINE'])\n\n # Add templatetags and context processors for each installed app.\n for app in settings.INSTALLED_APPS:\n app_templatetag_module = app + '.templatetags'\n app_ctx_proc_module = app + '.context_processors'\n hiddenimports.append(app_templatetag_module)\n hiddenimports += hookutils.collect_submodules(app_templatetag_module)\n hiddenimports.append(app_ctx_proc_module)\n\n # Deduplicate imports.\n hiddenimports = list(set(hiddenimports))\n\n # Return the hidden imports\n return hiddenimports\n\n\ndef django_find_root_dir():\n \"\"\"\n Return path to directory (top-level Python package) that contains main django files. Return None if no directory\n was detected.\n\n Main Django project directory contain files like '__init__.py', 'settings.py' and 'url.py'.\n\n In Django 1.4+ the script 'manage.py' is not in the directory with 'settings.py' but usually one level up. We\n need to detect this special case too.\n \"\"\"\n # 'PyInstaller.config' cannot be imported as other top-level modules.\n from PyInstaller.config import CONF\n\n # Get the directory with manage.py. Manage.py is supplied to PyInstaller as the first main executable script.\n manage_py = CONF['main_script']\n manage_dir = os.path.dirname(os.path.abspath(manage_py))\n\n # Get the Django root directory. The directory that contains settings.py and url.py. 
It could be the directory\n # containing manage.py or any of its subdirectories.\n settings_dir = None\n files = set(os.listdir(manage_dir))\n if ('settings.py' in files or 'settings' in files) and 'urls.py' in files:\n settings_dir = manage_dir\n else:\n for f in files:\n if os.path.isdir(os.path.join(manage_dir, f)):\n subfiles = os.listdir(os.path.join(manage_dir, f))\n # Subdirectory contains critical files.\n if ('settings.py' in subfiles or 'settings' in subfiles) and 'urls.py' in subfiles:\n settings_dir = os.path.join(manage_dir, f)\n break # Find the first directory.\n\n return settings_dir\n", "path": "PyInstaller/utils/hooks/django.py"}, {"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2021-2023, PyInstaller Development Team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: Apache-2.0\n#-----------------------------------------------------------------------------\n#\n# This rthook overrides pkgutil.iter_modules with custom implementation that uses PyInstaller's PyiFrozenImporter to\n# list sub-modules embedded in the PYZ archive. The non-embedded modules (binary extensions, or .pyc modules in\n# noarchive build) are handled by original pkgutil iter_modules implementation (and consequently, python's FileFinder).\n#\n# The preferred way of adding support for iter_modules would be adding non-standard iter_modules() method to\n# PyiFrozenImporter itself. However, that seems to work only for path entry finders (for use with sys.path_hooks), while\n# PyInstaller's PyiFrozenImporter is registered as meta path finders (for use with sys.meta_path). Turning\n# PyiFrozenImporter into path entry finder, would seemingly require the latter to support on-filesystem resources\n# (e.g., extension modules) in addition to PYZ-embedded ones.\n#\n# Therefore, we instead opt for overriding pkgutil.iter_modules with custom implementation that augments the output of\n# original implementation with contents of PYZ archive from PyiFrozenImporter's TOC.\n\n\ndef _pyi_rthook():\n import pathlib\n import pkgutil\n import sys\n\n from pyimod02_importers import PyiFrozenImporter\n from _pyi_rth_utils import is_macos_app_bundle\n\n _orig_pkgutil_iter_modules = pkgutil.iter_modules\n\n def _pyi_pkgutil_iter_modules(path=None, prefix=''):\n # Use original implementation to discover on-filesystem modules (binary extensions in regular builds, or both\n # binary extensions and compiled pyc modules in noarchive debug builds).\n yield from _orig_pkgutil_iter_modules(path, prefix)\n\n # Find the instance of PyInstaller's PyiFrozenImporter.\n for importer in pkgutil.iter_importers():\n if isinstance(importer, PyiFrozenImporter):\n break\n else:\n return\n\n if path is None:\n # Search for all top-level packages/modules in the PyiFrozenImporter's prefix tree.\n for entry_name, entry_data in importer.toc_tree.items():\n # Package nodes have dict for data, module nodes (leaves) have (empty) strings.\n is_pkg = isinstance(entry_data, dict)\n yield pkgutil.ModuleInfo(importer, prefix + entry_name, is_pkg)\n else:\n # Fully resolve sys._MEIPASS, in order to avoid path mis-matches when the given search paths also contain\n # symbolic links and are already fully resolved. 
See #6537 for an example of such a problem with onefile\n # build on macOS, where the temporary directory is placed under /var, which is actually a symbolic link\n # to /private/var.\n MEIPASS = pathlib.Path(sys._MEIPASS).resolve()\n\n # For macOS .app bundles, the \"true\" sys._MEIPASS is `name.app/Contents/Frameworks`, but due to\n # cross-linking, we must also consider `name.app/Contents/Resources`. See #7884.\n if is_macos_app_bundle:\n ALT_MEIPASS = (pathlib.Path(sys._MEIPASS).parent / \"Resources\").resolve()\n\n # Process all given paths\n seen_pkg_prefices = set()\n for pkg_path in path:\n # Fully resolve the given path, in case it contains symbolic links.\n pkg_path = pathlib.Path(pkg_path).resolve()\n\n # Try to compute package prefix, which is the remainder of the given path, relative to the sys._MEIPASS.\n pkg_prefix = None\n try:\n pkg_prefix = pkg_path.relative_to(MEIPASS)\n except ValueError: # ValueError: 'a' is not in the subpath of 'b'\n pass\n\n # For macOS .app bundle, try the alternative sys._MEIPASS\n if pkg_prefix is None and is_macos_app_bundle:\n try:\n pkg_prefix = pkg_path.relative_to(ALT_MEIPASS)\n except ValueError:\n pass\n\n # Given path is outside of sys._MEIPASS; ignore it.\n if pkg_prefix is None:\n continue\n\n # If we are given multiple paths and they are either duplicated or resolve to the same package prefix,\n # prevent duplication.\n if pkg_prefix in seen_pkg_prefices:\n continue\n seen_pkg_prefices.add(pkg_prefix)\n\n # Traverse the PyiFrozenImporter's prefix tree using components of the relative package path, starting\n # at the tree root. This implicitly handles the case where the given path was actually sys._MEIPASS\n # itself, as in this case pkg_prefix is pathlib.Path(\".\") with empty parts tuple.\n tree_node = importer.toc_tree\n for pkg_name_part in pkg_prefix.parts:\n tree_node = tree_node.get(pkg_name_part)\n if tree_node is None:\n tree_node = {}\n break\n\n # List entries from the target node.\n for entry_name, entry_data in tree_node.items():\n is_pkg = isinstance(entry_data, dict)\n yield pkgutil.ModuleInfo(importer, prefix + entry_name, is_pkg)\n\n pkgutil.iter_modules = _pyi_pkgutil_iter_modules\n\n\n_pyi_rthook()\ndel _pyi_rthook\n", "path": "PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py"}], "after_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (c) 2005-2023, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n# ----------------------------------------------------------------------------\nimport os\n\nfrom PyInstaller import isolated\n\n\[email protected]\ndef django_dottedstring_imports(django_root_dir):\n \"\"\"\n An isolated helper that returns list of all Django dependencies, parsed from the `mysite.settings` module.\n\n NOTE: With newer version of Django this is most likely the part of PyInstaller that will be broken.\n\n Tested with Django 2.2\n \"\"\"\n\n import sys\n import os\n\n import PyInstaller.utils.misc\n from PyInstaller.utils import hooks as hookutils\n\n # Extra search paths to add to sys.path:\n # - parent directory of the django_root_dir\n # - django_root_dir itself; often, Django users do not specify absolute imports in the settings module.\n search_paths = [\n 
PyInstaller.utils.misc.get_path_to_toplevel_modules(django_root_dir),\n django_root_dir,\n ]\n sys.path += search_paths\n\n # Set the path to project's settings module\n default_settings_module = os.path.basename(django_root_dir) + '.settings'\n settings_module = os.environ.get('DJANGO_SETTINGS_MODULE', default_settings_module)\n os.environ['DJANGO_SETTINGS_MODULE'] = settings_module\n\n # Calling django.setup() avoids the exception AppRegistryNotReady() and also reads the user settings\n # from DJANGO_SETTINGS_MODULE.\n # https://stackoverflow.com/questions/24793351/django-appregistrynotready\n import django # noqa: E402\n\n django.setup()\n\n # This allows to access all django settings even from the settings.py module.\n from django.conf import settings # noqa: E402\n\n hiddenimports = list(settings.INSTALLED_APPS)\n\n # Do not fail script when settings does not have such attributes.\n if hasattr(settings, 'TEMPLATE_CONTEXT_PROCESSORS'):\n hiddenimports += list(settings.TEMPLATE_CONTEXT_PROCESSORS)\n\n if hasattr(settings, 'TEMPLATE_LOADERS'):\n hiddenimports += list(settings.TEMPLATE_LOADERS)\n\n hiddenimports += [settings.ROOT_URLCONF]\n\n def _remove_class(class_name):\n return '.'.join(class_name.split('.')[0:-1])\n\n #-- Changes in Django 1.7.\n\n # Remove class names and keep just modules.\n if hasattr(settings, 'AUTHENTICATION_BACKENDS'):\n for cl in settings.AUTHENTICATION_BACKENDS:\n cl = _remove_class(cl)\n hiddenimports.append(cl)\n if hasattr(settings, 'DEFAULT_FILE_STORAGE'):\n cl = _remove_class(settings.DEFAULT_FILE_STORAGE)\n hiddenimports.append(cl)\n if hasattr(settings, 'FILE_UPLOAD_HANDLERS'):\n for cl in settings.FILE_UPLOAD_HANDLERS:\n cl = _remove_class(cl)\n hiddenimports.append(cl)\n if hasattr(settings, 'MIDDLEWARE_CLASSES'):\n for cl in settings.MIDDLEWARE_CLASSES:\n cl = _remove_class(cl)\n hiddenimports.append(cl)\n # Templates is a dict:\n if hasattr(settings, 'TEMPLATES'):\n for templ in settings.TEMPLATES:\n backend = _remove_class(templ['BACKEND'])\n hiddenimports.append(backend)\n # Include context_processors.\n if hasattr(templ, 'OPTIONS'):\n if hasattr(templ['OPTIONS'], 'context_processors'):\n # Context processors are functions - strip last word.\n mods = templ['OPTIONS']['context_processors']\n mods = [_remove_class(x) for x in mods]\n hiddenimports += mods\n # Include database backends - it is a dict.\n for v in settings.DATABASES.values():\n hiddenimports.append(v['ENGINE'])\n\n # Add templatetags and context processors for each installed app.\n for app in settings.INSTALLED_APPS:\n app_templatetag_module = app + '.templatetags'\n app_ctx_proc_module = app + '.context_processors'\n hiddenimports.append(app_templatetag_module)\n hiddenimports += hookutils.collect_submodules(app_templatetag_module)\n hiddenimports.append(app_ctx_proc_module)\n\n # Deduplicate imports.\n hiddenimports = list(set(hiddenimports))\n\n # Return the hidden imports\n return hiddenimports\n\n\ndef django_find_root_dir():\n \"\"\"\n Return path to directory (top-level Python package) that contains main django files. Return None if no directory\n was detected.\n\n Main Django project directory contain files like '__init__.py', 'settings.py' and 'url.py'.\n\n In Django 1.4+ the script 'manage.py' is not in the directory with 'settings.py' but usually one level up. We\n need to detect this special case too.\n \"\"\"\n # 'PyInstaller.config' cannot be imported as other top-level modules.\n from PyInstaller.config import CONF\n\n # Get the directory with manage.py. 
Manage.py is supplied to PyInstaller as the first main executable script.\n manage_py = CONF['main_script']\n manage_dir = os.path.dirname(os.path.abspath(manage_py))\n\n # Get the Django root directory. The directory that contains settings.py and url.py. It could be the directory\n # containing manage.py or any of its subdirectories.\n settings_dir = None\n files = set(os.listdir(manage_dir))\n if ('settings.py' in files or 'settings' in files) and 'urls.py' in files:\n settings_dir = manage_dir\n else:\n for f in files:\n if os.path.isdir(os.path.join(manage_dir, f)):\n subfiles = os.listdir(os.path.join(manage_dir, f))\n # Subdirectory contains critical files.\n if ('settings.py' in subfiles or 'settings' in subfiles) and 'urls.py' in subfiles:\n settings_dir = os.path.join(manage_dir, f)\n break # Find the first directory.\n\n return settings_dir\n", "path": "PyInstaller/utils/hooks/django.py"}, {"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2021-2023, PyInstaller Development Team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: Apache-2.0\n#-----------------------------------------------------------------------------\n#\n# This rthook overrides pkgutil.iter_modules with custom implementation that uses PyInstaller's PyiFrozenImporter to\n# list sub-modules embedded in the PYZ archive. The non-embedded modules (binary extensions, or .pyc modules in\n# noarchive build) are handled by original pkgutil iter_modules implementation (and consequently, python's FileFinder).\n#\n# The preferred way of adding support for iter_modules would be adding non-standard iter_modules() method to\n# PyiFrozenImporter itself. However, that seems to work only for path entry finders (for use with sys.path_hooks), while\n# PyInstaller's PyiFrozenImporter is registered as meta path finders (for use with sys.meta_path). 
Turning\n# PyiFrozenImporter into path entry finder, would seemingly require the latter to support on-filesystem resources\n# (e.g., extension modules) in addition to PYZ-embedded ones.\n#\n# Therefore, we instead opt for overriding pkgutil.iter_modules with custom implementation that augments the output of\n# original implementation with contents of PYZ archive from PyiFrozenImporter's TOC.\n\n\ndef _pyi_rthook():\n import pathlib\n import pkgutil\n import sys\n\n from pyimod02_importers import PyiFrozenImporter\n from _pyi_rth_utils import is_macos_app_bundle\n\n _orig_pkgutil_iter_modules = pkgutil.iter_modules\n\n def _pyi_pkgutil_iter_modules(path=None, prefix=''):\n # Use original implementation to discover on-filesystem modules (binary extensions in regular builds, or both\n # binary extensions and compiled pyc modules in noarchive debug builds).\n yield from _orig_pkgutil_iter_modules(path, prefix)\n\n # Find the instance of PyInstaller's PyiFrozenImporter.\n for importer in pkgutil.iter_importers():\n if isinstance(importer, PyiFrozenImporter):\n break\n else:\n return\n\n if path is None:\n # Search for all top-level packages/modules in the PyiFrozenImporter's prefix tree.\n for entry_name, entry_data in importer.toc_tree.items():\n # Package nodes have dict for data, module nodes (leaves) have (empty) strings.\n is_pkg = isinstance(entry_data, dict)\n yield pkgutil.ModuleInfo(importer, prefix + entry_name, is_pkg)\n else:\n # Fully resolve sys._MEIPASS, in order to avoid path mis-matches when the given search paths also contain\n # symbolic links and are already fully resolved. See #6537 for an example of such a problem with onefile\n # build on macOS, where the temporary directory is placed under /var, which is actually a symbolic link\n # to /private/var.\n MEIPASS = pathlib.Path(sys._MEIPASS).resolve()\n\n # For macOS .app bundles, the \"true\" sys._MEIPASS is `name.app/Contents/Frameworks`, but due to\n # cross-linking, we must also consider `name.app/Contents/Resources`. See #7884.\n if is_macos_app_bundle:\n ALT_MEIPASS = (pathlib.Path(sys._MEIPASS).parent / \"Resources\").resolve()\n\n # Process all given paths\n seen_pkg_prefices = set()\n for pkg_path in path:\n # Fully resolve the given path, in case it contains symbolic links.\n pkg_path = pathlib.Path(pkg_path).resolve()\n\n # Try to compute package prefix, which is the remainder of the given path, relative to the sys._MEIPASS.\n pkg_prefix = None\n try:\n pkg_prefix = pkg_path.relative_to(MEIPASS)\n except ValueError: # ValueError: 'a' is not in the subpath of 'b'\n pass\n\n # For macOS .app bundle, try the alternative sys._MEIPASS\n if pkg_prefix is None and is_macos_app_bundle:\n try:\n pkg_prefix = pkg_path.relative_to(ALT_MEIPASS)\n except ValueError:\n pass\n\n # Given path is outside of sys._MEIPASS; ignore it.\n if pkg_prefix is None:\n continue\n\n # If we are given multiple paths and they are either duplicated or resolve to the same package prefix,\n # prevent duplication.\n if pkg_prefix in seen_pkg_prefices:\n continue\n seen_pkg_prefices.add(pkg_prefix)\n\n # Traverse the PyiFrozenImporter's prefix tree using components of the relative package path, starting\n # at the tree root. 
This implicitly handles the case where the given path was actually sys._MEIPASS\n # itself, as in this case pkg_prefix is pathlib.Path(\".\") with empty parts tuple.\n tree_node = importer.toc_tree\n for pkg_name_part in pkg_prefix.parts:\n tree_node = tree_node.get(pkg_name_part)\n if not isinstance(tree_node, dict):\n # This check handles two cases:\n # a) path does not exist (`tree_node` is `None`)\n # b) path corresponds to a module instead of a package (`tree_node` is a leaf node (`str`)).\n tree_node = {}\n break\n\n # List entries from the target node.\n for entry_name, entry_data in tree_node.items():\n is_pkg = isinstance(entry_data, dict)\n yield pkgutil.ModuleInfo(importer, prefix + entry_name, is_pkg)\n\n pkgutil.iter_modules = _pyi_pkgutil_iter_modules\n\n\n_pyi_rthook()\ndel _pyi_rthook\n", "path": "PyInstaller/hooks/rthooks/pyi_rth_pkgutil.py"}]} | 3,935 | 319 |
gh_patches_debug_26453 | rasdani/github-patches | git_diff | StackStorm__st2-4666 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
st2 python3 interpreter is falling back to importing a python2 lib that is incompatible with python3
##### SUMMARY
One of my libraries is trying to import `cassandra.cluster`. That library in turn imports `from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait as wait_futures`; `concurrent.futures`, as I understand it, is built into Python 3 (and also ships as a separate backport library for Python 2 compatibility).
##### ISSUE TYPE
- Bug Report
##### STACKSTORM VERSION
```
# st2 --version
st2 3.0.0, on Python 2.7.6
```
##### OS / ENVIRONMENT / INSTALL METHOD
Docker container running Ubuntu 14.04
##### STEPS TO REPRODUCE
Create a python runner in your python3-specific pack. Inside the runner import cassandra libs and just create an object.
```
from cassandra.cluster import Cluster
cluster = Cluster()
```
##### EXPECTED RESULTS
I expect the library to import and the object to initialize
##### ACTUAL RESULTS
The st2 Python 3 interpreter falls back to the Python 2 copy of the library when importing it, and it throws an exception similar to:
```
File \"/opt/stackstorm/packs/ostk_common/actions/lib/ostkdbs.py\", line 2, in <module>
from cassandra.cluster import Cluster
File \"/opt/stackstorm/virtualenvs/ostk_common/lib/python3.5/site-packages/cassandra/cluster.py\", line 23, in <module>
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait as wait_futures
File \"/opt/stackstorm/st2/lib/python2.7/site-packages/concurrent/futures/__init__.py\", line 8, in <module>
from concurrent.futures._base import (FIRST_COMPLETED,
File \"/opt/stackstorm/st2/lib/python2.7/site-packages/concurrent/futures/_base.py\", line 414
raise exception_type, self._exception, self._traceback
^
SyntaxError: invalid syntax
```
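As a hedged diagnostic sketch for the failing action (paths and pack name are environment-specific assumptions), the snippet below locates which copy of concurrent.futures the interpreter would load, without executing it:

```python
import os
import sys
from importlib.util import find_spec

print(sys.version)                        # should report Python 3.x inside the pack virtualenv
print(os.environ.get("PYTHONPATH", ""))   # ordering here decides which copy wins
spec = find_spec("concurrent.futures")
print(spec.origin if spec else "not found")
# In the broken setup the origin points into
# /opt/stackstorm/st2/lib/python2.7/site-packages/concurrent/futures/__init__.py
```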
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `st2common/st2common/util/sandboxing.py`
Content:
```
1 # Copyright 2019 Extreme Networks, Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Utility functions for our sandboxing model which is implemented on top of separate processes and
17 virtual environments.
18 """
19
20 from __future__ import absolute_import
21
22 import os
23 import sys
24 import fnmatch
25 from distutils.sysconfig import get_python_lib
26
27 from oslo_config import cfg
28
29 from st2common.constants.pack import SYSTEM_PACK_NAMES
30 from st2common.content.utils import get_pack_base_path
31
32 __all__ = [
33 'get_sandbox_python_binary_path',
34 'get_sandbox_python_path',
35 'get_sandbox_python_path_for_python_action',
36 'get_sandbox_path',
37 'get_sandbox_virtualenv_path',
38
39 'is_pack_virtualenv_using_python3'
40 ]
41
42
43 def get_sandbox_python_binary_path(pack=None):
44 """
45 Return path to the Python binary for the provided pack.
46
47 :param pack: Pack name.
48 :type pack: ``str``
49 """
50 system_base_path = cfg.CONF.system.base_path
51 virtualenv_path = os.path.join(system_base_path, 'virtualenvs', pack)
52
53 if pack in SYSTEM_PACK_NAMES:
54 # Use system python for "packs" and "core" actions
55 python_path = sys.executable
56 else:
57 python_path = os.path.join(virtualenv_path, 'bin/python')
58
59 return python_path
60
61
62 def get_sandbox_path(virtualenv_path):
63 """
64 Return PATH environment variable value for the sandboxed environment.
65
66 This function makes sure that virtualenv/bin directory is in the path and has precedence over
67 the global PATH values.
68
69 Note: This function needs to be called from the parent process (one which is spawning a
70 sandboxed process).
71 """
72 sandbox_path = []
73
74 parent_path = os.environ.get('PATH', '')
75 if not virtualenv_path:
76 return parent_path
77
78 parent_path = parent_path.split(':')
79 parent_path = [path for path in parent_path if path]
80
81 # Add virtualenv bin directory
82 virtualenv_bin_path = os.path.join(virtualenv_path, 'bin/')
83 sandbox_path.append(virtualenv_bin_path)
84 sandbox_path.extend(parent_path)
85
86 sandbox_path = ':'.join(sandbox_path)
87 return sandbox_path
88
89
90 def get_sandbox_python_path(inherit_from_parent=True, inherit_parent_virtualenv=True):
91 """
92 Return PYTHONPATH environment variable value for the new sandboxed environment.
93
94 This function takes into account if the current (parent) process is running under virtualenv
95 and other things like that.
96
97 Note: This function needs to be called from the parent process (one which is spawning a
98 sandboxed process).
99
100 :param inherit_from_parent: True to inheir PYTHONPATH from the current process.
101 :type inherit_from_parent: ``str``
102
103 :param inherit_parent_virtualenv: True to inherit virtualenv path if the current process is
104 running inside virtual environment.
105 :type inherit_parent_virtualenv: ``str``
106 """
107 sandbox_python_path = []
108 parent_python_path = os.environ.get('PYTHONPATH', '')
109
110 parent_python_path = parent_python_path.split(':')
111 parent_python_path = [path for path in parent_python_path if path]
112
113 if inherit_from_parent:
114 sandbox_python_path.extend(parent_python_path)
115
116 if inherit_parent_virtualenv and hasattr(sys, 'real_prefix'):
117 # We are running inside virtualenv
118 site_packages_dir = get_python_lib()
119
120 sys_prefix = os.path.abspath(sys.prefix)
121 assert sys_prefix in site_packages_dir
122
123 sandbox_python_path.append(site_packages_dir)
124
125 sandbox_python_path = ':'.join(sandbox_python_path)
126 sandbox_python_path = ':' + sandbox_python_path
127 return sandbox_python_path
128
129
130 def get_sandbox_python_path_for_python_action(pack, inherit_from_parent=True,
131 inherit_parent_virtualenv=True):
132 """
133 Return sandbox PYTHONPATH for a particular Python runner action.
134
135 Same as get_sandbox_python_path() function, but it's intended to be used for Python runner
136 actions and also takes into account if a pack virtual environment uses Python 3.
137 """
138 sandbox_python_path = get_sandbox_python_path(
139 inherit_from_parent=inherit_from_parent,
140 inherit_parent_virtualenv=inherit_parent_virtualenv)
141
142 pack_base_path = get_pack_base_path(pack_name=pack)
143 virtualenv_path = get_sandbox_virtualenv_path(pack=pack)
144
145 if not virtualenv_path:
146 return sandbox_python_path
147
148 uses_python3, virtualenv_directories = is_pack_virtualenv_using_python3(pack=pack)
149 if uses_python3:
150 # Add Python 3 lib directory (lib/python3.x) in front of the PYTHONPATH. This way we avoid
151 # issues with scripts trying to use packages / modules from Python 2.7 site-packages
152 # directory instead of the versions from Python 3 stdlib.
153 pack_actions_lib_paths = os.path.join(pack_base_path, 'actions/lib/')
154 pack_virtualenv_lib_path = os.path.join(virtualenv_path, 'lib')
155 python3_lib_directory = os.path.join(pack_virtualenv_lib_path, virtualenv_directories[0])
156
157 # Add Python 3 site-packages directory (lib/python3.x/site-packages) in front of the Python
158 # 2.7 system site-packages This is important because we want Python 3 compatible libraries
159 # to be used from the pack virtual environment and not system ones.
160 python3_site_packages_directory = os.path.join(pack_virtualenv_lib_path,
161 virtualenv_directories[0],
162 'site-packages')
163 sandbox_python_path = (python3_lib_directory + ':' + python3_site_packages_directory + ':' +
164 pack_actions_lib_paths + ':' + sandbox_python_path)
165
166 return sandbox_python_path
167
168
169 def is_pack_virtualenv_using_python3(pack):
170 """
171 Return True if a particular pack virtual environment is using Python 3.
172
173 :return: (uses_python3_bool, virtualenv_lib_directories)
174 :rtype: ``tuple``
175 """
176 # If python3.? directory exists in pack virtualenv lib/ path it means Python 3 is used by
177 # that virtual environment and we take that in to account when constructing PYTHONPATH
178 virtualenv_path = get_sandbox_virtualenv_path(pack=pack)
179
180 if virtualenv_path and os.path.isdir(virtualenv_path):
181 pack_virtualenv_lib_path = os.path.join(virtualenv_path, 'lib')
182
183 if not os.path.exists(pack_virtualenv_lib_path):
184 return False, None
185
186 virtualenv_directories = os.listdir(pack_virtualenv_lib_path)
187 virtualenv_directories = [dir_name for dir_name in virtualenv_directories if
188 fnmatch.fnmatch(dir_name, 'python3*')]
189 uses_python3 = bool(virtualenv_directories)
190 else:
191 uses_python3 = False
192 virtualenv_directories = None
193
194 return uses_python3, virtualenv_directories
195
196
197 def get_sandbox_virtualenv_path(pack):
198 """
199 Return a path to the virtual environment for the provided pack.
200 """
201
202 if pack in SYSTEM_PACK_NAMES:
203 virtualenv_path = None
204 else:
205 system_base_path = cfg.CONF.system.base_path
206 virtualenv_path = os.path.join(system_base_path, 'virtualenvs', pack)
207
208 return virtualenv_path
209
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/st2common/st2common/util/sandboxing.py b/st2common/st2common/util/sandboxing.py
--- a/st2common/st2common/util/sandboxing.py
+++ b/st2common/st2common/util/sandboxing.py
@@ -160,8 +160,30 @@
python3_site_packages_directory = os.path.join(pack_virtualenv_lib_path,
virtualenv_directories[0],
'site-packages')
- sandbox_python_path = (python3_lib_directory + ':' + python3_site_packages_directory + ':' +
- pack_actions_lib_paths + ':' + sandbox_python_path)
+
+ # Work around to make sure we also add system lib dir to PYTHONPATH and not just virtualenv
+ # one
+ # NOTE: abc.py is always available in base lib directory which is symlinked to virtualenv
+ # lib directory
+ abc_module_path = os.path.join(python3_lib_directory, 'abc.py')
+ link_path = os.path.realpath(abc_module_path)
+ python3_system_lib_directory = os.path.dirname(link_path)
+
+ if not os.path.exists(python3_system_lib_directory):
+ python3_system_lib_directory = None
+
+ full_sandbox_python_path = []
+
+ # NOTE: Order here is very important for imports to function correctly
+ if python3_lib_directory:
+ full_sandbox_python_path.append(python3_system_lib_directory)
+
+ full_sandbox_python_path.append(python3_lib_directory)
+ full_sandbox_python_path.append(python3_site_packages_directory)
+ full_sandbox_python_path.append(pack_actions_lib_paths)
+ full_sandbox_python_path.append(sandbox_python_path)
+
+ sandbox_python_path = ':'.join(full_sandbox_python_path)
return sandbox_python_path
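A minimal sketch of the symlink trick the fix relies on, assuming a hypothetical pack virtualenv path: abc.py inside a virtualenv's lib/python3.x directory links back to the interpreter's own standard library, so resolving it reveals the system lib directory that must precede any Python 2 paths on PYTHONPATH:

```python
import os

venv_lib = "/opt/stackstorm/virtualenvs/examplepack/lib/python3.6"   # hypothetical pack virtualenv
system_lib = os.path.dirname(os.path.realpath(os.path.join(venv_lib, "abc.py")))
print(system_lib)   # e.g. /usr/lib/python3.6; this directory is prepended to PYTHONPATH
```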
| {"golden_diff": "diff --git a/st2common/st2common/util/sandboxing.py b/st2common/st2common/util/sandboxing.py\n--- a/st2common/st2common/util/sandboxing.py\n+++ b/st2common/st2common/util/sandboxing.py\n@@ -160,8 +160,30 @@\n python3_site_packages_directory = os.path.join(pack_virtualenv_lib_path,\n virtualenv_directories[0],\n 'site-packages')\n- sandbox_python_path = (python3_lib_directory + ':' + python3_site_packages_directory + ':' +\n- pack_actions_lib_paths + ':' + sandbox_python_path)\n+\n+ # Work around to make sure we also add system lib dir to PYTHONPATH and not just virtualenv\n+ # one\n+ # NOTE: abc.py is always available in base lib directory which is symlinked to virtualenv\n+ # lib directory\n+ abc_module_path = os.path.join(python3_lib_directory, 'abc.py')\n+ link_path = os.path.realpath(abc_module_path)\n+ python3_system_lib_directory = os.path.dirname(link_path)\n+\n+ if not os.path.exists(python3_system_lib_directory):\n+ python3_system_lib_directory = None\n+\n+ full_sandbox_python_path = []\n+\n+ # NOTE: Order here is very important for imports to function correctly\n+ if python3_lib_directory:\n+ full_sandbox_python_path.append(python3_system_lib_directory)\n+\n+ full_sandbox_python_path.append(python3_lib_directory)\n+ full_sandbox_python_path.append(python3_site_packages_directory)\n+ full_sandbox_python_path.append(pack_actions_lib_paths)\n+ full_sandbox_python_path.append(sandbox_python_path)\n+\n+ sandbox_python_path = ':'.join(full_sandbox_python_path)\n \n return sandbox_python_path\n", "issue": "st2 python3 interpreter is falling back to importing a python2 lib that is incompatible with python3\n##### SUMMARY\r\n\r\n One of my libraries is trying to import `cassandra.cluster`. That library is trying to import `from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait as wait_futures` - which, as I understand, is built into python3 (and a separate library for compatibility with python3)\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### STACKSTORM VERSION\r\n```\r\n# st2 --version\r\nst2 3.0.0, on Python 2.7.6\r\n```\r\n##### OS / ENVIRONMENT / INSTALL METHOD\r\nDocker container running Ubuntu 14.04\r\n\r\n##### STEPS TO REPRODUCE\r\nCreate a python runner in your python3-specific pack. 
Inside the runner import cassandra libs and just create an object.\r\n```\r\nfrom cassandra.cluster import Cluster\r\n cluster = Cluster()\r\n```\r\n\r\n##### EXPECTED RESULTS\r\nI expect the library to import and the object to initialize\r\n\r\n##### ACTUAL RESULTS\r\nst2 python3 falls back to python2 to import the lib and it throws an exception similar to\r\n```\r\n File \\\"/opt/stackstorm/packs/ostk_common/actions/lib/ostkdbs.py\\\", line 2, in <module>\r\n from cassandra.cluster import Cluster\r\n File \\\"/opt/stackstorm/virtualenvs/ostk_common/lib/python3.5/site-packages/cassandra/cluster.py\\\", line 23, in <module>\r\n from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait as wait_futures\r\n File \\\"/opt/stackstorm/st2/lib/python2.7/site-packages/concurrent/futures/__init__.py\\\", line 8, in <module>\r\n from concurrent.futures._base import (FIRST_COMPLETED,\r\n File \\\"/opt/stackstorm/st2/lib/python2.7/site-packages/concurrent/futures/_base.py\\\", line 414\r\n raise exception_type, self._exception, self._traceback\r\n ^\r\nSyntaxError: invalid syntax\r\n```\n", "before_files": [{"content": "# Copyright 2019 Extreme Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nUtility functions for our sandboxing model which is implemented on top of separate processes and\nvirtual environments.\n\"\"\"\n\nfrom __future__ import absolute_import\n\nimport os\nimport sys\nimport fnmatch\nfrom distutils.sysconfig import get_python_lib\n\nfrom oslo_config import cfg\n\nfrom st2common.constants.pack import SYSTEM_PACK_NAMES\nfrom st2common.content.utils import get_pack_base_path\n\n__all__ = [\n 'get_sandbox_python_binary_path',\n 'get_sandbox_python_path',\n 'get_sandbox_python_path_for_python_action',\n 'get_sandbox_path',\n 'get_sandbox_virtualenv_path',\n\n 'is_pack_virtualenv_using_python3'\n]\n\n\ndef get_sandbox_python_binary_path(pack=None):\n \"\"\"\n Return path to the Python binary for the provided pack.\n\n :param pack: Pack name.\n :type pack: ``str``\n \"\"\"\n system_base_path = cfg.CONF.system.base_path\n virtualenv_path = os.path.join(system_base_path, 'virtualenvs', pack)\n\n if pack in SYSTEM_PACK_NAMES:\n # Use system python for \"packs\" and \"core\" actions\n python_path = sys.executable\n else:\n python_path = os.path.join(virtualenv_path, 'bin/python')\n\n return python_path\n\n\ndef get_sandbox_path(virtualenv_path):\n \"\"\"\n Return PATH environment variable value for the sandboxed environment.\n\n This function makes sure that virtualenv/bin directory is in the path and has precedence over\n the global PATH values.\n\n Note: This function needs to be called from the parent process (one which is spawning a\n sandboxed process).\n \"\"\"\n sandbox_path = []\n\n parent_path = os.environ.get('PATH', '')\n if not virtualenv_path:\n return parent_path\n\n parent_path = parent_path.split(':')\n parent_path = [path for path in parent_path if path]\n\n # Add virtualenv bin directory\n virtualenv_bin_path = 
os.path.join(virtualenv_path, 'bin/')\n sandbox_path.append(virtualenv_bin_path)\n sandbox_path.extend(parent_path)\n\n sandbox_path = ':'.join(sandbox_path)\n return sandbox_path\n\n\ndef get_sandbox_python_path(inherit_from_parent=True, inherit_parent_virtualenv=True):\n \"\"\"\n Return PYTHONPATH environment variable value for the new sandboxed environment.\n\n This function takes into account if the current (parent) process is running under virtualenv\n and other things like that.\n\n Note: This function needs to be called from the parent process (one which is spawning a\n sandboxed process).\n\n :param inherit_from_parent: True to inheir PYTHONPATH from the current process.\n :type inherit_from_parent: ``str``\n\n :param inherit_parent_virtualenv: True to inherit virtualenv path if the current process is\n running inside virtual environment.\n :type inherit_parent_virtualenv: ``str``\n \"\"\"\n sandbox_python_path = []\n parent_python_path = os.environ.get('PYTHONPATH', '')\n\n parent_python_path = parent_python_path.split(':')\n parent_python_path = [path for path in parent_python_path if path]\n\n if inherit_from_parent:\n sandbox_python_path.extend(parent_python_path)\n\n if inherit_parent_virtualenv and hasattr(sys, 'real_prefix'):\n # We are running inside virtualenv\n site_packages_dir = get_python_lib()\n\n sys_prefix = os.path.abspath(sys.prefix)\n assert sys_prefix in site_packages_dir\n\n sandbox_python_path.append(site_packages_dir)\n\n sandbox_python_path = ':'.join(sandbox_python_path)\n sandbox_python_path = ':' + sandbox_python_path\n return sandbox_python_path\n\n\ndef get_sandbox_python_path_for_python_action(pack, inherit_from_parent=True,\n inherit_parent_virtualenv=True):\n \"\"\"\n Return sandbox PYTHONPATH for a particular Python runner action.\n\n Same as get_sandbox_python_path() function, but it's intended to be used for Python runner\n actions and also takes into account if a pack virtual environment uses Python 3.\n \"\"\"\n sandbox_python_path = get_sandbox_python_path(\n inherit_from_parent=inherit_from_parent,\n inherit_parent_virtualenv=inherit_parent_virtualenv)\n\n pack_base_path = get_pack_base_path(pack_name=pack)\n virtualenv_path = get_sandbox_virtualenv_path(pack=pack)\n\n if not virtualenv_path:\n return sandbox_python_path\n\n uses_python3, virtualenv_directories = is_pack_virtualenv_using_python3(pack=pack)\n if uses_python3:\n # Add Python 3 lib directory (lib/python3.x) in front of the PYTHONPATH. 
This way we avoid\n # issues with scripts trying to use packages / modules from Python 2.7 site-packages\n # directory instead of the versions from Python 3 stdlib.\n pack_actions_lib_paths = os.path.join(pack_base_path, 'actions/lib/')\n pack_virtualenv_lib_path = os.path.join(virtualenv_path, 'lib')\n python3_lib_directory = os.path.join(pack_virtualenv_lib_path, virtualenv_directories[0])\n\n # Add Python 3 site-packages directory (lib/python3.x/site-packages) in front of the Python\n # 2.7 system site-packages This is important because we want Python 3 compatible libraries\n # to be used from the pack virtual environment and not system ones.\n python3_site_packages_directory = os.path.join(pack_virtualenv_lib_path,\n virtualenv_directories[0],\n 'site-packages')\n sandbox_python_path = (python3_lib_directory + ':' + python3_site_packages_directory + ':' +\n pack_actions_lib_paths + ':' + sandbox_python_path)\n\n return sandbox_python_path\n\n\ndef is_pack_virtualenv_using_python3(pack):\n \"\"\"\n Return True if a particular pack virtual environment is using Python 3.\n\n :return: (uses_python3_bool, virtualenv_lib_directories)\n :rtype: ``tuple``\n \"\"\"\n # If python3.? directory exists in pack virtualenv lib/ path it means Python 3 is used by\n # that virtual environment and we take that in to account when constructing PYTHONPATH\n virtualenv_path = get_sandbox_virtualenv_path(pack=pack)\n\n if virtualenv_path and os.path.isdir(virtualenv_path):\n pack_virtualenv_lib_path = os.path.join(virtualenv_path, 'lib')\n\n if not os.path.exists(pack_virtualenv_lib_path):\n return False, None\n\n virtualenv_directories = os.listdir(pack_virtualenv_lib_path)\n virtualenv_directories = [dir_name for dir_name in virtualenv_directories if\n fnmatch.fnmatch(dir_name, 'python3*')]\n uses_python3 = bool(virtualenv_directories)\n else:\n uses_python3 = False\n virtualenv_directories = None\n\n return uses_python3, virtualenv_directories\n\n\ndef get_sandbox_virtualenv_path(pack):\n \"\"\"\n Return a path to the virtual environment for the provided pack.\n \"\"\"\n\n if pack in SYSTEM_PACK_NAMES:\n virtualenv_path = None\n else:\n system_base_path = cfg.CONF.system.base_path\n virtualenv_path = os.path.join(system_base_path, 'virtualenvs', pack)\n\n return virtualenv_path\n", "path": "st2common/st2common/util/sandboxing.py"}], "after_files": [{"content": "# Copyright 2019 Extreme Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nUtility functions for our sandboxing model which is implemented on top of separate processes and\nvirtual environments.\n\"\"\"\n\nfrom __future__ import absolute_import\n\nimport os\nimport sys\nimport fnmatch\nfrom distutils.sysconfig import get_python_lib\n\nfrom oslo_config import cfg\n\nfrom st2common.constants.pack import SYSTEM_PACK_NAMES\nfrom st2common.content.utils import get_pack_base_path\n\n__all__ = [\n 'get_sandbox_python_binary_path',\n 'get_sandbox_python_path',\n 'get_sandbox_python_path_for_python_action',\n 'get_sandbox_path',\n 
'get_sandbox_virtualenv_path',\n\n 'is_pack_virtualenv_using_python3'\n]\n\n\ndef get_sandbox_python_binary_path(pack=None):\n \"\"\"\n Return path to the Python binary for the provided pack.\n\n :param pack: Pack name.\n :type pack: ``str``\n \"\"\"\n system_base_path = cfg.CONF.system.base_path\n virtualenv_path = os.path.join(system_base_path, 'virtualenvs', pack)\n\n if pack in SYSTEM_PACK_NAMES:\n # Use system python for \"packs\" and \"core\" actions\n python_path = sys.executable\n else:\n python_path = os.path.join(virtualenv_path, 'bin/python')\n\n return python_path\n\n\ndef get_sandbox_path(virtualenv_path):\n \"\"\"\n Return PATH environment variable value for the sandboxed environment.\n\n This function makes sure that virtualenv/bin directory is in the path and has precedence over\n the global PATH values.\n\n Note: This function needs to be called from the parent process (one which is spawning a\n sandboxed process).\n \"\"\"\n sandbox_path = []\n\n parent_path = os.environ.get('PATH', '')\n if not virtualenv_path:\n return parent_path\n\n parent_path = parent_path.split(':')\n parent_path = [path for path in parent_path if path]\n\n # Add virtualenv bin directory\n virtualenv_bin_path = os.path.join(virtualenv_path, 'bin/')\n sandbox_path.append(virtualenv_bin_path)\n sandbox_path.extend(parent_path)\n\n sandbox_path = ':'.join(sandbox_path)\n return sandbox_path\n\n\ndef get_sandbox_python_path(inherit_from_parent=True, inherit_parent_virtualenv=True):\n \"\"\"\n Return PYTHONPATH environment variable value for the new sandboxed environment.\n\n This function takes into account if the current (parent) process is running under virtualenv\n and other things like that.\n\n Note: This function needs to be called from the parent process (one which is spawning a\n sandboxed process).\n\n :param inherit_from_parent: True to inheir PYTHONPATH from the current process.\n :type inherit_from_parent: ``str``\n\n :param inherit_parent_virtualenv: True to inherit virtualenv path if the current process is\n running inside virtual environment.\n :type inherit_parent_virtualenv: ``str``\n \"\"\"\n sandbox_python_path = []\n parent_python_path = os.environ.get('PYTHONPATH', '')\n\n parent_python_path = parent_python_path.split(':')\n parent_python_path = [path for path in parent_python_path if path]\n\n if inherit_from_parent:\n sandbox_python_path.extend(parent_python_path)\n\n if inherit_parent_virtualenv and hasattr(sys, 'real_prefix'):\n # We are running inside virtualenv\n site_packages_dir = get_python_lib()\n\n sys_prefix = os.path.abspath(sys.prefix)\n assert sys_prefix in site_packages_dir\n\n sandbox_python_path.append(site_packages_dir)\n\n sandbox_python_path = ':'.join(sandbox_python_path)\n sandbox_python_path = ':' + sandbox_python_path\n return sandbox_python_path\n\n\ndef get_sandbox_python_path_for_python_action(pack, inherit_from_parent=True,\n inherit_parent_virtualenv=True):\n \"\"\"\n Return sandbox PYTHONPATH for a particular Python runner action.\n\n Same as get_sandbox_python_path() function, but it's intended to be used for Python runner\n actions and also takes into account if a pack virtual environment uses Python 3.\n \"\"\"\n sandbox_python_path = get_sandbox_python_path(\n inherit_from_parent=inherit_from_parent,\n inherit_parent_virtualenv=inherit_parent_virtualenv)\n\n pack_base_path = get_pack_base_path(pack_name=pack)\n virtualenv_path = get_sandbox_virtualenv_path(pack=pack)\n\n if not virtualenv_path:\n return sandbox_python_path\n\n uses_python3, 
virtualenv_directories = is_pack_virtualenv_using_python3(pack=pack)\n if uses_python3:\n # Add Python 3 lib directory (lib/python3.x) in front of the PYTHONPATH. This way we avoid\n # issues with scripts trying to use packages / modules from Python 2.7 site-packages\n # directory instead of the versions from Python 3 stdlib.\n pack_actions_lib_paths = os.path.join(pack_base_path, 'actions/lib/')\n pack_virtualenv_lib_path = os.path.join(virtualenv_path, 'lib')\n python3_lib_directory = os.path.join(pack_virtualenv_lib_path, virtualenv_directories[0])\n\n # Add Python 3 site-packages directory (lib/python3.x/site-packages) in front of the Python\n # 2.7 system site-packages This is important because we want Python 3 compatible libraries\n # to be used from the pack virtual environment and not system ones.\n python3_site_packages_directory = os.path.join(pack_virtualenv_lib_path,\n virtualenv_directories[0],\n 'site-packages')\n\n # Work around to make sure we also add system lib dir to PYTHONPATH and not just virtualenv\n # one\n # NOTE: abc.py is always available in base lib directory which is symlinked to virtualenv\n # lib directory\n abc_module_path = os.path.join(python3_lib_directory, 'abc.py')\n link_path = os.path.realpath(abc_module_path)\n python3_system_lib_directory = os.path.dirname(link_path)\n\n if not os.path.exists(python3_system_lib_directory):\n python3_system_lib_directory = None\n\n full_sandbox_python_path = []\n\n # NOTE: Order here is very important for imports to function correctly\n if python3_lib_directory:\n full_sandbox_python_path.append(python3_system_lib_directory)\n\n full_sandbox_python_path.append(python3_lib_directory)\n full_sandbox_python_path.append(python3_site_packages_directory)\n full_sandbox_python_path.append(pack_actions_lib_paths)\n full_sandbox_python_path.append(sandbox_python_path)\n\n sandbox_python_path = ':'.join(full_sandbox_python_path)\n\n return sandbox_python_path\n\n\ndef is_pack_virtualenv_using_python3(pack):\n \"\"\"\n Return True if a particular pack virtual environment is using Python 3.\n\n :return: (uses_python3_bool, virtualenv_lib_directories)\n :rtype: ``tuple``\n \"\"\"\n # If python3.? directory exists in pack virtualenv lib/ path it means Python 3 is used by\n # that virtual environment and we take that in to account when constructing PYTHONPATH\n virtualenv_path = get_sandbox_virtualenv_path(pack=pack)\n\n if virtualenv_path and os.path.isdir(virtualenv_path):\n pack_virtualenv_lib_path = os.path.join(virtualenv_path, 'lib')\n\n if not os.path.exists(pack_virtualenv_lib_path):\n return False, None\n\n virtualenv_directories = os.listdir(pack_virtualenv_lib_path)\n virtualenv_directories = [dir_name for dir_name in virtualenv_directories if\n fnmatch.fnmatch(dir_name, 'python3*')]\n uses_python3 = bool(virtualenv_directories)\n else:\n uses_python3 = False\n virtualenv_directories = None\n\n return uses_python3, virtualenv_directories\n\n\ndef get_sandbox_virtualenv_path(pack):\n \"\"\"\n Return a path to the virtual environment for the provided pack.\n \"\"\"\n\n if pack in SYSTEM_PACK_NAMES:\n virtualenv_path = None\n else:\n system_base_path = cfg.CONF.system.base_path\n virtualenv_path = os.path.join(system_base_path, 'virtualenvs', pack)\n\n return virtualenv_path\n", "path": "st2common/st2common/util/sandboxing.py"}]} | 2,911 | 392 |
gh_patches_debug_12266 | rasdani/github-patches | git_diff | mindsdb__mindsdb-1799 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Connection to CockroachDB is not possible
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mindsdb/integrations/postgres/postgres.py`
Content:
```
1 from contextlib import closing
2 import pg8000
3
4 from lightwood.api import dtype
5 from mindsdb.integrations.base import Integration
6 from mindsdb.utilities.log import log
7
8
9 class PostgreSQLConnectionChecker:
10 def __init__(self, **kwargs):
11 self.host = kwargs.get('host')
12 self.port = kwargs.get('port')
13 self.user = kwargs.get('user')
14 self.password = kwargs.get('password')
15 self.database = kwargs.get('database', 'postgres')
16
17 def _get_connection(self):
18 return pg8000.connect(
19 database=self.database,
20 user=self.user,
21 password=self.password,
22 host=self.host,
23 port=self.port
24 )
25
26 def check_connection(self):
27 try:
28 con = self._get_connection()
29 with closing(con) as con:
30 con.run('select 1;')
31 connected = True
32 except Exception:
33 connected = False
34 return connected
35
36
37 class PostgreSQL(Integration, PostgreSQLConnectionChecker):
38 def __init__(self, config, name, db_info):
39 super().__init__(config, name)
40 self.user = db_info.get('user')
41 self.password = db_info.get('password')
42 self.host = db_info.get('host')
43 self.port = db_info.get('port')
44 self.database = db_info.get('database', 'postgres')
45
46 def _to_postgres_table(self, dtype_dict, predicted_cols, columns):
47 subtype_map = {
48 dtype.integer: ' int8',
49 dtype.float: 'float8',
50 dtype.binary: 'bool',
51 dtype.date: 'date',
52 dtype.datetime: 'timestamp',
53 dtype.binary: 'text',
54 dtype.categorical: 'text',
55 dtype.tags: 'text',
56 dtype.image: 'text',
57 dtype.video: 'text',
58 dtype.audio: 'text',
59 dtype.short_text: 'text',
60 dtype.rich_text: 'text',
61 dtype.array: 'text',
62 dtype.quantity: 'text',
63 dtype.tsarray: 'text',
64 'default': 'text'
65 }
66
67 column_declaration = []
68 for name in columns:
69 try:
70 col_subtype = dtype_dict[name]
71 new_type = subtype_map.get(col_subtype, subtype_map.get('default'))
72 column_declaration.append(f' "{name}" {new_type} ')
73 if name in predicted_cols:
74 column_declaration.append(f' "{name}_original" {new_type} ')
75 except Exception as e:
76 log.error(f'Error: can not determine postgres data type for column {name}: {e}')
77
78 return column_declaration
79
80 def _escape_table_name(self, name):
81 return '"' + name.replace('"', '""') + '"'
82
83 def _query(self, query):
84 con = self._get_connection()
85 with closing(con) as con:
86
87 cur = con.cursor()
88 res = True
89 cur.execute(query)
90
91 try:
92 rows = cur.fetchall()
93 keys = [k[0] if isinstance(k[0], str) else k[0].decode('ascii') for k in cur.description]
94 res = [dict(zip(keys, row)) for row in rows]
95 except Exception:
96 pass
97
98 con.commit()
99
100 return res
101
102 def setup(self):
103 user = f"{self.config['api']['mysql']['user']}_{self.name}"
104 password = self.config['api']['mysql']['password']
105 host = self.config['api']['mysql']['host']
106 port = self.config['api']['mysql']['port']
107
108 try:
109 self._query('''
110 DO $$
111 begin
112 if not exists (SELECT 1 FROM pg_extension where extname = 'mysql_fdw') then
113 CREATE EXTENSION mysql_fdw;
114 end if;
115 END
116 $$;
117 ''')
118 except Exception:
119 print('Error: cant find or activate mysql_fdw extension for PostgreSQL.')
120
121 self._query(f'DROP SCHEMA IF EXISTS {self.mindsdb_database} CASCADE')
122
123 self._query(f"DROP USER MAPPING IF EXISTS FOR {self.user} SERVER server_{self.mindsdb_database}")
124
125 self._query(f'DROP SERVER IF EXISTS server_{self.mindsdb_database} CASCADE')
126
127 self._query(f'''
128 CREATE SERVER server_{self.mindsdb_database}
129 FOREIGN DATA WRAPPER mysql_fdw
130 OPTIONS (host '{host}', port '{port}');
131 ''')
132
133 self._query(f'''
134 CREATE USER MAPPING FOR {self.user}
135 SERVER server_{self.mindsdb_database}
136 OPTIONS (username '{user}', password '{password}');
137 ''')
138
139 self._query(f'CREATE SCHEMA {self.mindsdb_database}')
140
141 q = f"""
142 CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.predictors (
143 name text,
144 status text,
145 accuracy text,
146 predict text,
147 select_data_query text,
148 training_options text
149 )
150 SERVER server_{self.mindsdb_database}
151 OPTIONS (dbname 'mindsdb', table_name 'predictors');
152 """
153 self._query(q)
154
155 q = f"""
156 CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.commands (
157 command text
158 ) SERVER server_{self.mindsdb_database}
159 OPTIONS (dbname 'mindsdb', table_name 'commands');
160 """
161 self._query(q)
162
163 def register_predictors(self, model_data_arr):
164 for model_meta in model_data_arr:
165 name = model_meta['name']
166 predict = model_meta['predict']
167 if not isinstance(predict, list):
168 predict = [predict]
169 columns_sql = ','.join(self._to_postgres_table(
170 model_meta['dtype_dict'],
171 predict,
172 list(model_meta['dtype_dict'].keys())
173 ))
174 columns_sql += ',"select_data_query" text'
175 for col in predict:
176 columns_sql += f',"{col}_confidence" float8'
177 if model_meta['dtype_dict'][col] in (dtype.integer, dtype.float):
178 columns_sql += f',"{col}_min" float8'
179 columns_sql += f',"{col}_max" float8'
180 columns_sql += f',"{col}_explain" text'
181
182 self.unregister_predictor(name)
183 q = f"""
184 CREATE FOREIGN TABLE {self.mindsdb_database}.{self._escape_table_name(name)} (
185 {columns_sql}
186 ) SERVER server_{self.mindsdb_database}
187 OPTIONS (dbname 'mindsdb', table_name '{name}');
188 """
189 self._query(q)
190
191 def unregister_predictor(self, name):
192 q = f"""
193 DROP FOREIGN TABLE IF EXISTS {self.mindsdb_database}.{self._escape_table_name(name)};
194 """
195 self._query(q)
196
197 def get_row_count(self, query):
198 q = f"""
199 SELECT COUNT(*) as count
200 FROM ({query}) as query;
201 """
202 result = self._query(q)
203 return result[0]['count']
204
205 def get_tables_list(self):
206 q = """
207 SELECT table_schema, table_name
208 FROM information_schema.tables
209 WHERE table_schema != 'pg_catalog'
210 AND table_schema != 'information_schema'
211 ORDER BY table_schema, table_name
212 """
213 tables_list = self._query(q)
214 tables = [f"{table['table_schema']}.{table['table_name']}" for table in tables_list]
215 return tables
216
217 def get_columns(self, query):
218 q = f"""SELECT * from ({query}) LIMIT 1;"""
219 query_response = self._query(q)
220 if len(query_response) > 0:
221 columns = list(query_response[0].keys())
222 return columns
223 else:
224 return []
225
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mindsdb/integrations/postgres/postgres.py b/mindsdb/integrations/postgres/postgres.py
--- a/mindsdb/integrations/postgres/postgres.py
+++ b/mindsdb/integrations/postgres/postgres.py
@@ -15,12 +15,16 @@
self.database = kwargs.get('database', 'postgres')
def _get_connection(self):
+ additional_args = {}
+ if 'cockroachlabs.cloud' in self.host:
+ additional_args['ssl_context'] = True
return pg8000.connect(
database=self.database,
user=self.user,
password=self.password,
host=self.host,
- port=self.port
+ port=self.port,
+ **additional_args
)
def check_connection(self):
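In effect, the patch enables TLS only when the host is a managed CockroachCloud endpoint. A reduced sketch of the patched connection helper is shown below — it uses `ssl_context=True` exactly as the diff does, and the hostname substring check is the patch's own heuristic, not a general rule:

```python
import pg8000

def get_connection(host, port, user, password, database="postgres"):
    additional_args = {}
    # CockroachCloud endpoints require TLS; plain PostgreSQL hosts keep the old behavior.
    if "cockroachlabs.cloud" in host:
        additional_args["ssl_context"] = True
    return pg8000.connect(
        database=database,
        user=user,
        password=password,
        host=host,
        port=port,
        **additional_args,
    )
```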
| {"golden_diff": "diff --git a/mindsdb/integrations/postgres/postgres.py b/mindsdb/integrations/postgres/postgres.py\n--- a/mindsdb/integrations/postgres/postgres.py\n+++ b/mindsdb/integrations/postgres/postgres.py\n@@ -15,12 +15,16 @@\n self.database = kwargs.get('database', 'postgres')\n \n def _get_connection(self):\n+ additional_args = {}\n+ if 'cockroachlabs.cloud' in self.host:\n+ additional_args['ssl_context'] = True\n return pg8000.connect(\n database=self.database,\n user=self.user,\n password=self.password,\n host=self.host,\n- port=self.port\n+ port=self.port,\n+ **additional_args\n )\n \n def check_connection(self):\n", "issue": "Connection to CocroachDB is not possible\n\n", "before_files": [{"content": "from contextlib import closing\nimport pg8000\n\nfrom lightwood.api import dtype\nfrom mindsdb.integrations.base import Integration\nfrom mindsdb.utilities.log import log\n\n\nclass PostgreSQLConnectionChecker:\n def __init__(self, **kwargs):\n self.host = kwargs.get('host')\n self.port = kwargs.get('port')\n self.user = kwargs.get('user')\n self.password = kwargs.get('password')\n self.database = kwargs.get('database', 'postgres')\n\n def _get_connection(self):\n return pg8000.connect(\n database=self.database,\n user=self.user,\n password=self.password,\n host=self.host,\n port=self.port\n )\n\n def check_connection(self):\n try:\n con = self._get_connection()\n with closing(con) as con:\n con.run('select 1;')\n connected = True\n except Exception:\n connected = False\n return connected\n\n\nclass PostgreSQL(Integration, PostgreSQLConnectionChecker):\n def __init__(self, config, name, db_info):\n super().__init__(config, name)\n self.user = db_info.get('user')\n self.password = db_info.get('password')\n self.host = db_info.get('host')\n self.port = db_info.get('port')\n self.database = db_info.get('database', 'postgres')\n\n def _to_postgres_table(self, dtype_dict, predicted_cols, columns):\n subtype_map = {\n dtype.integer: ' int8',\n dtype.float: 'float8',\n dtype.binary: 'bool',\n dtype.date: 'date',\n dtype.datetime: 'timestamp',\n dtype.binary: 'text',\n dtype.categorical: 'text',\n dtype.tags: 'text',\n dtype.image: 'text',\n dtype.video: 'text',\n dtype.audio: 'text',\n dtype.short_text: 'text',\n dtype.rich_text: 'text',\n dtype.array: 'text',\n dtype.quantity: 'text',\n dtype.tsarray: 'text',\n 'default': 'text'\n }\n\n column_declaration = []\n for name in columns:\n try:\n col_subtype = dtype_dict[name]\n new_type = subtype_map.get(col_subtype, subtype_map.get('default'))\n column_declaration.append(f' \"{name}\" {new_type} ')\n if name in predicted_cols:\n column_declaration.append(f' \"{name}_original\" {new_type} ')\n except Exception as e:\n log.error(f'Error: can not determine postgres data type for column {name}: {e}')\n\n return column_declaration\n\n def _escape_table_name(self, name):\n return '\"' + name.replace('\"', '\"\"') + '\"'\n\n def _query(self, query):\n con = self._get_connection()\n with closing(con) as con:\n\n cur = con.cursor()\n res = True\n cur.execute(query)\n\n try:\n rows = cur.fetchall()\n keys = [k[0] if isinstance(k[0], str) else k[0].decode('ascii') for k in cur.description]\n res = [dict(zip(keys, row)) for row in rows]\n except Exception:\n pass\n\n con.commit()\n\n return res\n\n def setup(self):\n user = f\"{self.config['api']['mysql']['user']}_{self.name}\"\n password = self.config['api']['mysql']['password']\n host = self.config['api']['mysql']['host']\n port = self.config['api']['mysql']['port']\n\n try:\n self._query('''\n 
DO $$\n begin\n if not exists (SELECT 1 FROM pg_extension where extname = 'mysql_fdw') then\n CREATE EXTENSION mysql_fdw;\n end if;\n END\n $$;\n ''')\n except Exception:\n print('Error: cant find or activate mysql_fdw extension for PostgreSQL.')\n\n self._query(f'DROP SCHEMA IF EXISTS {self.mindsdb_database} CASCADE')\n\n self._query(f\"DROP USER MAPPING IF EXISTS FOR {self.user} SERVER server_{self.mindsdb_database}\")\n\n self._query(f'DROP SERVER IF EXISTS server_{self.mindsdb_database} CASCADE')\n\n self._query(f'''\n CREATE SERVER server_{self.mindsdb_database}\n FOREIGN DATA WRAPPER mysql_fdw\n OPTIONS (host '{host}', port '{port}');\n ''')\n\n self._query(f'''\n CREATE USER MAPPING FOR {self.user}\n SERVER server_{self.mindsdb_database}\n OPTIONS (username '{user}', password '{password}');\n ''')\n\n self._query(f'CREATE SCHEMA {self.mindsdb_database}')\n\n q = f\"\"\"\n CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.predictors (\n name text,\n status text,\n accuracy text,\n predict text,\n select_data_query text,\n training_options text\n )\n SERVER server_{self.mindsdb_database}\n OPTIONS (dbname 'mindsdb', table_name 'predictors');\n \"\"\"\n self._query(q)\n\n q = f\"\"\"\n CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.commands (\n command text\n ) SERVER server_{self.mindsdb_database}\n OPTIONS (dbname 'mindsdb', table_name 'commands');\n \"\"\"\n self._query(q)\n\n def register_predictors(self, model_data_arr):\n for model_meta in model_data_arr:\n name = model_meta['name']\n predict = model_meta['predict']\n if not isinstance(predict, list):\n predict = [predict]\n columns_sql = ','.join(self._to_postgres_table(\n model_meta['dtype_dict'],\n predict,\n list(model_meta['dtype_dict'].keys())\n ))\n columns_sql += ',\"select_data_query\" text'\n for col in predict:\n columns_sql += f',\"{col}_confidence\" float8'\n if model_meta['dtype_dict'][col] in (dtype.integer, dtype.float):\n columns_sql += f',\"{col}_min\" float8'\n columns_sql += f',\"{col}_max\" float8'\n columns_sql += f',\"{col}_explain\" text'\n\n self.unregister_predictor(name)\n q = f\"\"\"\n CREATE FOREIGN TABLE {self.mindsdb_database}.{self._escape_table_name(name)} (\n {columns_sql}\n ) SERVER server_{self.mindsdb_database}\n OPTIONS (dbname 'mindsdb', table_name '{name}');\n \"\"\"\n self._query(q)\n\n def unregister_predictor(self, name):\n q = f\"\"\"\n DROP FOREIGN TABLE IF EXISTS {self.mindsdb_database}.{self._escape_table_name(name)};\n \"\"\"\n self._query(q)\n\n def get_row_count(self, query):\n q = f\"\"\"\n SELECT COUNT(*) as count\n FROM ({query}) as query;\n \"\"\"\n result = self._query(q)\n return result[0]['count']\n\n def get_tables_list(self):\n q = \"\"\"\n SELECT table_schema, table_name\n FROM information_schema.tables\n WHERE table_schema != 'pg_catalog'\n AND table_schema != 'information_schema'\n ORDER BY table_schema, table_name\n \"\"\"\n tables_list = self._query(q)\n tables = [f\"{table['table_schema']}.{table['table_name']}\" for table in tables_list]\n return tables\n\n def get_columns(self, query):\n q = f\"\"\"SELECT * from ({query}) LIMIT 1;\"\"\"\n query_response = self._query(q)\n if len(query_response) > 0:\n columns = list(query_response[0].keys())\n return columns\n else:\n return []\n", "path": "mindsdb/integrations/postgres/postgres.py"}], "after_files": [{"content": "from contextlib import closing\nimport pg8000\n\nfrom lightwood.api import dtype\nfrom mindsdb.integrations.base import Integration\nfrom mindsdb.utilities.log import log\n\n\nclass 
PostgreSQLConnectionChecker:\n def __init__(self, **kwargs):\n self.host = kwargs.get('host')\n self.port = kwargs.get('port')\n self.user = kwargs.get('user')\n self.password = kwargs.get('password')\n self.database = kwargs.get('database', 'postgres')\n\n def _get_connection(self):\n additional_args = {}\n if 'cockroachlabs.cloud' in self.host:\n additional_args['ssl_context'] = True\n return pg8000.connect(\n database=self.database,\n user=self.user,\n password=self.password,\n host=self.host,\n port=self.port,\n **additional_args\n )\n\n def check_connection(self):\n try:\n con = self._get_connection()\n with closing(con) as con:\n con.run('select 1;')\n connected = True\n except Exception:\n connected = False\n return connected\n\n\nclass PostgreSQL(Integration, PostgreSQLConnectionChecker):\n def __init__(self, config, name, db_info):\n super().__init__(config, name)\n self.user = db_info.get('user')\n self.password = db_info.get('password')\n self.host = db_info.get('host')\n self.port = db_info.get('port')\n self.database = db_info.get('database', 'postgres')\n\n def _to_postgres_table(self, dtype_dict, predicted_cols, columns):\n subtype_map = {\n dtype.integer: ' int8',\n dtype.float: 'float8',\n dtype.binary: 'bool',\n dtype.date: 'date',\n dtype.datetime: 'timestamp',\n dtype.binary: 'text',\n dtype.categorical: 'text',\n dtype.tags: 'text',\n dtype.image: 'text',\n dtype.video: 'text',\n dtype.audio: 'text',\n dtype.short_text: 'text',\n dtype.rich_text: 'text',\n dtype.array: 'text',\n dtype.quantity: 'text',\n dtype.tsarray: 'text',\n 'default': 'text'\n }\n\n column_declaration = []\n for name in columns:\n try:\n col_subtype = dtype_dict[name]\n new_type = subtype_map.get(col_subtype, subtype_map.get('default'))\n column_declaration.append(f' \"{name}\" {new_type} ')\n if name in predicted_cols:\n column_declaration.append(f' \"{name}_original\" {new_type} ')\n except Exception as e:\n log.error(f'Error: can not determine postgres data type for column {name}: {e}')\n\n return column_declaration\n\n def _escape_table_name(self, name):\n return '\"' + name.replace('\"', '\"\"') + '\"'\n\n def _query(self, query):\n con = self._get_connection()\n with closing(con) as con:\n\n cur = con.cursor()\n res = True\n cur.execute(query)\n\n try:\n rows = cur.fetchall()\n keys = [k[0] if isinstance(k[0], str) else k[0].decode('ascii') for k in cur.description]\n res = [dict(zip(keys, row)) for row in rows]\n except Exception:\n pass\n\n con.commit()\n\n return res\n\n def setup(self):\n user = f\"{self.config['api']['mysql']['user']}_{self.name}\"\n password = self.config['api']['mysql']['password']\n host = self.config['api']['mysql']['host']\n port = self.config['api']['mysql']['port']\n\n try:\n self._query('''\n DO $$\n begin\n if not exists (SELECT 1 FROM pg_extension where extname = 'mysql_fdw') then\n CREATE EXTENSION mysql_fdw;\n end if;\n END\n $$;\n ''')\n except Exception:\n print('Error: cant find or activate mysql_fdw extension for PostgreSQL.')\n\n self._query(f'DROP SCHEMA IF EXISTS {self.mindsdb_database} CASCADE')\n\n self._query(f\"DROP USER MAPPING IF EXISTS FOR {self.user} SERVER server_{self.mindsdb_database}\")\n\n self._query(f'DROP SERVER IF EXISTS server_{self.mindsdb_database} CASCADE')\n\n self._query(f'''\n CREATE SERVER server_{self.mindsdb_database}\n FOREIGN DATA WRAPPER mysql_fdw\n OPTIONS (host '{host}', port '{port}');\n ''')\n\n self._query(f'''\n CREATE USER MAPPING FOR {self.user}\n SERVER server_{self.mindsdb_database}\n OPTIONS (username '{user}', 
password '{password}');\n ''')\n\n self._query(f'CREATE SCHEMA {self.mindsdb_database}')\n\n q = f\"\"\"\n CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.predictors (\n name text,\n status text,\n accuracy text,\n predict text,\n select_data_query text,\n training_options text\n )\n SERVER server_{self.mindsdb_database}\n OPTIONS (dbname 'mindsdb', table_name 'predictors');\n \"\"\"\n self._query(q)\n\n q = f\"\"\"\n CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.commands (\n command text\n ) SERVER server_{self.mindsdb_database}\n OPTIONS (dbname 'mindsdb', table_name 'commands');\n \"\"\"\n self._query(q)\n\n def register_predictors(self, model_data_arr):\n for model_meta in model_data_arr:\n name = model_meta['name']\n predict = model_meta['predict']\n if not isinstance(predict, list):\n predict = [predict]\n columns_sql = ','.join(self._to_postgres_table(\n model_meta['dtype_dict'],\n predict,\n list(model_meta['dtype_dict'].keys())\n ))\n columns_sql += ',\"select_data_query\" text'\n for col in predict:\n columns_sql += f',\"{col}_confidence\" float8'\n if model_meta['dtype_dict'][col] in (dtype.integer, dtype.float):\n columns_sql += f',\"{col}_min\" float8'\n columns_sql += f',\"{col}_max\" float8'\n columns_sql += f',\"{col}_explain\" text'\n\n self.unregister_predictor(name)\n q = f\"\"\"\n CREATE FOREIGN TABLE {self.mindsdb_database}.{self._escape_table_name(name)} (\n {columns_sql}\n ) SERVER server_{self.mindsdb_database}\n OPTIONS (dbname 'mindsdb', table_name '{name}');\n \"\"\"\n self._query(q)\n\n def unregister_predictor(self, name):\n q = f\"\"\"\n DROP FOREIGN TABLE IF EXISTS {self.mindsdb_database}.{self._escape_table_name(name)};\n \"\"\"\n self._query(q)\n\n def get_row_count(self, query):\n q = f\"\"\"\n SELECT COUNT(*) as count\n FROM ({query}) as query;\n \"\"\"\n result = self._query(q)\n return result[0]['count']\n\n def get_tables_list(self):\n q = \"\"\"\n SELECT table_schema, table_name\n FROM information_schema.tables\n WHERE table_schema != 'pg_catalog'\n AND table_schema != 'information_schema'\n ORDER BY table_schema, table_name\n \"\"\"\n tables_list = self._query(q)\n tables = [f\"{table['table_schema']}.{table['table_name']}\" for table in tables_list]\n return tables\n\n def get_columns(self, query):\n q = f\"\"\"SELECT * from ({query}) LIMIT 1;\"\"\"\n query_response = self._query(q)\n if len(query_response) > 0:\n columns = list(query_response[0].keys())\n return columns\n else:\n return []\n", "path": "mindsdb/integrations/postgres/postgres.py"}]} | 2,476 | 176 |
gh_patches_debug_8280 | rasdani/github-patches | git_diff | spyder-ide__spyder-11533 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Crash when trying to run a Python file in a project from the explorer pane - Spyder 4.0.1
<!--- **PLEASE READ:** When submitting here, please ensure you've completed the following checklist and checked the boxes to confirm. Issue reports without it may be closed. Thanks! --->
### Issue Report Checklist
* [y] Searched the [issues page](https://github.com/spyder-ide/spyder/issues?q=is%3Aissue) for similar reports
* [ y] Read the relevant sections of the [Spyder Troubleshooting Guide](https://github.com/spyder-ide/spyder/wiki/Troubleshooting-Guide-and-FAQ) and followed its advice
* [ y] Reproduced the issue after updating with ``conda update spyder`` (or ``pip``, if not using Anaconda)
* [n/a ] Could not reproduce inside ``jupyter qtconsole`` (if console-related)
* [ y] Tried basic troubleshooting (if a bug/error)
* [y ] Restarted Spyder
* [ n] Reset preferences with ``spyder --reset``; I reinstalled everything from scratch using `Anaconda3-2019.10-Windows-x86_64`
* [y ] Reinstalled the latest version of [Anaconda](https://www.anaconda.com/download/) updated following: https://docs.anaconda.com/anaconda/install/update-version/
* [n/a] Tried the other applicable steps from the Troubleshooting Guide
* [y ] Completed the **Problem Description**, **Steps to Reproduce** and **Version** sections below
## Problem Description
Crash when trying to run a Python file in a project
Same as in: https://github.com/spyder-ide/spyder/issues/10590
### What steps reproduce the problem?
1. right click on a python file in the left pane
2. choose "run" from the context menu
### What is the expected output? What do you see instead?
The file should execute, but instead I get an error message.
### Paste Traceback/Error Below (if applicable)
<!--- Copy from error dialog or View > Panes > Internal Console --->
```python-traceback
File "C:\Users\...\AppData\Local\Continuum\anaconda3\envs\py37\lib\site-packages\spyder\plugins\explorer\plugin.py", line 100, in <lambda>
False, True))
TypeError: run_script() missing 1 required positional argument: 'console_namespace'
```
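The traceback boils down to a caller that was not updated when `run_script` gained an extra required parameter. A minimal, self-contained reproduction of that failure mode (hypothetical stand-in names mirroring the error message, not Spyder's actual code) looks like this:

```python
# Hypothetical stand-in, not Spyder's implementation: a runner whose signature
# gained an extra required parameter named 'console_namespace'.
def run_script(fname, wdir, args, debug, post_mortem,
               current_client, clear_variables, console_namespace):
    print(f"running {fname!r} with console_namespace={console_namespace}")

# A caller still written against the old seven-argument signature:
run = lambda fname: run_script(fname, "", "", False, False, False, True)

try:
    run("example.py")
except TypeError as exc:
    print(exc)  # run_script() missing 1 required positional argument: 'console_namespace'

# Supplying the new argument (as the eventual fix does) resolves the call:
run_script("example.py", "", "", False, False, False, True, False)
```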
## Versions
<!--- You can get this information from Help > About Spyder...
or (if Spyder won't launch) the "conda list" command
from the Anaconda Prompt/Terminal/command line. --->
* Spyder version: 4.0.1
* Python version: 3.7.6
* Qt version: 5.12.5
* PyQt version: 5.12.3
* Operating System name/version: Windows 10
__N.B. In practice I am using a py37 (Python 3.7.6) virtual environment selected from the Anaconda Navigator, with Spyder launched from there.__


### Dependencies
<!--- Please go to the menu entry Help > Dependencies,
press the Copy to clipboard button and paste below --->
```
atomicwrites >=1.2.0 : 1.3.0 (OK)
chardet >=2.0.0 : 3.0.4 (OK)
cloudpickle >=0.5.0 : 1.2.2 (OK)
diff_match_patch >=20181111 : 20181111 (OK)
intervaltree : None (OK)
IPython >=4.0 : 7.12.0 (OK)
jedi =0.14.1 : 0.14.1 (OK)
nbconvert >=4.0 : 5.6.1 (OK)
numpydoc >=0.6.0 : 0.9.2 (OK)
pexpect >=4.4.0 : 4.8.0 (OK)
pickleshare >=0.4 : 0.7.5 (OK)
psutil >=0.3 : 5.6.7 (OK)
pygments >=2.0 : 2.5.2 (OK)
pylint >=0.25 : 2.4.4 (OK)
pyls >=0.31.2;<0.32.0 : 0.31.7 (OK)
zmq >=17 : 18.1.1 (OK)
qdarkstyle >=2.7 : 2.8 (OK)
qtawesome >=0.5.7 : 0.6.1 (OK)
qtconsole >=4.6.0 : 4.6.0 (OK)
qtpy >=1.5.0 : 1.9.0 (OK)
rtree >=0.8.3 : 0.9.4 (OK)
sphinx >=0.6.6 : 2.4.0 (OK)
spyder_kernels >=1.8.1;<2.0.0: 1.8.1 (OK)
watchdog : None (OK)
cython >=0.21 : None (NOK)
matplotlib >=2.0.0 : 3.1.3 (OK)
numpy >=1.7 : 1.18.1 (OK)
pandas >=0.13.1 : 1.0.1 (OK)
scipy >=0.17.0 : 1.4.1 (OK)
sympy >=0.7.3 : None (NOK)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `spyder/plugins/explorer/plugin.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright © Spyder Project Contributors
4 # Licensed under the terms of the MIT License
5 # (see spyder/__init__.py for details)
6
7 """Files and Directories Explorer Plugin"""
8
9 # pylint: disable=C0103
10 # pylint: disable=R0903
11 # pylint: disable=R0911
12 # pylint: disable=R0201
13
14 # Standard library imports
15 import os.path as osp
16
17 # Third party imports
18 from qtpy.QtWidgets import QVBoxLayout
19
20 # Local imports
21 from spyder.config.base import _
22 from spyder.api.plugins import SpyderPluginWidget
23 from spyder.plugins.explorer.widgets.explorer import ExplorerWidget
24 from spyder.plugins.explorer.confpage import ExplorerConfigPage
25
26
27 class Explorer(SpyderPluginWidget):
28 """File and Directories Explorer DockWidget."""
29
30 CONF_SECTION = 'explorer'
31 CONFIGWIDGET_CLASS = ExplorerConfigPage
32 CONF_FILE = False
33
34 def __init__(self, parent=None):
35 """Initialization."""
36 SpyderPluginWidget.__init__(self, parent)
37
38 visible_columns = self.get_option('visible_columns',
39 default=[0, 3]) # Name & Last Mod
40 self.fileexplorer = ExplorerWidget(
41 self,
42 name_filters=self.get_option('name_filters'),
43 show_all=self.get_option('show_all'),
44 show_icontext=self.get_option('show_icontext'),
45 options_button=self.options_button,
46 single_click_to_open=self.get_option('single_click_to_open'),
47 file_associations=self.get_option('file_associations',
48 default={}),
49 visible_columns=visible_columns,
50 )
51 layout = QVBoxLayout()
52 layout.addWidget(self.fileexplorer)
53 self.setLayout(layout)
54 self.fileexplorer.sig_option_changed.connect(
55 self._update_config_options)
56
57 def _update_config_options(self, option, value):
58 """Update the config options of the explorer to make them permanent."""
59 self.set_option(option, value)
60
61 #------ SpyderPluginWidget API ---------------------------------------------
62 def get_plugin_title(self):
63 """Return widget title"""
64 return _("Files")
65
66 def get_focus_widget(self):
67 """
68 Return the widget to give focus to when
69 this plugin's dockwidget is raised on top-level
70 """
71 return self.fileexplorer.treewidget
72
73 def get_plugin_actions(self):
74 """Return a list of actions related to plugin"""
75 return self.fileexplorer.treewidget.common_actions
76
77 def register_plugin(self):
78 """Register plugin in Spyder's main window"""
79 ipyconsole = self.main.ipyconsole
80 treewidget = self.fileexplorer.treewidget
81
82 self.add_dockwidget()
83 self.fileexplorer.sig_open_file.connect(self.main.open_file)
84 self.register_widget_shortcuts(treewidget)
85
86 treewidget.sig_edit.connect(self.main.editor.load)
87 treewidget.sig_removed.connect(self.main.editor.removed)
88 treewidget.sig_removed_tree.connect(self.main.editor.removed_tree)
89 treewidget.sig_renamed.connect(self.main.editor.renamed)
90 treewidget.sig_renamed_tree.connect(self.main.editor.renamed_tree)
91 treewidget.sig_create_module.connect(self.main.editor.new)
92 treewidget.sig_new_file.connect(lambda t: self.main.editor.new(text=t))
93 treewidget.sig_open_interpreter.connect(
94 ipyconsole.create_client_from_path)
95 treewidget.redirect_stdio.connect(
96 self.main.redirect_internalshell_stdio)
97 treewidget.sig_run.connect(
98 lambda fname:
99 ipyconsole.run_script(fname, osp.dirname(fname), '', False, False,
100 False, True))
101 treewidget.sig_open_dir.connect(
102 lambda dirname:
103 self.main.workingdirectory.chdir(dirname,
104 refresh_explorer=False,
105 refresh_console=True))
106
107 self.main.editor.open_dir.connect(self.chdir)
108
109 # Signal "set_explorer_cwd(QString)" will refresh only the
110 # contents of path passed by the signal in explorer:
111 self.main.workingdirectory.set_explorer_cwd.connect(
112 lambda directory: self.refresh_plugin(new_path=directory,
113 force_current=True))
114
115 def refresh_plugin(self, new_path=None, force_current=True):
116 """Refresh explorer widget"""
117 self.fileexplorer.treewidget.update_history(new_path)
118 self.fileexplorer.treewidget.refresh(new_path,
119 force_current=force_current)
120
121 def on_first_registration(self):
122 """Action to be performed on first plugin registration."""
123 # TODO: Remove this for spyder 5
124 # self.tabify(self.main.projects)
125 self.tabify(self.main.variableexplorer)
126
127 def apply_plugin_settings(self, options):
128 """Handle preference options update."""
129 method_map = {
130 'file_associations':
131 self.fileexplorer.treewidget.set_file_associations,
132 'single_click_to_open':
133 self.fileexplorer.treewidget.set_single_click_to_open,
134 'name_filters':
135 self.fileexplorer.treewidget.set_name_filters,
136 'show_all':
137 self.fileexplorer.treewidget.toggle_all,
138 'show_icontext':
139 self.fileexplorer.toggle_icontext,
140 }
141 for option in options:
142 if option in method_map:
143 value = self.get_option(option)
144 method = method_map.get(option)
145 method(value)
146 self.fileexplorer.treewidget.update_common_actions()
147
148 #------ Public API ---------------------------------------------------------
149 def chdir(self, directory):
150 """Set working directory"""
151 self.fileexplorer.treewidget.chdir(directory)
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/spyder/plugins/explorer/plugin.py b/spyder/plugins/explorer/plugin.py
--- a/spyder/plugins/explorer/plugin.py
+++ b/spyder/plugins/explorer/plugin.py
@@ -97,7 +97,7 @@
treewidget.sig_run.connect(
lambda fname:
ipyconsole.run_script(fname, osp.dirname(fname), '', False, False,
- False, True))
+ False, True, False))
treewidget.sig_open_dir.connect(
lambda dirname:
self.main.workingdirectory.chdir(dirname,
| {"golden_diff": "diff --git a/spyder/plugins/explorer/plugin.py b/spyder/plugins/explorer/plugin.py\n--- a/spyder/plugins/explorer/plugin.py\n+++ b/spyder/plugins/explorer/plugin.py\n@@ -97,7 +97,7 @@\n treewidget.sig_run.connect(\r\n lambda fname:\r\n ipyconsole.run_script(fname, osp.dirname(fname), '', False, False,\r\n- False, True))\r\n+ False, True, False))\r\n treewidget.sig_open_dir.connect(\r\n lambda dirname:\r\n self.main.workingdirectory.chdir(dirname,\n", "issue": "Crash when trying to run a Python file in a project from the explorer pane - Spyder 4.0.1\n<!--- **PLEASE READ:** When submitting here, please ensure you've completed the following checklist and checked the boxes to confirm. Issue reports without it may be closed. Thanks! --->\r\n\r\n### Issue Report Checklist\r\n\r\n* [y] Searched the [issues page](https://github.com/spyder-ide/spyder/issues?q=is%3Aissue) for similar reports\r\n* [ y] Read the relevant sections of the [Spyder Troubleshooting Guide](https://github.com/spyder-ide/spyder/wiki/Troubleshooting-Guide-and-FAQ) and followed its advice\r\n* [ y] Reproduced the issue after updating with ``conda update spyder`` (or ``pip``, if not using Anaconda)\r\n* [n/a ] Could not reproduce inside ``jupyter qtconsole`` (if console-related)\r\n* [ y] Tried basic troubleshooting (if a bug/error)\r\n * [y ] Restarted Spyder\r\n * [ n] Reset preferences with ``spyder --reset`` I reinstalled everything from scratch using `Anaconda3-2019.10-Windows-x86_64`\r\n * [y ] Reinstalled the latest version of [Anaconda](https://www.anaconda.com/download/) updated following: https://docs.anaconda.com/anaconda/install/update-version/ \r\n * [n/a] Tried the other applicable steps from the Troubleshooting Guide\r\n* [y ] Completed the **Problem Description**, **Steps to Reproduce** and **Version** sections below\r\n\r\n\r\n## Problem Description\r\nCrash when trying to run a Python file in a project\r\nSame as in: https://github.com/spyder-ide/spyder/issues/10590\r\n\r\n\r\n### What steps reproduce the problem?\r\n1. right click on a python file in the left pane\r\n2. choose \"run\" from the context menu\r\n\r\n### What is the expected output? What do you see instead?\r\nfile should execute but instead I get an error message\r\n\r\n\r\n### Paste Traceback/Error Below (if applicable)\r\n<!--- Copy from error dialog or View > Panes > Internal Console --->\r\n\r\n```python-traceback\r\n\r\n File \"C:\\Users\\...\\AppData\\Local\\Continuum\\anaconda3\\envs\\py37\\lib\\site-packages\\spyder\\plugins\\explorer\\plugin.py\", line 100, in <lambda>\r\n False, True))\r\nTypeError: run_script() missing 1 required positional argument: 'console_namespace'\r\n\r\n```\r\n\r\n## Versions\r\n<!--- You can get this information from Help > About Spyder...\r\nor (if Spyder won't launch) the \"conda list\" command\r\nfrom the Anaconda Prompt/Terminal/command line. --->\r\n\r\n* Spyder version: 4.0.1\r\n* Python version: 3.7.6 \r\n* Qt version: 5.12.5\r\n* PyQt version: 5.12.3 \r\n* Operating System name/version: Windows 10\r\n\r\n__N.B. 
In practice I am using a py37 (python 3.7.6) virtual environment selected from the Anaconda Navigator, and with Spyder launched from there.__\r\n\r\n\r\n\r\n\r\n\r\n### Dependencies\r\n<!--- Please go to the menu entry Help > Dependencies,\r\npress the Copy to clipboard button and paste below --->\r\n\r\n```\r\natomicwrites >=1.2.0 : 1.3.0 (OK)\r\nchardet >=2.0.0 : 3.0.4 (OK)\r\ncloudpickle >=0.5.0 : 1.2.2 (OK)\r\ndiff_match_patch >=20181111 : 20181111 (OK)\r\nintervaltree : None (OK)\r\nIPython >=4.0 : 7.12.0 (OK)\r\njedi =0.14.1 : 0.14.1 (OK)\r\nnbconvert >=4.0 : 5.6.1 (OK)\r\nnumpydoc >=0.6.0 : 0.9.2 (OK)\r\npexpect >=4.4.0 : 4.8.0 (OK)\r\npickleshare >=0.4 : 0.7.5 (OK)\r\npsutil >=0.3 : 5.6.7 (OK)\r\npygments >=2.0 : 2.5.2 (OK)\r\npylint >=0.25 : 2.4.4 (OK)\r\npyls >=0.31.2;<0.32.0 : 0.31.7 (OK)\r\nzmq >=17 : 18.1.1 (OK)\r\nqdarkstyle >=2.7 : 2.8 (OK)\r\nqtawesome >=0.5.7 : 0.6.1 (OK)\r\nqtconsole >=4.6.0 : 4.6.0 (OK)\r\nqtpy >=1.5.0 : 1.9.0 (OK)\r\nrtree >=0.8.3 : 0.9.4 (OK)\r\nsphinx >=0.6.6 : 2.4.0 (OK)\r\nspyder_kernels >=1.8.1;<2.0.0: 1.8.1 (OK)\r\nwatchdog : None (OK)\r\ncython >=0.21 : None (NOK)\r\nmatplotlib >=2.0.0 : 3.1.3 (OK)\r\nnumpy >=1.7 : 1.18.1 (OK)\r\npandas >=0.13.1 : 1.0.1 (OK)\r\nscipy >=0.17.0 : 1.4.1 (OK)\r\nsympy >=0.7.3 : None (NOK)\r\n\r\n```\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\r\n#\r\n# Copyright \u00a9 Spyder Project Contributors\r\n# Licensed under the terms of the MIT License\r\n# (see spyder/__init__.py for details)\r\n\r\n\"\"\"Files and Directories Explorer Plugin\"\"\"\r\n\r\n# pylint: disable=C0103\r\n# pylint: disable=R0903\r\n# pylint: disable=R0911\r\n# pylint: disable=R0201\r\n\r\n# Standard library imports\r\nimport os.path as osp\r\n\r\n# Third party imports\r\nfrom qtpy.QtWidgets import QVBoxLayout\r\n\r\n# Local imports\r\nfrom spyder.config.base import _\r\nfrom spyder.api.plugins import SpyderPluginWidget\r\nfrom spyder.plugins.explorer.widgets.explorer import ExplorerWidget\r\nfrom spyder.plugins.explorer.confpage import ExplorerConfigPage\r\n\r\n\r\nclass Explorer(SpyderPluginWidget):\r\n \"\"\"File and Directories Explorer DockWidget.\"\"\"\r\n\r\n CONF_SECTION = 'explorer'\r\n CONFIGWIDGET_CLASS = ExplorerConfigPage\r\n CONF_FILE = False\r\n\r\n def __init__(self, parent=None):\r\n \"\"\"Initialization.\"\"\"\r\n SpyderPluginWidget.__init__(self, parent)\r\n\r\n visible_columns = self.get_option('visible_columns',\r\n default=[0, 3]) # Name & Last Mod\r\n self.fileexplorer = ExplorerWidget(\r\n self,\r\n name_filters=self.get_option('name_filters'),\r\n show_all=self.get_option('show_all'),\r\n show_icontext=self.get_option('show_icontext'),\r\n options_button=self.options_button,\r\n single_click_to_open=self.get_option('single_click_to_open'),\r\n file_associations=self.get_option('file_associations',\r\n default={}),\r\n visible_columns=visible_columns,\r\n )\r\n layout = QVBoxLayout()\r\n layout.addWidget(self.fileexplorer)\r\n self.setLayout(layout)\r\n self.fileexplorer.sig_option_changed.connect(\r\n self._update_config_options)\r\n\r\n def _update_config_options(self, option, value):\r\n \"\"\"Update the config options of the explorer to make them permanent.\"\"\"\r\n self.set_option(option, value)\r\n\r\n #------ SpyderPluginWidget API ---------------------------------------------\r\n def get_plugin_title(self):\r\n \"\"\"Return widget title\"\"\"\r\n return _(\"Files\")\r\n \r\n def get_focus_widget(self):\r\n \"\"\"\r\n Return the widget to give focus to when\r\n this plugin's dockwidget is raised on 
top-level\r\n \"\"\"\r\n return self.fileexplorer.treewidget\r\n \r\n def get_plugin_actions(self):\r\n \"\"\"Return a list of actions related to plugin\"\"\"\r\n return self.fileexplorer.treewidget.common_actions\r\n \r\n def register_plugin(self):\r\n \"\"\"Register plugin in Spyder's main window\"\"\"\r\n ipyconsole = self.main.ipyconsole\r\n treewidget = self.fileexplorer.treewidget\r\n\r\n self.add_dockwidget()\r\n self.fileexplorer.sig_open_file.connect(self.main.open_file)\r\n self.register_widget_shortcuts(treewidget)\r\n\r\n treewidget.sig_edit.connect(self.main.editor.load)\r\n treewidget.sig_removed.connect(self.main.editor.removed)\r\n treewidget.sig_removed_tree.connect(self.main.editor.removed_tree)\r\n treewidget.sig_renamed.connect(self.main.editor.renamed)\r\n treewidget.sig_renamed_tree.connect(self.main.editor.renamed_tree)\r\n treewidget.sig_create_module.connect(self.main.editor.new)\r\n treewidget.sig_new_file.connect(lambda t: self.main.editor.new(text=t))\r\n treewidget.sig_open_interpreter.connect(\r\n ipyconsole.create_client_from_path)\r\n treewidget.redirect_stdio.connect(\r\n self.main.redirect_internalshell_stdio)\r\n treewidget.sig_run.connect(\r\n lambda fname:\r\n ipyconsole.run_script(fname, osp.dirname(fname), '', False, False,\r\n False, True))\r\n treewidget.sig_open_dir.connect(\r\n lambda dirname:\r\n self.main.workingdirectory.chdir(dirname,\r\n refresh_explorer=False,\r\n refresh_console=True))\r\n\r\n self.main.editor.open_dir.connect(self.chdir)\r\n\r\n # Signal \"set_explorer_cwd(QString)\" will refresh only the\r\n # contents of path passed by the signal in explorer:\r\n self.main.workingdirectory.set_explorer_cwd.connect(\r\n lambda directory: self.refresh_plugin(new_path=directory,\r\n force_current=True))\r\n\r\n def refresh_plugin(self, new_path=None, force_current=True):\r\n \"\"\"Refresh explorer widget\"\"\"\r\n self.fileexplorer.treewidget.update_history(new_path)\r\n self.fileexplorer.treewidget.refresh(new_path,\r\n force_current=force_current)\r\n\r\n def on_first_registration(self):\r\n \"\"\"Action to be performed on first plugin registration.\"\"\"\r\n # TODO: Remove this for spyder 5\r\n # self.tabify(self.main.projects)\r\n self.tabify(self.main.variableexplorer)\r\n\r\n def apply_plugin_settings(self, options):\r\n \"\"\"Handle preference options update.\"\"\"\r\n method_map = {\r\n 'file_associations':\r\n self.fileexplorer.treewidget.set_file_associations,\r\n 'single_click_to_open':\r\n self.fileexplorer.treewidget.set_single_click_to_open,\r\n 'name_filters':\r\n self.fileexplorer.treewidget.set_name_filters,\r\n 'show_all':\r\n self.fileexplorer.treewidget.toggle_all,\r\n 'show_icontext':\r\n self.fileexplorer.toggle_icontext,\r\n }\r\n for option in options:\r\n if option in method_map:\r\n value = self.get_option(option)\r\n method = method_map.get(option)\r\n method(value)\r\n self.fileexplorer.treewidget.update_common_actions()\r\n\r\n #------ Public API ---------------------------------------------------------\r\n def chdir(self, directory):\r\n \"\"\"Set working directory\"\"\"\r\n self.fileexplorer.treewidget.chdir(directory)\r\n", "path": "spyder/plugins/explorer/plugin.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\r\n#\r\n# Copyright \u00a9 Spyder Project Contributors\r\n# Licensed under the terms of the MIT License\r\n# (see spyder/__init__.py for details)\r\n\r\n\"\"\"Files and Directories Explorer Plugin\"\"\"\r\n\r\n# pylint: disable=C0103\r\n# pylint: disable=R0903\r\n# pylint: disable=R0911\r\n# 
pylint: disable=R0201\r\n\r\n# Standard library imports\r\nimport os.path as osp\r\n\r\n# Third party imports\r\nfrom qtpy.QtWidgets import QVBoxLayout\r\n\r\n# Local imports\r\nfrom spyder.config.base import _\r\nfrom spyder.api.plugins import SpyderPluginWidget\r\nfrom spyder.plugins.explorer.widgets.explorer import ExplorerWidget\r\nfrom spyder.plugins.explorer.confpage import ExplorerConfigPage\r\n\r\n\r\nclass Explorer(SpyderPluginWidget):\r\n \"\"\"File and Directories Explorer DockWidget.\"\"\"\r\n\r\n CONF_SECTION = 'explorer'\r\n CONFIGWIDGET_CLASS = ExplorerConfigPage\r\n CONF_FILE = False\r\n\r\n def __init__(self, parent=None):\r\n \"\"\"Initialization.\"\"\"\r\n SpyderPluginWidget.__init__(self, parent)\r\n\r\n visible_columns = self.get_option('visible_columns',\r\n default=[0, 3]) # Name & Last Mod\r\n self.fileexplorer = ExplorerWidget(\r\n self,\r\n name_filters=self.get_option('name_filters'),\r\n show_all=self.get_option('show_all'),\r\n show_icontext=self.get_option('show_icontext'),\r\n options_button=self.options_button,\r\n single_click_to_open=self.get_option('single_click_to_open'),\r\n file_associations=self.get_option('file_associations',\r\n default={}),\r\n visible_columns=visible_columns,\r\n )\r\n layout = QVBoxLayout()\r\n layout.addWidget(self.fileexplorer)\r\n self.setLayout(layout)\r\n self.fileexplorer.sig_option_changed.connect(\r\n self._update_config_options)\r\n\r\n def _update_config_options(self, option, value):\r\n \"\"\"Update the config options of the explorer to make them permanent.\"\"\"\r\n self.set_option(option, value)\r\n\r\n #------ SpyderPluginWidget API ---------------------------------------------\r\n def get_plugin_title(self):\r\n \"\"\"Return widget title\"\"\"\r\n return _(\"Files\")\r\n \r\n def get_focus_widget(self):\r\n \"\"\"\r\n Return the widget to give focus to when\r\n this plugin's dockwidget is raised on top-level\r\n \"\"\"\r\n return self.fileexplorer.treewidget\r\n \r\n def get_plugin_actions(self):\r\n \"\"\"Return a list of actions related to plugin\"\"\"\r\n return self.fileexplorer.treewidget.common_actions\r\n \r\n def register_plugin(self):\r\n \"\"\"Register plugin in Spyder's main window\"\"\"\r\n ipyconsole = self.main.ipyconsole\r\n treewidget = self.fileexplorer.treewidget\r\n\r\n self.add_dockwidget()\r\n self.fileexplorer.sig_open_file.connect(self.main.open_file)\r\n self.register_widget_shortcuts(treewidget)\r\n\r\n treewidget.sig_edit.connect(self.main.editor.load)\r\n treewidget.sig_removed.connect(self.main.editor.removed)\r\n treewidget.sig_removed_tree.connect(self.main.editor.removed_tree)\r\n treewidget.sig_renamed.connect(self.main.editor.renamed)\r\n treewidget.sig_renamed_tree.connect(self.main.editor.renamed_tree)\r\n treewidget.sig_create_module.connect(self.main.editor.new)\r\n treewidget.sig_new_file.connect(lambda t: self.main.editor.new(text=t))\r\n treewidget.sig_open_interpreter.connect(\r\n ipyconsole.create_client_from_path)\r\n treewidget.redirect_stdio.connect(\r\n self.main.redirect_internalshell_stdio)\r\n treewidget.sig_run.connect(\r\n lambda fname:\r\n ipyconsole.run_script(fname, osp.dirname(fname), '', False, False,\r\n False, True, False))\r\n treewidget.sig_open_dir.connect(\r\n lambda dirname:\r\n self.main.workingdirectory.chdir(dirname,\r\n refresh_explorer=False,\r\n refresh_console=True))\r\n\r\n self.main.editor.open_dir.connect(self.chdir)\r\n\r\n # Signal \"set_explorer_cwd(QString)\" will refresh only the\r\n # contents of path passed by the signal in 
explorer:\r\n self.main.workingdirectory.set_explorer_cwd.connect(\r\n lambda directory: self.refresh_plugin(new_path=directory,\r\n force_current=True))\r\n\r\n def refresh_plugin(self, new_path=None, force_current=True):\r\n \"\"\"Refresh explorer widget\"\"\"\r\n self.fileexplorer.treewidget.update_history(new_path)\r\n self.fileexplorer.treewidget.refresh(new_path,\r\n force_current=force_current)\r\n\r\n def on_first_registration(self):\r\n \"\"\"Action to be performed on first plugin registration.\"\"\"\r\n # TODO: Remove this for spyder 5\r\n # self.tabify(self.main.projects)\r\n self.tabify(self.main.variableexplorer)\r\n\r\n def apply_plugin_settings(self, options):\r\n \"\"\"Handle preference options update.\"\"\"\r\n method_map = {\r\n 'file_associations':\r\n self.fileexplorer.treewidget.set_file_associations,\r\n 'single_click_to_open':\r\n self.fileexplorer.treewidget.set_single_click_to_open,\r\n 'name_filters':\r\n self.fileexplorer.treewidget.set_name_filters,\r\n 'show_all':\r\n self.fileexplorer.treewidget.toggle_all,\r\n 'show_icontext':\r\n self.fileexplorer.toggle_icontext,\r\n }\r\n for option in options:\r\n if option in method_map:\r\n value = self.get_option(option)\r\n method = method_map.get(option)\r\n method(value)\r\n self.fileexplorer.treewidget.update_common_actions()\r\n\r\n #------ Public API ---------------------------------------------------------\r\n def chdir(self, directory):\r\n \"\"\"Set working directory\"\"\"\r\n self.fileexplorer.treewidget.chdir(directory)\r\n", "path": "spyder/plugins/explorer/plugin.py"}]} | 3,232 | 121 |
gh_patches_debug_1401 | rasdani/github-patches | git_diff | ktbyers__netmiko-1073 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Huawei vrpv8 commit func issue
After committing changes on a Huawei VRPv8 device, the CLI looks like this:
```
[~HUAWEI]dot1x enable
[*HUAWEI]snmp-agent sys-info version all
Warning: SNMPv1/SNMPv2c is not secure, and SNMPv3 in either authentication or privacy mode is recommended.
[*HUAWEI]commit
[~HUAWEI]
```
with the following code:
```
from netmiko import Netmiko
device = {
"host": "10.0.0.3",
"username": "yyy",
"password": "xxx",
"device_type": "huawei_vrpv8",
"session_log": "log_file2.txt"
}
config_commands = ['dot1x enable','snmp-agent sys-info version all']
net_connect = Netmiko(**device)
output = net_connect.send_config_set(config_commands,exit_config_mode=False)
output += net_connect.commit()
print(output)
```
I got this error:
```
Traceback (most recent call last):
File "/home/kafooo/PycharmProjects/nornir_scripts/venv/huawei_netmiko_test.py", line 18, in <module>
output2 = net_connect.commit()
File "/home/kafooo/PycharmProjects/nornir_scripts/venv/lib/python3.6/site-packages/netmiko/huawei/huawei_ssh.py", line 114, in commit
strip_command=False, delay_factor=delay_factor)
File "/home/kafooo/PycharmProjects/nornir_scripts/venv/lib/python3.6/site-packages/netmiko/base_connection.py", line 1206, in send_command_expect
return self.send_command(*args, **kwargs)
File "/home/kafooo/PycharmProjects/nornir_scripts/venv/lib/python3.6/site-packages/netmiko/base_connection.py", line 1188, in send_command
search_pattern))
OSError: Search pattern never detected in send_command_expect: \[\*HUAWEI\]
```
It looks like Netmiko is expecting `[*hostname]` after the commit, but in reality the device prompt is `[~hostname]` after the commit.
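
Until that is fixed, a possible interim workaround is to send the commit by hand and tell Netmiko which prompt to wait for, instead of calling `commit()`. This is only a sketch; it assumes `send_command()` accepts a custom `expect_string`, which current Netmiko releases do:

```python
# Workaround sketch: commit manually with an explicit expect_string, so the
# prompt change from [*HUAWEI] to [~HUAWEI] does not trip the search pattern.
output = net_connect.send_config_set(config_commands, exit_config_mode=False)
output += net_connect.send_command(
    "commit",
    expect_string=r"\]",   # wait for the closing bracket of [~HUAWEI]
    strip_prompt=False,
    strip_command=False,
)
output += net_connect.exit_config_mode()
print(output)
```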
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netmiko/huawei/huawei_ssh.py`
Content:
```
1 from __future__ import print_function
2 from __future__ import unicode_literals
3 import time
4 import re
5 from netmiko.cisco_base_connection import CiscoSSHConnection
6 from netmiko import log
7
8
9 class HuaweiSSH(CiscoSSHConnection):
10 def session_preparation(self):
11 """Prepare the session after the connection has been established."""
12 self._test_channel_read()
13 self.set_base_prompt()
14 self.disable_paging(command="screen-length 0 temporary")
15 # Clear the read buffer
16 time.sleep(0.3 * self.global_delay_factor)
17 self.clear_buffer()
18
19 def config_mode(self, config_command="system-view"):
20 """Enter configuration mode."""
21 return super(HuaweiSSH, self).config_mode(config_command=config_command)
22
23 def exit_config_mode(self, exit_config="return", pattern=r">"):
24 """Exit configuration mode."""
25 return super(HuaweiSSH, self).exit_config_mode(
26 exit_config=exit_config, pattern=pattern
27 )
28
29 def check_config_mode(self, check_string="]"):
30 """Checks whether in configuration mode. Returns a boolean."""
31 return super(HuaweiSSH, self).check_config_mode(check_string=check_string)
32
33 def check_enable_mode(self, *args, **kwargs):
34 """Huawei has no enable mode."""
35 pass
36
37 def enable(self, *args, **kwargs):
38 """Huawei has no enable mode."""
39 return ""
40
41 def exit_enable_mode(self, *args, **kwargs):
42 """Huawei has no enable mode."""
43 return ""
44
45 def set_base_prompt(
46 self, pri_prompt_terminator=">", alt_prompt_terminator="]", delay_factor=1
47 ):
48 """
49 Sets self.base_prompt
50
51 Used as delimiter for stripping of trailing prompt in output.
52
53 Should be set to something that is general and applies in multiple contexts. For Comware
54 this will be the router prompt with < > or [ ] stripped off.
55
56 This will be set on logging in, but not when entering system-view
57 """
58 log.debug("In set_base_prompt")
59 delay_factor = self.select_delay_factor(delay_factor)
60 self.clear_buffer()
61 self.write_channel(self.RETURN)
62 time.sleep(0.5 * delay_factor)
63
64 prompt = self.read_channel()
65 prompt = self.normalize_linefeeds(prompt)
66
67 # If multiple lines in the output take the last line
68 prompt = prompt.split(self.RESPONSE_RETURN)[-1]
69 prompt = prompt.strip()
70
71 # Check that ends with a valid terminator character
72 if not prompt[-1] in (pri_prompt_terminator, alt_prompt_terminator):
73 raise ValueError("Router prompt not found: {0}".format(prompt))
74
75 # Strip off any leading HRP_. characters for USGv5 HA
76 prompt = re.sub(r"^HRP_.", "", prompt, flags=re.M)
77
78 # Strip off leading and trailing terminator
79 prompt = prompt[1:-1]
80 prompt = prompt.strip()
81 self.base_prompt = prompt
82 log.debug("prompt: {0}".format(self.base_prompt))
83
84 return self.base_prompt
85
86 def save_config(self, cmd="save", confirm=False, confirm_response=""):
87 """ Save Config for HuaweiSSH"""
88 return super(HuaweiSSH, self).save_config(cmd=cmd, confirm=confirm)
89
90
91 class HuaweiVrpv8SSH(HuaweiSSH):
92 def commit(self, comment="", delay_factor=1):
93 """
94 Commit the candidate configuration.
95
96 Commit the entered configuration. Raise an error and return the failure
97 if the commit fails.
98
99 default:
100 command_string = commit
101 comment:
102 command_string = commit comment <comment>
103
104 """
105 delay_factor = self.select_delay_factor(delay_factor)
106 error_marker = "Failed to generate committed config"
107 command_string = "commit"
108
109 if comment:
110 command_string += ' comment "{}"'.format(comment)
111
112 output = self.config_mode()
113 output += self.send_command_expect(
114 command_string,
115 strip_prompt=False,
116 strip_command=False,
117 delay_factor=delay_factor,
118 )
119 output += self.exit_config_mode()
120
121 if error_marker in output:
122 raise ValueError(
123 "Commit failed with following errors:\n\n{}".format(output)
124 )
125 return output
126
127 def save_config(self, cmd="", confirm=True, confirm_response=""):
128 """Not Implemented"""
129 raise NotImplementedError
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netmiko/huawei/huawei_ssh.py b/netmiko/huawei/huawei_ssh.py
--- a/netmiko/huawei/huawei_ssh.py
+++ b/netmiko/huawei/huawei_ssh.py
@@ -115,6 +115,7 @@
strip_prompt=False,
strip_command=False,
delay_factor=delay_factor,
+ expect_string=r"]",
)
output += self.exit_config_mode()
| {"golden_diff": "diff --git a/netmiko/huawei/huawei_ssh.py b/netmiko/huawei/huawei_ssh.py\n--- a/netmiko/huawei/huawei_ssh.py\n+++ b/netmiko/huawei/huawei_ssh.py\n@@ -115,6 +115,7 @@\n strip_prompt=False,\n strip_command=False,\n delay_factor=delay_factor,\n+ expect_string=r\"]\",\n )\n output += self.exit_config_mode()\n", "issue": "Huawei vrpv8 commit func issue\nAfter commiting changes on huawei vrpv8, cli on devices look like this: \r\n```\r\n[~HUAWEI]dot1x enable\r\n[*HUAWEI]snmp-agent sys-info version all\r\nWarning: SNMPv1/SNMPv2c is not secure, and SNMPv3 in either authentication or privacy mode is recommended.\r\n[*HUAWEI]commit\r\n[~HUAWEI]\r\n```\r\n\r\n\r\nwith following code: \r\n\r\n```\r\nfrom netmiko import Netmiko\r\n\r\ndevice = {\r\n \"host\": \"10.0.0.3\",\r\n \"username\": \"yyy\",\r\n \"password\": \"xxx\",\r\n \"device_type\": \"huawei_vrpv8\",\r\n \"session_log\": \"log_file2.txt\"\r\n}\r\nconfig_commands = ['dot1x enable','snmp-agent sys-info version all']\r\nnet_connect = Netmiko(**device)\r\n\r\noutput = net_connect.send_config_set(config_commands,exit_config_mode=False)\r\noutput += net_connect.commit()\r\nprint(output)\r\n```\r\n\r\ni got this error: \r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/kafooo/PycharmProjects/nornir_scripts/venv/huawei_netmiko_test.py\", line 18, in <module>\r\n output2 = net_connect.commit()\r\n File \"/home/kafooo/PycharmProjects/nornir_scripts/venv/lib/python3.6/site-packages/netmiko/huawei/huawei_ssh.py\", line 114, in commit\r\n strip_command=False, delay_factor=delay_factor)\r\n File \"/home/kafooo/PycharmProjects/nornir_scripts/venv/lib/python3.6/site-packages/netmiko/base_connection.py\", line 1206, in send_command_expect\r\n return self.send_command(*args, **kwargs)\r\n File \"/home/kafooo/PycharmProjects/nornir_scripts/venv/lib/python3.6/site-packages/netmiko/base_connection.py\", line 1188, in send_command\r\n search_pattern))\r\nOSError: Search pattern never detected in send_command_expect: \\[\\*HUAWEI\\]\r\n```\r\n\r\n\r\nlooks like netmiko is expecting [*hostname] after commit, but in reality there is [~hostname] after commit\n", "before_files": [{"content": "from __future__ import print_function\nfrom __future__ import unicode_literals\nimport time\nimport re\nfrom netmiko.cisco_base_connection import CiscoSSHConnection\nfrom netmiko import log\n\n\nclass HuaweiSSH(CiscoSSHConnection):\n def session_preparation(self):\n \"\"\"Prepare the session after the connection has been established.\"\"\"\n self._test_channel_read()\n self.set_base_prompt()\n self.disable_paging(command=\"screen-length 0 temporary\")\n # Clear the read buffer\n time.sleep(0.3 * self.global_delay_factor)\n self.clear_buffer()\n\n def config_mode(self, config_command=\"system-view\"):\n \"\"\"Enter configuration mode.\"\"\"\n return super(HuaweiSSH, self).config_mode(config_command=config_command)\n\n def exit_config_mode(self, exit_config=\"return\", pattern=r\">\"):\n \"\"\"Exit configuration mode.\"\"\"\n return super(HuaweiSSH, self).exit_config_mode(\n exit_config=exit_config, pattern=pattern\n )\n\n def check_config_mode(self, check_string=\"]\"):\n \"\"\"Checks whether in configuration mode. 
Returns a boolean.\"\"\"\n return super(HuaweiSSH, self).check_config_mode(check_string=check_string)\n\n def check_enable_mode(self, *args, **kwargs):\n \"\"\"Huawei has no enable mode.\"\"\"\n pass\n\n def enable(self, *args, **kwargs):\n \"\"\"Huawei has no enable mode.\"\"\"\n return \"\"\n\n def exit_enable_mode(self, *args, **kwargs):\n \"\"\"Huawei has no enable mode.\"\"\"\n return \"\"\n\n def set_base_prompt(\n self, pri_prompt_terminator=\">\", alt_prompt_terminator=\"]\", delay_factor=1\n ):\n \"\"\"\n Sets self.base_prompt\n\n Used as delimiter for stripping of trailing prompt in output.\n\n Should be set to something that is general and applies in multiple contexts. For Comware\n this will be the router prompt with < > or [ ] stripped off.\n\n This will be set on logging in, but not when entering system-view\n \"\"\"\n log.debug(\"In set_base_prompt\")\n delay_factor = self.select_delay_factor(delay_factor)\n self.clear_buffer()\n self.write_channel(self.RETURN)\n time.sleep(0.5 * delay_factor)\n\n prompt = self.read_channel()\n prompt = self.normalize_linefeeds(prompt)\n\n # If multiple lines in the output take the last line\n prompt = prompt.split(self.RESPONSE_RETURN)[-1]\n prompt = prompt.strip()\n\n # Check that ends with a valid terminator character\n if not prompt[-1] in (pri_prompt_terminator, alt_prompt_terminator):\n raise ValueError(\"Router prompt not found: {0}\".format(prompt))\n\n # Strip off any leading HRP_. characters for USGv5 HA\n prompt = re.sub(r\"^HRP_.\", \"\", prompt, flags=re.M)\n\n # Strip off leading and trailing terminator\n prompt = prompt[1:-1]\n prompt = prompt.strip()\n self.base_prompt = prompt\n log.debug(\"prompt: {0}\".format(self.base_prompt))\n\n return self.base_prompt\n\n def save_config(self, cmd=\"save\", confirm=False, confirm_response=\"\"):\n \"\"\" Save Config for HuaweiSSH\"\"\"\n return super(HuaweiSSH, self).save_config(cmd=cmd, confirm=confirm)\n\n\nclass HuaweiVrpv8SSH(HuaweiSSH):\n def commit(self, comment=\"\", delay_factor=1):\n \"\"\"\n Commit the candidate configuration.\n\n Commit the entered configuration. 
Raise an error and return the failure\n if the commit fails.\n\n default:\n command_string = commit\n comment:\n command_string = commit comment <comment>\n\n \"\"\"\n delay_factor = self.select_delay_factor(delay_factor)\n error_marker = \"Failed to generate committed config\"\n command_string = \"commit\"\n\n if comment:\n command_string += ' comment \"{}\"'.format(comment)\n\n output = self.config_mode()\n output += self.send_command_expect(\n command_string,\n strip_prompt=False,\n strip_command=False,\n delay_factor=delay_factor,\n )\n output += self.exit_config_mode()\n\n if error_marker in output:\n raise ValueError(\n \"Commit failed with following errors:\\n\\n{}\".format(output)\n )\n return output\n\n def save_config(self, cmd=\"\", confirm=True, confirm_response=\"\"):\n \"\"\"Not Implemented\"\"\"\n raise NotImplementedError\n", "path": "netmiko/huawei/huawei_ssh.py"}], "after_files": [{"content": "from __future__ import print_function\nfrom __future__ import unicode_literals\nimport time\nimport re\nfrom netmiko.cisco_base_connection import CiscoSSHConnection\nfrom netmiko import log\n\n\nclass HuaweiSSH(CiscoSSHConnection):\n def session_preparation(self):\n \"\"\"Prepare the session after the connection has been established.\"\"\"\n self._test_channel_read()\n self.set_base_prompt()\n self.disable_paging(command=\"screen-length 0 temporary\")\n # Clear the read buffer\n time.sleep(0.3 * self.global_delay_factor)\n self.clear_buffer()\n\n def config_mode(self, config_command=\"system-view\"):\n \"\"\"Enter configuration mode.\"\"\"\n return super(HuaweiSSH, self).config_mode(config_command=config_command)\n\n def exit_config_mode(self, exit_config=\"return\", pattern=r\">\"):\n \"\"\"Exit configuration mode.\"\"\"\n return super(HuaweiSSH, self).exit_config_mode(\n exit_config=exit_config, pattern=pattern\n )\n\n def check_config_mode(self, check_string=\"]\"):\n \"\"\"Checks whether in configuration mode. Returns a boolean.\"\"\"\n return super(HuaweiSSH, self).check_config_mode(check_string=check_string)\n\n def check_enable_mode(self, *args, **kwargs):\n \"\"\"Huawei has no enable mode.\"\"\"\n pass\n\n def enable(self, *args, **kwargs):\n \"\"\"Huawei has no enable mode.\"\"\"\n return \"\"\n\n def exit_enable_mode(self, *args, **kwargs):\n \"\"\"Huawei has no enable mode.\"\"\"\n return \"\"\n\n def set_base_prompt(\n self, pri_prompt_terminator=\">\", alt_prompt_terminator=\"]\", delay_factor=1\n ):\n \"\"\"\n Sets self.base_prompt\n\n Used as delimiter for stripping of trailing prompt in output.\n\n Should be set to something that is general and applies in multiple contexts. For Comware\n this will be the router prompt with < > or [ ] stripped off.\n\n This will be set on logging in, but not when entering system-view\n \"\"\"\n log.debug(\"In set_base_prompt\")\n delay_factor = self.select_delay_factor(delay_factor)\n self.clear_buffer()\n self.write_channel(self.RETURN)\n time.sleep(0.5 * delay_factor)\n\n prompt = self.read_channel()\n prompt = self.normalize_linefeeds(prompt)\n\n # If multiple lines in the output take the last line\n prompt = prompt.split(self.RESPONSE_RETURN)[-1]\n prompt = prompt.strip()\n\n # Check that ends with a valid terminator character\n if not prompt[-1] in (pri_prompt_terminator, alt_prompt_terminator):\n raise ValueError(\"Router prompt not found: {0}\".format(prompt))\n\n # Strip off any leading HRP_. 
characters for USGv5 HA\n prompt = re.sub(r\"^HRP_.\", \"\", prompt, flags=re.M)\n\n # Strip off leading and trailing terminator\n prompt = prompt[1:-1]\n prompt = prompt.strip()\n self.base_prompt = prompt\n log.debug(\"prompt: {0}\".format(self.base_prompt))\n\n return self.base_prompt\n\n def save_config(self, cmd=\"save\", confirm=False, confirm_response=\"\"):\n \"\"\" Save Config for HuaweiSSH\"\"\"\n return super(HuaweiSSH, self).save_config(cmd=cmd, confirm=confirm)\n\n\nclass HuaweiVrpv8SSH(HuaweiSSH):\n def commit(self, comment=\"\", delay_factor=1):\n \"\"\"\n Commit the candidate configuration.\n\n Commit the entered configuration. Raise an error and return the failure\n if the commit fails.\n\n default:\n command_string = commit\n comment:\n command_string = commit comment <comment>\n\n \"\"\"\n delay_factor = self.select_delay_factor(delay_factor)\n error_marker = \"Failed to generate committed config\"\n command_string = \"commit\"\n\n if comment:\n command_string += ' comment \"{}\"'.format(comment)\n\n output = self.config_mode()\n output += self.send_command_expect(\n command_string,\n strip_prompt=False,\n strip_command=False,\n delay_factor=delay_factor,\n expect_string=r\"]\",\n )\n output += self.exit_config_mode()\n\n if error_marker in output:\n raise ValueError(\n \"Commit failed with following errors:\\n\\n{}\".format(output)\n )\n return output\n\n def save_config(self, cmd=\"\", confirm=True, confirm_response=\"\"):\n \"\"\"Not Implemented\"\"\"\n raise NotImplementedError\n", "path": "netmiko/huawei/huawei_ssh.py"}]} | 1,987 | 104 |
gh_patches_debug_17030 | rasdani/github-patches | git_diff | apache__tvm-6499 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[uTVM] Use an alternative CRC Library
The 3rdparty crc library introduced in https://github.com/apache/incubator-tvm/pull/6334 has a license problem.
We will need to replace it with a new implementation or an alternative library.
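
The routine involved is small enough that a clean-room implementation is also an option. Below is a minimal sketch of the algorithm in Python for reference only; the actual replacement would live in the C runtime, and the variant chosen here (CRC-16/CCITT-FALSE) is an assumption that would have to match whatever the uTVM RPC framing expects:

```python
def crc16_ccitt_false(data: bytes, seed: int = 0xFFFF) -> int:
    """CRC-16/CCITT-FALSE: polynomial 0x1021, MSB-first, init 0xFFFF, no final XOR."""
    crc = seed
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc


# Standard check value for this CRC variant.
assert crc16_ccitt_false(b"123456789") == 0x29B1
```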
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/tvm/micro/build.py`
Content:
```
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17
18 """Defines top-level glue functions for building microTVM artifacts."""
19
20 import copy
21 import logging
22 import os
23 import re
24 from tvm.contrib import util
25
26
27 _LOG = logging.getLogger(__name__)
28
29
30 class Workspace:
31 """Defines helper functions for manipulating temporary compilation workspaces."""
32
33 def __init__(self, root=None, debug=False):
34 if debug or root is not None:
35 with util.TempDirectory.set_keep_for_debug():
36 self.tempdir = util.tempdir(custom_path=root)
37 _LOG.info("Created debug mode workspace at: %s", self.tempdir.temp_dir)
38 else:
39 self.tempdir = util.tempdir()
40
41 def relpath(self, path):
42 return self.tempdir.relpath(path)
43
44 def listdir(self):
45 return self.tempdir.listdir()
46
47 @property
48 def path(self):
49 return self.tempdir.temp_dir
50
51
52 # Required C runtime libraries, in link order.
53 CRT_RUNTIME_LIB_NAMES = ["utvm_rpc_server", "utvm_rpc_common", "common"]
54
55
56 TVM_ROOT_DIR = os.path.realpath(os.path.join(os.path.dirname(__file__), "..", "..", ".."))
57
58
59 CRT_ROOT_DIR = os.path.join(TVM_ROOT_DIR, "src", "runtime", "crt")
60
61
62 RUNTIME_LIB_SRC_DIRS = [os.path.join(CRT_ROOT_DIR, n) for n in CRT_RUNTIME_LIB_NAMES] + [
63 os.path.join(
64 TVM_ROOT_DIR,
65 "3rdparty/mbed-os/targets/TARGET_NORDIC/TARGET_NRF5x/TARGET_SDK_11/" "libraries/crc16",
66 )
67 ]
68
69
70 RUNTIME_SRC_REGEX = re.compile(r"^.*\.cc?$", re.IGNORECASE)
71
72
73 _CRT_DEFAULT_OPTIONS = {
74 "ccflags": ["-std=c++11"],
75 "ldflags": ["-std=gnu++14"],
76 "include_dirs": [
77 f"{TVM_ROOT_DIR}/include",
78 f"{TVM_ROOT_DIR}/3rdparty/dlpack/include",
79 f"{TVM_ROOT_DIR}/3rdparty/mbed-os/targets/TARGET_NORDIC/TARGET_NRF5x/"
80 "TARGET_SDK_11/libraries/crc16/",
81 f"{TVM_ROOT_DIR}/3rdparty/dmlc-core/include",
82 f"{CRT_ROOT_DIR}/include",
83 ],
84 "profile": {"common": ["-Wno-unused-variable"]},
85 }
86
87
88 def default_options(target_include_dir):
89 """Return default opts passed to Compile commands."""
90 bin_opts = copy.deepcopy(_CRT_DEFAULT_OPTIONS)
91 bin_opts["include_dirs"].append(target_include_dir)
92 lib_opts = copy.deepcopy(bin_opts)
93 lib_opts["profile"]["common"].append("-Werror")
94 lib_opts["cflags"] = ["-Wno-error=incompatible-pointer-types"]
95 return {"bin_opts": bin_opts, "lib_opts": lib_opts}
96
97
98 def build_static_runtime(workspace, compiler, module, lib_opts=None, bin_opts=None):
99 """Build the on-device runtime, statically linking the given modules.
100
101 Parameters
102 ----------
103 compiler : tvm.micro.Compiler
104 Compiler instance used to build the runtime.
105
106 module : IRModule
107 Module to statically link.
108
109 lib_opts : dict
110 Extra kwargs passed to library(),
111
112 bin_opts : dict
113 Extra kwargs passed to binary(),
114
115 Returns
116 -------
117 MicroBinary :
118 The compiled runtime.
119 """
120 lib_opts = _CRT_DEFAULT_OPTIONS if lib_opts is None else lib_opts
121 bin_opts = _CRT_DEFAULT_OPTIONS if bin_opts is None else bin_opts
122
123 mod_build_dir = workspace.relpath(os.path.join("build", "module"))
124 os.makedirs(mod_build_dir)
125 mod_src_dir = workspace.relpath(os.path.join("src", "module"))
126 os.makedirs(mod_src_dir)
127 mod_src_path = os.path.join(mod_src_dir, "module.c")
128 module.save(mod_src_path, "cc")
129
130 libs = []
131 for lib_src_dir in RUNTIME_LIB_SRC_DIRS:
132 lib_name = os.path.basename(lib_src_dir)
133 lib_build_dir = workspace.relpath(f"build/{lib_name}")
134 os.makedirs(lib_build_dir)
135
136 lib_srcs = []
137 for p in os.listdir(lib_src_dir):
138 if RUNTIME_SRC_REGEX.match(p):
139 lib_srcs.append(os.path.join(lib_src_dir, p))
140
141 libs.append(compiler.library(lib_build_dir, lib_srcs, lib_opts))
142
143 libs.append(compiler.library(mod_build_dir, [mod_src_path], lib_opts))
144
145 runtime_build_dir = workspace.relpath(f"build/runtime")
146 os.makedirs(runtime_build_dir)
147 return compiler.binary(runtime_build_dir, libs, bin_opts)
148
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/tvm/micro/build.py b/python/tvm/micro/build.py
--- a/python/tvm/micro/build.py
+++ b/python/tvm/micro/build.py
@@ -60,10 +60,7 @@
RUNTIME_LIB_SRC_DIRS = [os.path.join(CRT_ROOT_DIR, n) for n in CRT_RUNTIME_LIB_NAMES] + [
- os.path.join(
- TVM_ROOT_DIR,
- "3rdparty/mbed-os/targets/TARGET_NORDIC/TARGET_NRF5x/TARGET_SDK_11/" "libraries/crc16",
- )
+ os.path.join(TVM_ROOT_DIR, "3rdparty/libcrc/src")
]
@@ -76,8 +73,7 @@
"include_dirs": [
f"{TVM_ROOT_DIR}/include",
f"{TVM_ROOT_DIR}/3rdparty/dlpack/include",
- f"{TVM_ROOT_DIR}/3rdparty/mbed-os/targets/TARGET_NORDIC/TARGET_NRF5x/"
- "TARGET_SDK_11/libraries/crc16/",
+ f"{TVM_ROOT_DIR}/3rdparty/libcrc/include",
f"{TVM_ROOT_DIR}/3rdparty/dmlc-core/include",
f"{CRT_ROOT_DIR}/include",
],
| {"golden_diff": "diff --git a/python/tvm/micro/build.py b/python/tvm/micro/build.py\n--- a/python/tvm/micro/build.py\n+++ b/python/tvm/micro/build.py\n@@ -60,10 +60,7 @@\n \n \n RUNTIME_LIB_SRC_DIRS = [os.path.join(CRT_ROOT_DIR, n) for n in CRT_RUNTIME_LIB_NAMES] + [\n- os.path.join(\n- TVM_ROOT_DIR,\n- \"3rdparty/mbed-os/targets/TARGET_NORDIC/TARGET_NRF5x/TARGET_SDK_11/\" \"libraries/crc16\",\n- )\n+ os.path.join(TVM_ROOT_DIR, \"3rdparty/libcrc/src\")\n ]\n \n \n@@ -76,8 +73,7 @@\n \"include_dirs\": [\n f\"{TVM_ROOT_DIR}/include\",\n f\"{TVM_ROOT_DIR}/3rdparty/dlpack/include\",\n- f\"{TVM_ROOT_DIR}/3rdparty/mbed-os/targets/TARGET_NORDIC/TARGET_NRF5x/\"\n- \"TARGET_SDK_11/libraries/crc16/\",\n+ f\"{TVM_ROOT_DIR}/3rdparty/libcrc/include\",\n f\"{TVM_ROOT_DIR}/3rdparty/dmlc-core/include\",\n f\"{CRT_ROOT_DIR}/include\",\n ],\n", "issue": "[uTVM] Use an alternative CRC Library\nThe 3rdparty crc library introduced in https://github.com/apache/incubator-tvm/pull/6334 has a license problem.\r\n\r\n\r\nWe will need to replace it with a new impl or an alternative library\r\n\n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n\"\"\"Defines top-level glue functions for building microTVM artifacts.\"\"\"\n\nimport copy\nimport logging\nimport os\nimport re\nfrom tvm.contrib import util\n\n\n_LOG = logging.getLogger(__name__)\n\n\nclass Workspace:\n \"\"\"Defines helper functions for manipulating temporary compilation workspaces.\"\"\"\n\n def __init__(self, root=None, debug=False):\n if debug or root is not None:\n with util.TempDirectory.set_keep_for_debug():\n self.tempdir = util.tempdir(custom_path=root)\n _LOG.info(\"Created debug mode workspace at: %s\", self.tempdir.temp_dir)\n else:\n self.tempdir = util.tempdir()\n\n def relpath(self, path):\n return self.tempdir.relpath(path)\n\n def listdir(self):\n return self.tempdir.listdir()\n\n @property\n def path(self):\n return self.tempdir.temp_dir\n\n\n# Required C runtime libraries, in link order.\nCRT_RUNTIME_LIB_NAMES = [\"utvm_rpc_server\", \"utvm_rpc_common\", \"common\"]\n\n\nTVM_ROOT_DIR = os.path.realpath(os.path.join(os.path.dirname(__file__), \"..\", \"..\", \"..\"))\n\n\nCRT_ROOT_DIR = os.path.join(TVM_ROOT_DIR, \"src\", \"runtime\", \"crt\")\n\n\nRUNTIME_LIB_SRC_DIRS = [os.path.join(CRT_ROOT_DIR, n) for n in CRT_RUNTIME_LIB_NAMES] + [\n os.path.join(\n TVM_ROOT_DIR,\n \"3rdparty/mbed-os/targets/TARGET_NORDIC/TARGET_NRF5x/TARGET_SDK_11/\" \"libraries/crc16\",\n )\n]\n\n\nRUNTIME_SRC_REGEX = re.compile(r\"^.*\\.cc?$\", re.IGNORECASE)\n\n\n_CRT_DEFAULT_OPTIONS = {\n \"ccflags\": [\"-std=c++11\"],\n \"ldflags\": [\"-std=gnu++14\"],\n \"include_dirs\": [\n f\"{TVM_ROOT_DIR}/include\",\n f\"{TVM_ROOT_DIR}/3rdparty/dlpack/include\",\n f\"{TVM_ROOT_DIR}/3rdparty/mbed-os/targets/TARGET_NORDIC/TARGET_NRF5x/\"\n \"TARGET_SDK_11/libraries/crc16/\",\n f\"{TVM_ROOT_DIR}/3rdparty/dmlc-core/include\",\n f\"{CRT_ROOT_DIR}/include\",\n ],\n \"profile\": {\"common\": [\"-Wno-unused-variable\"]},\n}\n\n\ndef default_options(target_include_dir):\n \"\"\"Return default opts passed to Compile commands.\"\"\"\n bin_opts = copy.deepcopy(_CRT_DEFAULT_OPTIONS)\n bin_opts[\"include_dirs\"].append(target_include_dir)\n lib_opts = copy.deepcopy(bin_opts)\n lib_opts[\"profile\"][\"common\"].append(\"-Werror\")\n lib_opts[\"cflags\"] = [\"-Wno-error=incompatible-pointer-types\"]\n return {\"bin_opts\": bin_opts, \"lib_opts\": lib_opts}\n\n\ndef build_static_runtime(workspace, compiler, module, lib_opts=None, bin_opts=None):\n \"\"\"Build the on-device runtime, statically linking the given modules.\n\n Parameters\n ----------\n compiler : tvm.micro.Compiler\n Compiler instance used to build the runtime.\n\n module : IRModule\n Module to statically link.\n\n lib_opts : dict\n Extra kwargs passed to library(),\n\n bin_opts : dict\n Extra kwargs passed to binary(),\n\n Returns\n -------\n MicroBinary :\n The compiled runtime.\n \"\"\"\n lib_opts = _CRT_DEFAULT_OPTIONS if lib_opts is None else lib_opts\n bin_opts = _CRT_DEFAULT_OPTIONS if bin_opts is None else bin_opts\n\n mod_build_dir = workspace.relpath(os.path.join(\"build\", \"module\"))\n os.makedirs(mod_build_dir)\n mod_src_dir = workspace.relpath(os.path.join(\"src\", \"module\"))\n os.makedirs(mod_src_dir)\n mod_src_path = os.path.join(mod_src_dir, \"module.c\")\n module.save(mod_src_path, \"cc\")\n\n libs = []\n for lib_src_dir in RUNTIME_LIB_SRC_DIRS:\n lib_name = os.path.basename(lib_src_dir)\n lib_build_dir = workspace.relpath(f\"build/{lib_name}\")\n os.makedirs(lib_build_dir)\n\n lib_srcs = []\n for p in 
os.listdir(lib_src_dir):\n if RUNTIME_SRC_REGEX.match(p):\n lib_srcs.append(os.path.join(lib_src_dir, p))\n\n libs.append(compiler.library(lib_build_dir, lib_srcs, lib_opts))\n\n libs.append(compiler.library(mod_build_dir, [mod_src_path], lib_opts))\n\n runtime_build_dir = workspace.relpath(f\"build/runtime\")\n os.makedirs(runtime_build_dir)\n return compiler.binary(runtime_build_dir, libs, bin_opts)\n", "path": "python/tvm/micro/build.py"}], "after_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n\"\"\"Defines top-level glue functions for building microTVM artifacts.\"\"\"\n\nimport copy\nimport logging\nimport os\nimport re\nfrom tvm.contrib import util\n\n\n_LOG = logging.getLogger(__name__)\n\n\nclass Workspace:\n \"\"\"Defines helper functions for manipulating temporary compilation workspaces.\"\"\"\n\n def __init__(self, root=None, debug=False):\n if debug or root is not None:\n with util.TempDirectory.set_keep_for_debug():\n self.tempdir = util.tempdir(custom_path=root)\n _LOG.info(\"Created debug mode workspace at: %s\", self.tempdir.temp_dir)\n else:\n self.tempdir = util.tempdir()\n\n def relpath(self, path):\n return self.tempdir.relpath(path)\n\n def listdir(self):\n return self.tempdir.listdir()\n\n @property\n def path(self):\n return self.tempdir.temp_dir\n\n\n# Required C runtime libraries, in link order.\nCRT_RUNTIME_LIB_NAMES = [\"utvm_rpc_server\", \"utvm_rpc_common\", \"common\"]\n\n\nTVM_ROOT_DIR = os.path.realpath(os.path.join(os.path.dirname(__file__), \"..\", \"..\", \"..\"))\n\n\nCRT_ROOT_DIR = os.path.join(TVM_ROOT_DIR, \"src\", \"runtime\", \"crt\")\n\n\nRUNTIME_LIB_SRC_DIRS = [os.path.join(CRT_ROOT_DIR, n) for n in CRT_RUNTIME_LIB_NAMES] + [\n os.path.join(TVM_ROOT_DIR, \"3rdparty/libcrc/src\")\n]\n\n\nRUNTIME_SRC_REGEX = re.compile(r\"^.*\\.cc?$\", re.IGNORECASE)\n\n\n_CRT_DEFAULT_OPTIONS = {\n \"ccflags\": [\"-std=c++11\"],\n \"ldflags\": [\"-std=gnu++14\"],\n \"include_dirs\": [\n f\"{TVM_ROOT_DIR}/include\",\n f\"{TVM_ROOT_DIR}/3rdparty/dlpack/include\",\n f\"{TVM_ROOT_DIR}/3rdparty/libcrc/include\",\n f\"{TVM_ROOT_DIR}/3rdparty/dmlc-core/include\",\n f\"{CRT_ROOT_DIR}/include\",\n ],\n \"profile\": {\"common\": [\"-Wno-unused-variable\"]},\n}\n\n\ndef default_options(target_include_dir):\n \"\"\"Return default opts passed to Compile commands.\"\"\"\n bin_opts = copy.deepcopy(_CRT_DEFAULT_OPTIONS)\n bin_opts[\"include_dirs\"].append(target_include_dir)\n lib_opts = copy.deepcopy(bin_opts)\n lib_opts[\"profile\"][\"common\"].append(\"-Werror\")\n lib_opts[\"cflags\"] = [\"-Wno-error=incompatible-pointer-types\"]\n return {\"bin_opts\": bin_opts, \"lib_opts\": lib_opts}\n\n\ndef build_static_runtime(workspace, compiler, module, lib_opts=None, bin_opts=None):\n \"\"\"Build the on-device runtime, 
statically linking the given modules.\n\n Parameters\n ----------\n compiler : tvm.micro.Compiler\n Compiler instance used to build the runtime.\n\n module : IRModule\n Module to statically link.\n\n lib_opts : dict\n Extra kwargs passed to library(),\n\n bin_opts : dict\n Extra kwargs passed to binary(),\n\n Returns\n -------\n MicroBinary :\n The compiled runtime.\n \"\"\"\n lib_opts = _CRT_DEFAULT_OPTIONS if lib_opts is None else lib_opts\n bin_opts = _CRT_DEFAULT_OPTIONS if bin_opts is None else bin_opts\n\n mod_build_dir = workspace.relpath(os.path.join(\"build\", \"module\"))\n os.makedirs(mod_build_dir)\n mod_src_dir = workspace.relpath(os.path.join(\"src\", \"module\"))\n os.makedirs(mod_src_dir)\n mod_src_path = os.path.join(mod_src_dir, \"module.c\")\n module.save(mod_src_path, \"cc\")\n\n libs = []\n for lib_src_dir in RUNTIME_LIB_SRC_DIRS:\n lib_name = os.path.basename(lib_src_dir)\n lib_build_dir = workspace.relpath(f\"build/{lib_name}\")\n os.makedirs(lib_build_dir)\n\n lib_srcs = []\n for p in os.listdir(lib_src_dir):\n if RUNTIME_SRC_REGEX.match(p):\n lib_srcs.append(os.path.join(lib_src_dir, p))\n\n libs.append(compiler.library(lib_build_dir, lib_srcs, lib_opts))\n\n libs.append(compiler.library(mod_build_dir, [mod_src_path], lib_opts))\n\n runtime_build_dir = workspace.relpath(f\"build/runtime\")\n os.makedirs(runtime_build_dir)\n return compiler.binary(runtime_build_dir, libs, bin_opts)\n", "path": "python/tvm/micro/build.py"}]} | 1,832 | 285 |
gh_patches_debug_34927 | rasdani/github-patches | git_diff | sunpy__sunpy-5738 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fido HECClient returns out-of-date results
When trying to access HEC data through the Fido HECClient, the data returned is ~1 month out of date. Looking at the current web GUI for the HEC API, it returns events up to today, but through Fido the latest data is from ~2021-11-03.
```python
from sunpy.net import Fido, attrs as a
timerange = a.Time('2021/11/01 00:00:00', '2021/12/01 00:00:00')
results_hec = Fido.search(timerange,
a.helio.TableName('gevloc_sxr_flare'))
results_hec
<sunpy.net.fido_factory.UnifiedResponse object at 0x000002B89664B730>
Results from 1 Provider:
10 Results from the HECClient:
time_start ...
------------------- ...
2021-11-02T02:57:00 ...
2021-11-02T07:37:00 ...
2021-11-02T12:55:00 ...
2021-11-02T21:09:00 ...
2021-11-03T00:10:00 ...
2021-11-03T00:56:00 ...
2021-11-03T08:15:00 ...
2021-11-03T14:32:00 ...
2021-11-03T14:43:00 ...
2021-11-03T15:18:00 ...
```
I have also tested with 'goes_sxr_flare' and get roughly the same time for the latest event (2021-11-02T21:09:00).
You can see the expected returns from this query for both gevloc and goes_sxr here:
http://hec.helio-vo.eu/hec/hec_gui_fetch.php?y_from=2021&mo_from=11&d_from=1&y_to=2021&mo_to=12&d_to=1&radioremote=on&titlesearch2=&goes_sxr_flare=istable&gevloc_sxr_flare=istable
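
A possible workaround, assuming the truncation comes from the client-side default record limit (`max_records` in `HECClient.search`) rather than from stale data on the server, is to raise the limit explicitly with `a.helio.MaxRecords`. A sketch — the value 5000 is arbitrary:

```python
from sunpy.net import Fido, attrs as a

timerange = a.Time('2021/11/01 00:00:00', '2021/12/01 00:00:00')
# Ask for up to 5000 records so the query is not silently cut off
# after the first handful of (oldest) events in the range.
results_hec = Fido.search(timerange,
                          a.helio.TableName('gevloc_sxr_flare'),
                          a.helio.MaxRecords(5000))
print(results_hec)
```
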
- SunPy Version: 3.1
- Astropy Version: 4.3.1
- Python Version: 3.8.8
- OS information: Win & WSL same issue
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sunpy/net/helio/hec.py`
Content:
```
1 """
2 Access the Helio Event Catalogue
3 """
4 import io
5 import os
6
7 from lxml import etree
8 from requests import Session
9 from zeep import Client
10 from zeep.transports import Transport
11
12 from astropy.io.votable.table import parse_single_table
13
14 from sunpy.net import attrs as a
15 from sunpy.net.base_client import BaseClient, QueryResponseTable
16 from sunpy.net.helio import attrs as ha
17 from sunpy.net.helio import parser
18 from sunpy.time import parse_time
19 from sunpy.util.exceptions import warn_deprecated
20
21 __all__ = ['HECClient', 'HECResponse']
22
23
24 def votable_handler(xml_table):
25 """
26 Returns a VOtable object from a VOtable style xml string
27
28 In order to get a VOtable object, it has to be parsed from an xml file or
29 file-like object. This function creates a file-like object via the
30 StringIO module, writes the xml data to it, then passes the file-like
31 object to parse_single_table() from the astropy.io.votable.table module
32 and thereby creates a VOtable object.
33
34 Parameters
35 ----------
36 xml_table : `bytes`
37 Contains the VOtable style xml data
38
39 Returns
40 -------
41 votable : `astropy.io.votable.tree.Table`
42 A properly formatted VOtable object
43
44 """
45 fake_file = io.BytesIO()
46 fake_file.write(xml_table)
47 votable = parse_single_table(fake_file)
48 for i in range(len(votable.array)):
49 item = votable.array[i][0]
50 if isinstance(item, bytes):
51 votable.array[i] = (votable.array[i][0].decode(),)
52 fake_file.close()
53 return votable
54
55
56 class HECResponse(QueryResponseTable):
57 """
58 A container for data returned from HEC searches.
59 """
60
61
62 class HECClient(BaseClient):
63 """
64 Provides access to the HELIO webservices.
65 """
66
67 def __init__(self, link=None):
68 """
69 The constructor; establishes the webservice link for the client
70
71 Initializes the client with a weblink
72
73 Parameters
74 ----------
75 link : str
76 Contains URL to valid WSDL endpoint
77
78 Examples
79 --------
80 >>> from sunpy.net.helio import hec
81 >>> hc = hec.HECClient() # doctest: +REMOTE_DATA
82 """
83 if link is None:
84 # The default wsdl file
85 link = parser.wsdl_retriever()
86 session = Session()
87 # This is for use in our test suite.
88 session.verify = not(bool(os.environ.get("NO_VERIFY_HELIO_SSL", 0)))
89 transport = Transport(session=session)
90 self.hec_client = Client(link, transport=transport)
91
92 @classmethod
93 def _can_handle_query(cls, *query):
94 required = {a.Time}
95 optional = {ha.MaxRecords, ha.TableName}
96 return cls.check_attr_types_in_query(query, required, optional)
97
98 @classmethod
99 def _attrs_module(cls):
100 return 'helio', 'sunpy.net.helio.attrs'
101
102 def search(self, *args, **kwargs):
103 """
104 The simple interface to query the wsdl service.
105
106 Used to utilize the service's TimeQuery() method, this is a simple
107 interface between the sunpy module library and the web-service's API.
108
109 Examples
110 --------
111 >>> from sunpy.net.helio import attrs as ha
112 >>> from sunpy.net import attrs as a, Fido
113 >>> timerange = a.Time('2005/01/03', '2005/12/03')
114 >>> res = Fido.search(timerange, ha.MaxRecords(10),
115 ... ha.TableName('rhessi_hxr_flare')) # doctest: +REMOTE_DATA
116 >>> res #doctest: +REMOTE_DATA
117 <sunpy.net.fido_factory.UnifiedResponse object at ...>
118 Results from 1 Provider:
119 <BLANKLINE>
120 10 Results from the HECClient:
121 hec_id time_start time_peak ... energy_kev flare_number
122 ------ ------------------- ------------------- ... ---------- ------------
123 31463 2005-01-03T01:37:36 2005-01-03T01:37:54 ... 6 5010320
124 31464 2005-01-03T01:51:36 2005-01-03T01:59:18 ... 12 5010301
125 31465 2005-01-03T03:26:28 2005-01-03T03:42:50 ... 6 5010332
126 31466 2005-01-03T03:46:04 2005-01-03T04:07:10 ... 12 5010302
127 31467 2005-01-03T05:00:24 2005-01-03T05:00:30 ... 6 5010313
128 31468 2005-01-03T06:40:48 2005-01-03T06:42:46 ... 6 5010314
129 31469 2005-01-03T08:27:56 2005-01-03T08:28:26 ... 6 5010334
130 31470 2005-01-03T09:31:00 2005-01-03T09:33:34 ... 6 5010322
131 31471 2005-01-03T09:34:52 2005-01-03T09:59:46 ... 6 5010336
132 31472 2005-01-03T11:06:48 2005-01-03T11:07:18 ... 12 5010304
133 <BLANKLINE>
134 <BLANKLINE>
135 """
136 qrdict = {}
137 for elem in args:
138 if isinstance(elem, a.Time):
139 qrdict['Time'] = elem
140 elif isinstance(elem, ha.MaxRecords):
141 qrdict['max_records'] = elem.value
142 elif isinstance(elem, ha.TableName):
143 qrdict['table_name'] = elem.value
144 else:
145 raise ValueError(
146 f"{elem.__class__.__name__} should be a ``attrs.Time``, ``attrs.hek.MaxRecords`` or ``attrs.hek.TableName`` attribute.")
147 qrdict.update(kwargs)
148 table = qrdict.get('table_name', None)
149 if table:
150 if isinstance(table, bytes):
151 warn_deprecated('type `bytes` for table_name is deprecated, use `str` instead.')
152 table = str.encode(table)
153 start_time = qrdict['Time'].start
154 end_time = qrdict['Time'].end
155 max_records = qrdict.get('max_records', 10)
156 while table is None:
157 table = self.select_table()
158 start_time = parse_time(start_time)
159 end_time = parse_time(end_time)
160 results = self.hec_client.service.TimeQuery(STARTTIME=start_time.isot,
161 ENDTIME=end_time.isot,
162 FROM=table,
163 MAXRECORDS=max_records)
164 results = votable_handler(etree.tostring(results))
165 return HECResponse(results.to_table(), client=self)
166
167 def get_table_names(self):
168 """
169 Returns a list of the available tables to query.
170
171 Returns the names of all the tables that can be queried via the
172 webservice.
173
174 Returns
175 -------
176 tables.array: `numpy.ma.core.MaskedArray`
177 A VOtable table of available tables names.
178
179 Examples
180 --------
181 >>> from sunpy.net.helio import hec
182 >>> hc = hec.HECClient() # doctest: +REMOTE_DATA
183 >>> print(hc.get_table_names()) # doctest: +REMOTE_DATA
184 [('timed_see_flare',) ('hi_event',) ('yohkoh_flare_list',)
185 ('wind_mfi_bs_crossing_time',) ('seeds_soho',) ('seeds_stb',)
186 ...
187 ('rhessi_hxr_flare',) ('cactus_soho_flow',) ('cactus_soho_cme',)
188 ('stereob_het_sep',)]
189 """
190 results = self.hec_client.service.getTableNames()
191 tables = votable_handler(etree.tostring(results))
192 return tables.array
193
194 def select_table(self):
195 """
196 Creates a list of table names and prompts the user for a choice
197
198 This takes the table of table names from get_table_names(), creates a
199 list of the names, sorts them, then presents the tables in a
200 convenient menu for the user to choose from. It returns a string
201 containing the name of the table that the user picked.
202
203 Returns
204 -------
205 `str`
206 Contains the name of the table that the user picked.
207
208 Examples
209 --------
210 >>> from sunpy.net.helio import hec # doctest: +SKIP
211 >>> hc = hec.HECClient() # doctest: +SKIP
212 >>> hc.select_table() # doctest: +SKIP
213 """
214 tables = self.get_table_names()
215 table_list = [t[0] for t in tables if len(t[0]) > 0]
216 table_list.sort()
217 for index, table in enumerate(table_list):
218 print(f'{index + 1} - {table}')
219 while True:
220 user_input = input(f"\nPlease enter a table number between 1 and {len(table_list)} "
221 "('e' to exit): ")
222 if user_input.lower() == "e" or user_input.lower() == "exit":
223 return None
224 if user_input.isdigit() and 1 <= int(user_input) <= len(table_list):
225 table_no = int(user_input)
226 return table_list[table_no - 1]
227 else:
228 print(f"Input must be an integer between 1 and {len(table_list)}")
229
230 def fetch(self, *args, **kwargs):
231 """
232 This is a no operation function as this client does not download data.
233 """
234 return NotImplemented
235
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sunpy/net/helio/hec.py b/sunpy/net/helio/hec.py
--- a/sunpy/net/helio/hec.py
+++ b/sunpy/net/helio/hec.py
@@ -16,7 +16,7 @@
from sunpy.net.helio import attrs as ha
from sunpy.net.helio import parser
from sunpy.time import parse_time
-from sunpy.util.exceptions import warn_deprecated
+from sunpy.util.exceptions import warn_deprecated, warn_user
__all__ = ['HECClient', 'HECResponse']
@@ -106,6 +106,10 @@
Used to utilize the service's TimeQuery() method, this is a simple
interface between the sunpy module library and the web-service's API.
+ .. note::
+ By default the maximum records returned by the service are limited to 500.
+ To obtain more results ``a.helio.MaxRecords`` must be set to a higher value.
+
Examples
--------
>>> from sunpy.net.helio import attrs as ha
@@ -152,7 +156,7 @@
table = str.encode(table)
start_time = qrdict['Time'].start
end_time = qrdict['Time'].end
- max_records = qrdict.get('max_records', 10)
+ max_records = qrdict.get('max_records', 500)
while table is None:
table = self.select_table()
start_time = parse_time(start_time)
@@ -162,7 +166,12 @@
FROM=table,
MAXRECORDS=max_records)
results = votable_handler(etree.tostring(results))
- return HECResponse(results.to_table(), client=self)
+ table = HECResponse(results.to_table(), client=self)
+ if len(table) == max_records == 500:
+ warn_user("Number of results is the same as the default `max_records` of 500. "
+ "It is possible your query has been truncated. "
+ "If you want to change this, set `a.helio.MaxRecords` to a higher value.")
+ return table
def get_table_names(self):
"""
| {"golden_diff": "diff --git a/sunpy/net/helio/hec.py b/sunpy/net/helio/hec.py\n--- a/sunpy/net/helio/hec.py\n+++ b/sunpy/net/helio/hec.py\n@@ -16,7 +16,7 @@\n from sunpy.net.helio import attrs as ha\n from sunpy.net.helio import parser\n from sunpy.time import parse_time\n-from sunpy.util.exceptions import warn_deprecated\n+from sunpy.util.exceptions import warn_deprecated, warn_user\n \n __all__ = ['HECClient', 'HECResponse']\n \n@@ -106,6 +106,10 @@\n Used to utilize the service's TimeQuery() method, this is a simple\n interface between the sunpy module library and the web-service's API.\n \n+ .. note::\n+ By default the maximum records returned by the service are limited to 500.\n+ To obtain more results ``a.helio.MaxRecords`` must be set to a higher value.\n+\n Examples\n --------\n >>> from sunpy.net.helio import attrs as ha\n@@ -152,7 +156,7 @@\n table = str.encode(table)\n start_time = qrdict['Time'].start\n end_time = qrdict['Time'].end\n- max_records = qrdict.get('max_records', 10)\n+ max_records = qrdict.get('max_records', 500)\n while table is None:\n table = self.select_table()\n start_time = parse_time(start_time)\n@@ -162,7 +166,12 @@\n FROM=table,\n MAXRECORDS=max_records)\n results = votable_handler(etree.tostring(results))\n- return HECResponse(results.to_table(), client=self)\n+ table = HECResponse(results.to_table(), client=self)\n+ if len(table) == max_records == 500:\n+ warn_user(\"Number of results is the same as the default `max_records` of 500. \"\n+ \"It is possible your query has been truncated. \"\n+ \"If you want to change this, set `a.helio.MaxRecords` to a higher value.\")\n+ return table\n \n def get_table_names(self):\n \"\"\"\n", "issue": "Fido HECClient returns out-of-date results\nWhen trying to access HEC data through Fido HECClient, the data returned is ~1 month out of date. 
Looking at the current web gui for the HEC API, it returns events up to today but through Fido, the latest data is from ~ 2021-11-03\r\n\r\n```python\r\nfrom sunpy.net import Fido, attrs as a\r\n\r\n\r\ntimerange = a.Time('2021/11/01 00:00:00', '2021/12/01 00:00:00')\r\nresults_hec = Fido.search(timerange,\r\n a.helio.TableName('gevloc_sxr_flare'))\r\nresults_hec \r\n<sunpy.net.fido_factory.UnifiedResponse object at 0x000002B89664B730>\r\nResults from 1 Provider:\r\n\r\n10 Results from the HECClient:\r\n time_start ...\r\n------------------- ...\r\n2021-11-02T02:57:00 ...\r\n2021-11-02T07:37:00 ...\r\n2021-11-02T12:55:00 ...\r\n2021-11-02T21:09:00 ...\r\n2021-11-03T00:10:00 ...\r\n2021-11-03T00:56:00 ...\r\n2021-11-03T08:15:00 ...\r\n2021-11-03T14:32:00 ...\r\n2021-11-03T14:43:00 ...\r\n2021-11-03T15:18:00 ...\r\n\r\n```\r\nI have also tested with 'goes_sxr_flare' and get roughly the same time for the latest event ( 2021-11-02T21:09:00 )\r\nYou can see the expected returns from this query for both gevloc and goes_sxr here:\r\n\r\nhttp://hec.helio-vo.eu/hec/hec_gui_fetch.php?y_from=2021&mo_from=11&d_from=1&y_to=2021&mo_to=12&d_to=1&radioremote=on&titlesearch2=&goes_sxr_flare=istable&gevloc_sxr_flare=istable\r\n\r\n\r\n- SunPy Version: 3.1\r\n- Astropy Version: 4.3.1\r\n- Python Version: 3.8.8\r\n- OS information: Win & WSL same issue\r\n\n", "before_files": [{"content": "\"\"\"\nAccess the Helio Event Catalogue\n\"\"\"\nimport io\nimport os\n\nfrom lxml import etree\nfrom requests import Session\nfrom zeep import Client\nfrom zeep.transports import Transport\n\nfrom astropy.io.votable.table import parse_single_table\n\nfrom sunpy.net import attrs as a\nfrom sunpy.net.base_client import BaseClient, QueryResponseTable\nfrom sunpy.net.helio import attrs as ha\nfrom sunpy.net.helio import parser\nfrom sunpy.time import parse_time\nfrom sunpy.util.exceptions import warn_deprecated\n\n__all__ = ['HECClient', 'HECResponse']\n\n\ndef votable_handler(xml_table):\n \"\"\"\n Returns a VOtable object from a VOtable style xml string\n\n In order to get a VOtable object, it has to be parsed from an xml file or\n file-like object. 
This function creates a file-like object via the\n StringIO module, writes the xml data to it, then passes the file-like\n object to parse_single_table() from the astropy.io.votable.table module\n and thereby creates a VOtable object.\n\n Parameters\n ----------\n xml_table : `bytes`\n Contains the VOtable style xml data\n\n Returns\n -------\n votable : `astropy.io.votable.tree.Table`\n A properly formatted VOtable object\n\n \"\"\"\n fake_file = io.BytesIO()\n fake_file.write(xml_table)\n votable = parse_single_table(fake_file)\n for i in range(len(votable.array)):\n item = votable.array[i][0]\n if isinstance(item, bytes):\n votable.array[i] = (votable.array[i][0].decode(),)\n fake_file.close()\n return votable\n\n\nclass HECResponse(QueryResponseTable):\n \"\"\"\n A container for data returned from HEC searches.\n \"\"\"\n\n\nclass HECClient(BaseClient):\n \"\"\"\n Provides access to the HELIO webservices.\n \"\"\"\n\n def __init__(self, link=None):\n \"\"\"\n The constructor; establishes the webservice link for the client\n\n Initializes the client with a weblink\n\n Parameters\n ----------\n link : str\n Contains URL to valid WSDL endpoint\n\n Examples\n --------\n >>> from sunpy.net.helio import hec\n >>> hc = hec.HECClient() # doctest: +REMOTE_DATA\n \"\"\"\n if link is None:\n # The default wsdl file\n link = parser.wsdl_retriever()\n session = Session()\n # This is for use in our test suite.\n session.verify = not(bool(os.environ.get(\"NO_VERIFY_HELIO_SSL\", 0)))\n transport = Transport(session=session)\n self.hec_client = Client(link, transport=transport)\n\n @classmethod\n def _can_handle_query(cls, *query):\n required = {a.Time}\n optional = {ha.MaxRecords, ha.TableName}\n return cls.check_attr_types_in_query(query, required, optional)\n\n @classmethod\n def _attrs_module(cls):\n return 'helio', 'sunpy.net.helio.attrs'\n\n def search(self, *args, **kwargs):\n \"\"\"\n The simple interface to query the wsdl service.\n\n Used to utilize the service's TimeQuery() method, this is a simple\n interface between the sunpy module library and the web-service's API.\n\n Examples\n --------\n >>> from sunpy.net.helio import attrs as ha\n >>> from sunpy.net import attrs as a, Fido\n >>> timerange = a.Time('2005/01/03', '2005/12/03')\n >>> res = Fido.search(timerange, ha.MaxRecords(10),\n ... ha.TableName('rhessi_hxr_flare')) # doctest: +REMOTE_DATA\n >>> res #doctest: +REMOTE_DATA\n <sunpy.net.fido_factory.UnifiedResponse object at ...>\n Results from 1 Provider:\n <BLANKLINE>\n 10 Results from the HECClient:\n hec_id time_start time_peak ... energy_kev flare_number\n ------ ------------------- ------------------- ... ---------- ------------\n 31463 2005-01-03T01:37:36 2005-01-03T01:37:54 ... 6 5010320\n 31464 2005-01-03T01:51:36 2005-01-03T01:59:18 ... 12 5010301\n 31465 2005-01-03T03:26:28 2005-01-03T03:42:50 ... 6 5010332\n 31466 2005-01-03T03:46:04 2005-01-03T04:07:10 ... 12 5010302\n 31467 2005-01-03T05:00:24 2005-01-03T05:00:30 ... 6 5010313\n 31468 2005-01-03T06:40:48 2005-01-03T06:42:46 ... 6 5010314\n 31469 2005-01-03T08:27:56 2005-01-03T08:28:26 ... 6 5010334\n 31470 2005-01-03T09:31:00 2005-01-03T09:33:34 ... 6 5010322\n 31471 2005-01-03T09:34:52 2005-01-03T09:59:46 ... 6 5010336\n 31472 2005-01-03T11:06:48 2005-01-03T11:07:18 ... 
12 5010304\n <BLANKLINE>\n <BLANKLINE>\n \"\"\"\n qrdict = {}\n for elem in args:\n if isinstance(elem, a.Time):\n qrdict['Time'] = elem\n elif isinstance(elem, ha.MaxRecords):\n qrdict['max_records'] = elem.value\n elif isinstance(elem, ha.TableName):\n qrdict['table_name'] = elem.value\n else:\n raise ValueError(\n f\"{elem.__class__.__name__} should be a ``attrs.Time``, ``attrs.hek.MaxRecords`` or ``attrs.hek.TableName`` attribute.\")\n qrdict.update(kwargs)\n table = qrdict.get('table_name', None)\n if table:\n if isinstance(table, bytes):\n warn_deprecated('type `bytes` for table_name is deprecated, use `str` instead.')\n table = str.encode(table)\n start_time = qrdict['Time'].start\n end_time = qrdict['Time'].end\n max_records = qrdict.get('max_records', 10)\n while table is None:\n table = self.select_table()\n start_time = parse_time(start_time)\n end_time = parse_time(end_time)\n results = self.hec_client.service.TimeQuery(STARTTIME=start_time.isot,\n ENDTIME=end_time.isot,\n FROM=table,\n MAXRECORDS=max_records)\n results = votable_handler(etree.tostring(results))\n return HECResponse(results.to_table(), client=self)\n\n def get_table_names(self):\n \"\"\"\n Returns a list of the available tables to query.\n\n Returns the names of all the tables that can be queried via the\n webservice.\n\n Returns\n -------\n tables.array: `numpy.ma.core.MaskedArray`\n A VOtable table of available tables names.\n\n Examples\n --------\n >>> from sunpy.net.helio import hec\n >>> hc = hec.HECClient() # doctest: +REMOTE_DATA\n >>> print(hc.get_table_names()) # doctest: +REMOTE_DATA\n [('timed_see_flare',) ('hi_event',) ('yohkoh_flare_list',)\n ('wind_mfi_bs_crossing_time',) ('seeds_soho',) ('seeds_stb',)\n ...\n ('rhessi_hxr_flare',) ('cactus_soho_flow',) ('cactus_soho_cme',)\n ('stereob_het_sep',)]\n \"\"\"\n results = self.hec_client.service.getTableNames()\n tables = votable_handler(etree.tostring(results))\n return tables.array\n\n def select_table(self):\n \"\"\"\n Creates a list of table names and prompts the user for a choice\n\n This takes the table of table names from get_table_names(), creates a\n list of the names, sorts them, then presents the tables in a\n convenient menu for the user to choose from. 
It returns a string\n containing the name of the table that the user picked.\n\n Returns\n -------\n `str`\n Contains the name of the table that the user picked.\n\n Examples\n --------\n >>> from sunpy.net.helio import hec # doctest: +SKIP\n >>> hc = hec.HECClient() # doctest: +SKIP\n >>> hc.select_table() # doctest: +SKIP\n \"\"\"\n tables = self.get_table_names()\n table_list = [t[0] for t in tables if len(t[0]) > 0]\n table_list.sort()\n for index, table in enumerate(table_list):\n print(f'{index + 1} - {table}')\n while True:\n user_input = input(f\"\\nPlease enter a table number between 1 and {len(table_list)} \"\n \"('e' to exit): \")\n if user_input.lower() == \"e\" or user_input.lower() == \"exit\":\n return None\n if user_input.isdigit() and 1 <= int(user_input) <= len(table_list):\n table_no = int(user_input)\n return table_list[table_no - 1]\n else:\n print(f\"Input must be an integer between 1 and {len(table_list)}\")\n\n def fetch(self, *args, **kwargs):\n \"\"\"\n This is a no operation function as this client does not download data.\n \"\"\"\n return NotImplemented\n", "path": "sunpy/net/helio/hec.py"}], "after_files": [{"content": "\"\"\"\nAccess the Helio Event Catalogue\n\"\"\"\nimport io\nimport os\n\nfrom lxml import etree\nfrom requests import Session\nfrom zeep import Client\nfrom zeep.transports import Transport\n\nfrom astropy.io.votable.table import parse_single_table\n\nfrom sunpy.net import attrs as a\nfrom sunpy.net.base_client import BaseClient, QueryResponseTable\nfrom sunpy.net.helio import attrs as ha\nfrom sunpy.net.helio import parser\nfrom sunpy.time import parse_time\nfrom sunpy.util.exceptions import warn_deprecated, warn_user\n\n__all__ = ['HECClient', 'HECResponse']\n\n\ndef votable_handler(xml_table):\n \"\"\"\n Returns a VOtable object from a VOtable style xml string\n\n In order to get a VOtable object, it has to be parsed from an xml file or\n file-like object. 
This function creates a file-like object via the\n StringIO module, writes the xml data to it, then passes the file-like\n object to parse_single_table() from the astropy.io.votable.table module\n and thereby creates a VOtable object.\n\n Parameters\n ----------\n xml_table : `bytes`\n Contains the VOtable style xml data\n\n Returns\n -------\n votable : `astropy.io.votable.tree.Table`\n A properly formatted VOtable object\n\n \"\"\"\n fake_file = io.BytesIO()\n fake_file.write(xml_table)\n votable = parse_single_table(fake_file)\n for i in range(len(votable.array)):\n item = votable.array[i][0]\n if isinstance(item, bytes):\n votable.array[i] = (votable.array[i][0].decode(),)\n fake_file.close()\n return votable\n\n\nclass HECResponse(QueryResponseTable):\n \"\"\"\n A container for data returned from HEC searches.\n \"\"\"\n\n\nclass HECClient(BaseClient):\n \"\"\"\n Provides access to the HELIO webservices.\n \"\"\"\n\n def __init__(self, link=None):\n \"\"\"\n The constructor; establishes the webservice link for the client\n\n Initializes the client with a weblink\n\n Parameters\n ----------\n link : str\n Contains URL to valid WSDL endpoint\n\n Examples\n --------\n >>> from sunpy.net.helio import hec\n >>> hc = hec.HECClient() # doctest: +REMOTE_DATA\n \"\"\"\n if link is None:\n # The default wsdl file\n link = parser.wsdl_retriever()\n session = Session()\n # This is for use in our test suite.\n session.verify = not(bool(os.environ.get(\"NO_VERIFY_HELIO_SSL\", 0)))\n transport = Transport(session=session)\n self.hec_client = Client(link, transport=transport)\n\n @classmethod\n def _can_handle_query(cls, *query):\n required = {a.Time}\n optional = {ha.MaxRecords, ha.TableName}\n return cls.check_attr_types_in_query(query, required, optional)\n\n @classmethod\n def _attrs_module(cls):\n return 'helio', 'sunpy.net.helio.attrs'\n\n def search(self, *args, **kwargs):\n \"\"\"\n The simple interface to query the wsdl service.\n\n Used to utilize the service's TimeQuery() method, this is a simple\n interface between the sunpy module library and the web-service's API.\n\n .. note::\n By default the maximum records returned by the service are limited to 500.\n To obtain more results ``a.helio.MaxRecords`` must be set to a higher value.\n\n Examples\n --------\n >>> from sunpy.net.helio import attrs as ha\n >>> from sunpy.net import attrs as a, Fido\n >>> timerange = a.Time('2005/01/03', '2005/12/03')\n >>> res = Fido.search(timerange, ha.MaxRecords(10),\n ... ha.TableName('rhessi_hxr_flare')) # doctest: +REMOTE_DATA\n >>> res #doctest: +REMOTE_DATA\n <sunpy.net.fido_factory.UnifiedResponse object at ...>\n Results from 1 Provider:\n <BLANKLINE>\n 10 Results from the HECClient:\n hec_id time_start time_peak ... energy_kev flare_number\n ------ ------------------- ------------------- ... ---------- ------------\n 31463 2005-01-03T01:37:36 2005-01-03T01:37:54 ... 6 5010320\n 31464 2005-01-03T01:51:36 2005-01-03T01:59:18 ... 12 5010301\n 31465 2005-01-03T03:26:28 2005-01-03T03:42:50 ... 6 5010332\n 31466 2005-01-03T03:46:04 2005-01-03T04:07:10 ... 12 5010302\n 31467 2005-01-03T05:00:24 2005-01-03T05:00:30 ... 6 5010313\n 31468 2005-01-03T06:40:48 2005-01-03T06:42:46 ... 6 5010314\n 31469 2005-01-03T08:27:56 2005-01-03T08:28:26 ... 6 5010334\n 31470 2005-01-03T09:31:00 2005-01-03T09:33:34 ... 6 5010322\n 31471 2005-01-03T09:34:52 2005-01-03T09:59:46 ... 6 5010336\n 31472 2005-01-03T11:06:48 2005-01-03T11:07:18 ... 
12 5010304\n <BLANKLINE>\n <BLANKLINE>\n \"\"\"\n qrdict = {}\n for elem in args:\n if isinstance(elem, a.Time):\n qrdict['Time'] = elem\n elif isinstance(elem, ha.MaxRecords):\n qrdict['max_records'] = elem.value\n elif isinstance(elem, ha.TableName):\n qrdict['table_name'] = elem.value\n else:\n raise ValueError(\n f\"{elem.__class__.__name__} should be a ``attrs.Time``, ``attrs.hek.MaxRecords`` or ``attrs.hek.TableName`` attribute.\")\n qrdict.update(kwargs)\n table = qrdict.get('table_name', None)\n if table:\n if isinstance(table, bytes):\n warn_deprecated('type `bytes` for table_name is deprecated, use `str` instead.')\n table = str.encode(table)\n start_time = qrdict['Time'].start\n end_time = qrdict['Time'].end\n max_records = qrdict.get('max_records', 500)\n while table is None:\n table = self.select_table()\n start_time = parse_time(start_time)\n end_time = parse_time(end_time)\n results = self.hec_client.service.TimeQuery(STARTTIME=start_time.isot,\n ENDTIME=end_time.isot,\n FROM=table,\n MAXRECORDS=max_records)\n results = votable_handler(etree.tostring(results))\n table = HECResponse(results.to_table(), client=self)\n if len(table) == max_records == 500:\n warn_user(\"Number of results is the same as the default `max_records` of 500. \"\n \"It is possible your query has been truncated. \"\n \"If you want to change this, set `a.helio.MaxRecords` to a higher value.\")\n return table\n\n def get_table_names(self):\n \"\"\"\n Returns a list of the available tables to query.\n\n Returns the names of all the tables that can be queried via the\n webservice.\n\n Returns\n -------\n tables.array: `numpy.ma.core.MaskedArray`\n A VOtable table of available tables names.\n\n Examples\n --------\n >>> from sunpy.net.helio import hec\n >>> hc = hec.HECClient() # doctest: +REMOTE_DATA\n >>> print(hc.get_table_names()) # doctest: +REMOTE_DATA\n [('timed_see_flare',) ('hi_event',) ('yohkoh_flare_list',)\n ('wind_mfi_bs_crossing_time',) ('seeds_soho',) ('seeds_stb',)\n ...\n ('rhessi_hxr_flare',) ('cactus_soho_flow',) ('cactus_soho_cme',)\n ('stereob_het_sep',)]\n \"\"\"\n results = self.hec_client.service.getTableNames()\n tables = votable_handler(etree.tostring(results))\n return tables.array\n\n def select_table(self):\n \"\"\"\n Creates a list of table names and prompts the user for a choice\n\n This takes the table of table names from get_table_names(), creates a\n list of the names, sorts them, then presents the tables in a\n convenient menu for the user to choose from. 
It returns a string\n containing the name of the table that the user picked.\n\n Returns\n -------\n `str`\n Contains the name of the table that the user picked.\n\n Examples\n --------\n >>> from sunpy.net.helio import hec # doctest: +SKIP\n >>> hc = hec.HECClient() # doctest: +SKIP\n >>> hc.select_table() # doctest: +SKIP\n \"\"\"\n tables = self.get_table_names()\n table_list = [t[0] for t in tables if len(t[0]) > 0]\n table_list.sort()\n for index, table in enumerate(table_list):\n print(f'{index + 1} - {table}')\n while True:\n user_input = input(f\"\\nPlease enter a table number between 1 and {len(table_list)} \"\n \"('e' to exit): \")\n if user_input.lower() == \"e\" or user_input.lower() == \"exit\":\n return None\n if user_input.isdigit() and 1 <= int(user_input) <= len(table_list):\n table_no = int(user_input)\n return table_list[table_no - 1]\n else:\n print(f\"Input must be an integer between 1 and {len(table_list)}\")\n\n def fetch(self, *args, **kwargs):\n \"\"\"\n This is a no operation function as this client does not download data.\n \"\"\"\n return NotImplemented\n", "path": "sunpy/net/helio/hec.py"}]} | 3,948 | 507 |
gh_patches_debug_28035 | rasdani/github-patches | git_diff | scikit-image__scikit-image-6293 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Build image pyramids not always working with other images
## Description
Using the *[Build image pyramids](https://scikit-image.org/docs/dev/auto_examples/transform/plot_pyramid.html)* example with a random image does not always work.
## Way to reproduce
### hand.jpg

```python
import numpy as np
import matplotlib.pyplot as plt
from skimage import data
from skimage.transform import pyramid_gaussian
import imageio as io
image = io.imread('hand.jpg') # data.astronaut()
rows, cols, dim = image.shape
pyramid = tuple(pyramid_gaussian(image, downscale=2, multichannel=True))
composite_image = np.zeros((rows, cols + cols // 2, 3), dtype=np.double)
composite_image[:rows, :cols, :] = pyramid[0]
i_row = 0
for p in pyramid[1:]:
n_rows, n_cols = p.shape[:2]
composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p
i_row += n_rows
fig, ax = plt.subplots()
ax.imshow(composite_image)
plt.show()
```
## Version information
```python
3.7.4 (tags/v3.7.4:e09359112e, Jul 8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
Windows-10-10.0.18362-SP0
scikit-image version: 0.16.1
numpy version: 1.17.2
```
```python
Traceback (most recent call last):
File "D:\Vincent\Bureau\Patern recongnition and image analysis\Patern recognition and patern analysis\LAB_1\plot_pyramid.py", line 44, in <module>
composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p
ValueError: could not broadcast input array from shape (2,2,3) into shape (1,2,3)
```
## Possible solution
I was able to make it work for the same RGB image, but this code is not adapted for BW and RGBA.
```python
import numpy as np
import matplotlib.pyplot as plt
from skimage import data
from skimage.transform import pyramid_gaussian
import imageio as io
image = io.imread('hand.jpg') # data.astronaut()
rows, cols, dim = image.shape
pyramid = tuple(pyramid_gaussian(image, downscale=2, multichannel=True))
composite_image = np.zeros((rows, cols + cols // 2, dim), dtype=np.double)
composite_image[:rows, :cols, :] = pyramid[0]
i_row = 0
for p in pyramid[1:]:
n_rows, n_cols = p.shape[:2]
    # Check the dimension before assignment
if(composite_image[i_row:i_row + n_rows, cols:cols + n_cols].shape==p.shape):
composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p
i_row += n_rows
else:
break
fig, ax = plt.subplots()
ax.imshow(composite_image)
plt.show()
```
### Result

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `doc/examples/transform/plot_pyramid.py`
Content:
```
1 """
2 ====================
3 Build image pyramids
4 ====================
5
6 The ``pyramid_gaussian`` function takes an image and yields successive images
7 shrunk by a constant scale factor. Image pyramids are often used, e.g., to
8 implement algorithms for denoising, texture discrimination, and scale-invariant
9 detection.
10
11 """
12 import numpy as np
13 import matplotlib.pyplot as plt
14
15 from skimage import data
16 from skimage.transform import pyramid_gaussian
17
18
19 image = data.astronaut()
20 rows, cols, dim = image.shape
21 pyramid = tuple(pyramid_gaussian(image, downscale=2, channel_axis=-1))
22
23 composite_image = np.zeros((rows, cols + cols // 2, 3), dtype=np.double)
24
25 composite_image[:rows, :cols, :] = pyramid[0]
26
27 i_row = 0
28 for p in pyramid[1:]:
29 n_rows, n_cols = p.shape[:2]
30 composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p
31 i_row += n_rows
32
33 fig, ax = plt.subplots()
34 ax.imshow(composite_image)
35 plt.show()
36
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/doc/examples/transform/plot_pyramid.py b/doc/examples/transform/plot_pyramid.py
--- a/doc/examples/transform/plot_pyramid.py
+++ b/doc/examples/transform/plot_pyramid.py
@@ -9,6 +9,8 @@
detection.
"""
+import math
+
import numpy as np
import matplotlib.pyplot as plt
@@ -20,10 +22,31 @@
rows, cols, dim = image.shape
pyramid = tuple(pyramid_gaussian(image, downscale=2, channel_axis=-1))
-composite_image = np.zeros((rows, cols + cols // 2, 3), dtype=np.double)
-
+#####################################################################
+# Generate a composite image for visualization
+# ============================================
+#
+# For visualization, we generate a composite image with the same number of rows
+# as the source image but with ``cols + pyramid[1].shape[1]`` columns. We then
+# have space to stack all of the dowsampled images to the right of the
+# original.
+#
+# Note: The sum of the number of rows in all dowsampled images in the pyramid
+# may sometimes exceed the original image size in cases when image.shape[0] is
+# not a power of two. We expand the number of rows in the composite slightly as
+# necessary to account for this. Expansion beyond the number of rows in the
+# original will also be necessary to cover cases where downscale < 2.
+
+# determine the total number of rows and columns for the composite
+composite_rows = max(rows, sum(p.shape[0] for p in pyramid[1:]))
+composite_cols = cols + pyramid[1].shape[1]
+composite_image = np.zeros((composite_rows, composite_cols, 3),
+ dtype=np.double)
+
+# store the original to the left
composite_image[:rows, :cols, :] = pyramid[0]
+# stack all downsampled images in a column to the right of the original
i_row = 0
for p in pyramid[1:]:
n_rows, n_cols = p.shape[:2]
| {"golden_diff": "diff --git a/doc/examples/transform/plot_pyramid.py b/doc/examples/transform/plot_pyramid.py\n--- a/doc/examples/transform/plot_pyramid.py\n+++ b/doc/examples/transform/plot_pyramid.py\n@@ -9,6 +9,8 @@\n detection.\n \n \"\"\"\n+import math\n+\n import numpy as np\n import matplotlib.pyplot as plt\n \n@@ -20,10 +22,31 @@\n rows, cols, dim = image.shape\n pyramid = tuple(pyramid_gaussian(image, downscale=2, channel_axis=-1))\n \n-composite_image = np.zeros((rows, cols + cols // 2, 3), dtype=np.double)\n-\n+#####################################################################\n+# Generate a composite image for visualization\n+# ============================================\n+#\n+# For visualization, we generate a composite image with the same number of rows\n+# as the source image but with ``cols + pyramid[1].shape[1]`` columns. We then\n+# have space to stack all of the dowsampled images to the right of the\n+# original.\n+#\n+# Note: The sum of the number of rows in all dowsampled images in the pyramid\n+# may sometimes exceed the original image size in cases when image.shape[0] is\n+# not a power of two. We expand the number of rows in the composite slightly as\n+# necessary to account for this. Expansion beyond the number of rows in the\n+# original will also be necessary to cover cases where downscale < 2.\n+\n+# determine the total number of rows and columns for the composite\n+composite_rows = max(rows, sum(p.shape[0] for p in pyramid[1:]))\n+composite_cols = cols + pyramid[1].shape[1]\n+composite_image = np.zeros((composite_rows, composite_cols, 3),\n+ dtype=np.double)\n+\n+# store the original to the left\n composite_image[:rows, :cols, :] = pyramid[0]\n \n+# stack all downsampled images in a column to the right of the original\n i_row = 0\n for p in pyramid[1:]:\n n_rows, n_cols = p.shape[:2]\n", "issue": "Build image pyramids not always working with other images\n## Description\r\nUsing the *[Build image pyramids](https://scikit-image.org/docs/dev/auto_examples/transform/plot_pyramid.html)* example with a random image is not always working.\r\n\r\n## Way to reproduce\r\n### hand.jpg\r\n\r\n```python\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\n\r\nfrom skimage import data\r\nfrom skimage.transform import pyramid_gaussian\r\n\r\nimport imageio as io\r\n\r\nimage = io.imread('hand.jpg') # data.astronaut()\r\nrows, cols, dim = image.shape\r\npyramid = tuple(pyramid_gaussian(image, downscale=2, multichannel=True))\r\n\r\ncomposite_image = np.zeros((rows, cols + cols // 2, 3), dtype=np.double)\r\n\r\ncomposite_image[:rows, :cols, :] = pyramid[0]\r\n\r\ni_row = 0\r\nfor p in pyramid[1:]:\r\n n_rows, n_cols = p.shape[:2]\r\n composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p\r\n i_row += n_rows\r\n\r\nfig, ax = plt.subplots()\r\nax.imshow(composite_image)\r\nplt.show()\r\n```\r\n\r\n\r\n## Version information\r\n```python\r\n3.7.4 (tags/v3.7.4:e09359112e, Jul 8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]\r\nWindows-10-10.0.18362-SP0\r\nscikit-image version: 0.16.1\r\nnumpy version: 1.17.2\r\n```\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"D:\\Vincent\\Bureau\\Patern recongnition and image analysis\\Patern recognition and patern analysis\\LAB_1\\plot_pyramid.py\", line 44, in <module>\r\n composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p\r\nValueError: could not broadcast input array from shape (2,2,3) into shape (1,2,3)\r\n```\r\n## Possible solution\r\nI was able to make it works for the same RGB image but 
this code is not adapted for BW and RGBA.\r\n\r\n```python\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\n\r\nfrom skimage import data\r\nfrom skimage.transform import pyramid_gaussian\r\nimport imageio as io\r\n\r\nimage = io.imread('hand.jpg') # data.astronaut()\r\n\r\nrows, cols, dim = image.shape\r\npyramid = tuple(pyramid_gaussian(image, downscale=2, multichannel=True))\r\n\r\ncomposite_image = np.zeros((rows, cols + cols // 2, dim), dtype=np.double)\r\n\r\ncomposite_image[:rows, :cols, :] = pyramid[0]\r\n\r\ni_row = 0\r\nfor p in pyramid[1:]:\r\n n_rows, n_cols = p.shape[:2]\r\n # Check the dimension before assignement\r\n if(composite_image[i_row:i_row + n_rows, cols:cols + n_cols].shape==p.shape):\r\n composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p\r\n i_row += n_rows\r\n else:\r\n break\r\n \r\nfig, ax = plt.subplots()\r\nax.imshow(composite_image)\r\nplt.show()\r\n```\r\n### Result\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\n====================\nBuild image pyramids\n====================\n\nThe ``pyramid_gaussian`` function takes an image and yields successive images\nshrunk by a constant scale factor. Image pyramids are often used, e.g., to\nimplement algorithms for denoising, texture discrimination, and scale-invariant\ndetection.\n\n\"\"\"\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom skimage import data\nfrom skimage.transform import pyramid_gaussian\n\n\nimage = data.astronaut()\nrows, cols, dim = image.shape\npyramid = tuple(pyramid_gaussian(image, downscale=2, channel_axis=-1))\n\ncomposite_image = np.zeros((rows, cols + cols // 2, 3), dtype=np.double)\n\ncomposite_image[:rows, :cols, :] = pyramid[0]\n\ni_row = 0\nfor p in pyramid[1:]:\n n_rows, n_cols = p.shape[:2]\n composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p\n i_row += n_rows\n\nfig, ax = plt.subplots()\nax.imshow(composite_image)\nplt.show()\n", "path": "doc/examples/transform/plot_pyramid.py"}], "after_files": [{"content": "\"\"\"\n====================\nBuild image pyramids\n====================\n\nThe ``pyramid_gaussian`` function takes an image and yields successive images\nshrunk by a constant scale factor. Image pyramids are often used, e.g., to\nimplement algorithms for denoising, texture discrimination, and scale-invariant\ndetection.\n\n\"\"\"\nimport math\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom skimage import data\nfrom skimage.transform import pyramid_gaussian\n\n\nimage = data.astronaut()\nrows, cols, dim = image.shape\npyramid = tuple(pyramid_gaussian(image, downscale=2, channel_axis=-1))\n\n#####################################################################\n# Generate a composite image for visualization\n# ============================================\n#\n# For visualization, we generate a composite image with the same number of rows\n# as the source image but with ``cols + pyramid[1].shape[1]`` columns. We then\n# have space to stack all of the dowsampled images to the right of the\n# original.\n#\n# Note: The sum of the number of rows in all dowsampled images in the pyramid\n# may sometimes exceed the original image size in cases when image.shape[0] is\n# not a power of two. We expand the number of rows in the composite slightly as\n# necessary to account for this. 
Expansion beyond the number of rows in the\n# original will also be necessary to cover cases where downscale < 2.\n\n# determine the total number of rows and columns for the composite\ncomposite_rows = max(rows, sum(p.shape[0] for p in pyramid[1:]))\ncomposite_cols = cols + pyramid[1].shape[1]\ncomposite_image = np.zeros((composite_rows, composite_cols, 3),\n dtype=np.double)\n\n# store the original to the left\ncomposite_image[:rows, :cols, :] = pyramid[0]\n\n# stack all downsampled images in a column to the right of the original\ni_row = 0\nfor p in pyramid[1:]:\n n_rows, n_cols = p.shape[:2]\n composite_image[i_row:i_row + n_rows, cols:cols + n_cols] = p\n i_row += n_rows\n\nfig, ax = plt.subplots()\nax.imshow(composite_image)\nplt.show()\n", "path": "doc/examples/transform/plot_pyramid.py"}]} | 1,404 | 450 |
gh_patches_debug_21479 | rasdani/github-patches | git_diff | ansible__molecule-2063 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Extra vars are not passed to the playbook anymore with converge.
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Please also check https://molecule.readthedocs.io/en/latest/faq.html --->
<!--- Please use https://groups.google.com/forum/#!forum/molecule-users for usage questions -->
# Issue Type
- Bug report
# Molecule and Ansible details
```
ansible 2.8.0
config file = None
configured module search path = ['/home/olcla/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0]
molecule (downgraded to 2.20 to fix but problem related to master branch)
```
Molecule installation method (one of):
- pip from git master
Ansible installation method (one of):
- pip
# Desired Behavior
When running `molecule converge -s my_scenario -- -e somevar=value` I expect molecule to pass the extra vars when running playbook.yml
# Actual Behaviour
I was trying to fix deprecations related to ansible 2.8 and had to install molecule from git master to overcome problems with testinfra. When running the above command with the latest molecule master, the extra vars are not passed to the playbook anymore. This is working as expected when downgrading to molecule 2.20.1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `molecule/command/converge.py`
Content:
```
1 # Copyright (c) 2015-2018 Cisco Systems, Inc.
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to
5 # deal in the Software without restriction, including without limitation the
6 # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
7 # sell copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
18 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
19 # DEALINGS IN THE SOFTWARE.
20
21 import click
22
23 from molecule import logger
24 from molecule.command import base
25
26 LOG = logger.get_logger(__name__)
27
28
29 class Converge(base.Base):
30 """
31 .. program:: molecule converge
32
33 .. option:: molecule converge
34
35 Target the default scenario.
36
37 .. program:: molecule converge --scenario-name foo
38
39 .. option:: molecule converge --scenario-name foo
40
41 Targeting a specific scenario.
42
43 .. program:: molecule converge -- -vvv --tags foo,bar
44
45 .. option:: molecule converge -- -vvv --tags foo,bar
46
47 Providing additional command line arguments to the `ansible-playbook`
48 command. Use this option with care, as there is no sanitation or
49 validation of input. Options passed on the CLI override options
50 provided in provisioner's `options` section of `molecule.yml`.
51
52 .. program:: molecule --debug converge
53
54 .. option:: molecule --debug converge
55
56 Executing with `debug`.
57
58 .. program:: molecule --base-config base.yml converge
59
60 .. option:: molecule --base-config base.yml converge
61
62 Executing with a `base-config`.
63
64 .. program:: molecule --env-file foo.yml converge
65
66 .. option:: molecule --env-file foo.yml converge
67
68 Load an env file to read variables from when rendering
69 molecule.yml.
70 """
71
72 def execute(self):
73 """
74 Execute the actions necessary to perform a `molecule converge` and
75 returns None.
76
77 :return: None
78 """
79 self.print_info()
80 self._config.provisioner.converge()
81 self._config.state.change_state('converged', True)
82
83
84 @click.command()
85 @click.pass_context
86 @click.option(
87 '--scenario-name',
88 '-s',
89 default=base.MOLECULE_DEFAULT_SCENARIO_NAME,
90 help='Name of the scenario to target. ({})'.format(
91 base.MOLECULE_DEFAULT_SCENARIO_NAME))
92 @click.argument('ansible_args', nargs=-1, type=click.UNPROCESSED)
93 def converge(ctx, scenario_name, ansible_args): # pragma: no cover
94 """
95 Use the provisioner to configure instances (dependency, create, prepare
96 converge).
97 """
98
99 args = ctx.obj.get('args')
100 subcommand = base._get_subcommand(__name__)
101 command_args = {
102 'subcommand': subcommand,
103 }
104
105 base.execute_cmdline_scenarios(scenario_name, args, command_args)
106
```
Path: `molecule/command/base.py`
Content:
```
1 # Copyright (c) 2015-2018 Cisco Systems, Inc.
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to
5 # deal in the Software without restriction, including without limitation the
6 # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
7 # sell copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
18 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
19 # DEALINGS IN THE SOFTWARE.
20
21 import abc
22 import collections
23 import glob
24 import os
25
26 import six
27
28 import molecule.command
29 import molecule.scenarios
30 from molecule import config
31 from molecule import logger
32 from molecule import util
33
34 LOG = logger.get_logger(__name__)
35 MOLECULE_GLOB = os.environ.get('MOLECULE_GLOB', 'molecule/*/molecule.yml')
36 MOLECULE_DEFAULT_SCENARIO_NAME = 'default'
37
38
39 @six.add_metaclass(abc.ABCMeta)
40 class Base(object):
41 """
42 An abstract base class used to define the command interface.
43 """
44
45 def __init__(self, c):
46 """
47 Base initializer for all :ref:`Command` classes.
48
49 :param c: An instance of a Molecule config.
50 :returns: None
51 """
52 self._config = c
53 self._setup()
54
55 @abc.abstractmethod
56 def execute(self): # pragma: no cover
57 pass
58
59 def print_info(self):
60 msg = "Scenario: '{}'".format(self._config.scenario.name)
61 LOG.info(msg)
62 msg = "Action: '{}'".format(util.underscore(self.__class__.__name__))
63 LOG.info(msg)
64
65 def _setup(self):
66 """
67 Prepare Molecule's provisioner and returns None.
68
69 :return: None
70 """
71 self._config.provisioner.write_config()
72 self._config.provisioner.manage_inventory()
73
74
75 def execute_cmdline_scenarios(scenario_name, args, command_args):
76 """
77 Execute scenario sequences based on parsed command-line arguments.
78
79 This is useful for subcommands that run scenario sequences, which
80 excludes subcommands such as ``list``, ``login``, and ``matrix``.
81
82 ``args`` and ``command_args`` are combined using :func:`get_configs`
83 to generate the scenario(s) configuration.
84
85 :param scenario_name: Name of scenario to run, or ``None`` to run all.
86 :param args: ``args`` dict from ``click`` command context
87 :param command_args: dict of command argumentss, including the target
88 subcommand to execute
89 :returns: None
90
91 """
92 scenarios = molecule.scenarios.Scenarios(
93 get_configs(args, command_args), scenario_name)
94 scenarios.print_matrix()
95 for scenario in scenarios:
96 try:
97 execute_scenario(scenario)
98 except SystemExit:
99 # if the command has a 'destroy' arg, like test does,
100 # handle that behavior here.
101 if command_args.get('destroy') == 'always':
102 msg = ('An error occurred during the {} sequence action: '
103 "'{}'. Cleaning up.").format(scenario.config.subcommand,
104 scenario.config.action)
105 LOG.warning(msg)
106 execute_subcommand(scenario.config, 'cleanup')
107 execute_subcommand(scenario.config, 'destroy')
108 # always prune ephemeral dir if destroying on failure
109 scenario.prune()
110 util.sysexit()
111 else:
112 raise
113
114
115 def execute_subcommand(config, subcommand):
116 command_module = getattr(molecule.command, subcommand)
117 command = getattr(command_module, util.camelize(subcommand))
118 # knowledge of the current action is used by some provisioners
119 # to ensure they behave correctly during certain sequence steps,
120 # particulary the setting of ansible options in create/destroy,
121 # and is also used for reporting in execute_cmdline_scenarios
122 config.action = subcommand
123
124 return command(config).execute()
125
126
127 def execute_scenario(scenario):
128 """
129 Execute each command in the given scenario's configured sequence.
130
131 :param scenario: The scenario to execute.
132 :returns: None
133
134 """
135
136 for action in scenario.sequence:
137 execute_subcommand(scenario.config, action)
138
139 # pruning only if a 'destroy' step was in the sequence allows for normal
140 # debugging by manually stepping through a scenario sequence
141 if 'destroy' in scenario.sequence:
142 scenario.prune()
143
144
145 def get_configs(args, command_args, ansible_args=()):
146 """
147 Glob the current directory for Molecule config files, instantiate config
148 objects, and returns a list.
149
150 :param args: A dict of options, arguments and commands from the CLI.
151 :param command_args: A dict of options passed to the subcommand from
152 the CLI.
153 :param ansible_args: An optional tuple of arguments provided to the
154 `ansible-playbook` command.
155 :return: list
156 """
157 configs = [
158 config.Config(
159 molecule_file=util.abs_path(c),
160 args=args,
161 command_args=command_args,
162 ansible_args=ansible_args,
163 ) for c in glob.glob(MOLECULE_GLOB)
164 ]
165 _verify_configs(configs)
166
167 return configs
168
169
170 def _verify_configs(configs):
171 """
172 Verify a Molecule config was found and returns None.
173
174 :param configs: A list containing absolute paths to Molecule config files.
175 :return: None
176 """
177 if configs:
178 scenario_names = [c.scenario.name for c in configs]
179 for scenario_name, n in collections.Counter(scenario_names).items():
180 if n > 1:
181 msg = ("Duplicate scenario name '{}' found. "
182 'Exiting.').format(scenario_name)
183 util.sysexit_with_message(msg)
184
185 else:
186 msg = "'{}' glob failed. Exiting.".format(MOLECULE_GLOB)
187 util.sysexit_with_message(msg)
188
189
190 def _get_subcommand(string):
191 return string.split('.')[-1]
192
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/molecule/command/base.py b/molecule/command/base.py
--- a/molecule/command/base.py
+++ b/molecule/command/base.py
@@ -72,7 +72,10 @@
self._config.provisioner.manage_inventory()
-def execute_cmdline_scenarios(scenario_name, args, command_args):
+def execute_cmdline_scenarios(scenario_name,
+ args,
+ command_args,
+ ansible_args=()):
"""
Execute scenario sequences based on parsed command-line arguments.
@@ -90,7 +93,7 @@
"""
scenarios = molecule.scenarios.Scenarios(
- get_configs(args, command_args), scenario_name)
+ get_configs(args, command_args, ansible_args), scenario_name)
scenarios.print_matrix()
for scenario in scenarios:
try:
diff --git a/molecule/command/converge.py b/molecule/command/converge.py
--- a/molecule/command/converge.py
+++ b/molecule/command/converge.py
@@ -102,4 +102,5 @@
'subcommand': subcommand,
}
- base.execute_cmdline_scenarios(scenario_name, args, command_args)
+ base.execute_cmdline_scenarios(scenario_name, args, command_args,
+ ansible_args)
| {"golden_diff": "diff --git a/molecule/command/base.py b/molecule/command/base.py\n--- a/molecule/command/base.py\n+++ b/molecule/command/base.py\n@@ -72,7 +72,10 @@\n self._config.provisioner.manage_inventory()\n \n \n-def execute_cmdline_scenarios(scenario_name, args, command_args):\n+def execute_cmdline_scenarios(scenario_name,\n+ args,\n+ command_args,\n+ ansible_args=()):\n \"\"\"\n Execute scenario sequences based on parsed command-line arguments.\n \n@@ -90,7 +93,7 @@\n \n \"\"\"\n scenarios = molecule.scenarios.Scenarios(\n- get_configs(args, command_args), scenario_name)\n+ get_configs(args, command_args, ansible_args), scenario_name)\n scenarios.print_matrix()\n for scenario in scenarios:\n try:\ndiff --git a/molecule/command/converge.py b/molecule/command/converge.py\n--- a/molecule/command/converge.py\n+++ b/molecule/command/converge.py\n@@ -102,4 +102,5 @@\n 'subcommand': subcommand,\n }\n \n- base.execute_cmdline_scenarios(scenario_name, args, command_args)\n+ base.execute_cmdline_scenarios(scenario_name, args, command_args,\n+ ansible_args)\n", "issue": "Extra vars are not passed to the playbook anymore with converge.\n<!--- Verify first that your issue is not already reported on GitHub -->\r\n<!--- Please also check https://molecule.readthedocs.io/en/latest/faq.html --->\r\n<!--- Please use https://groups.google.com/forum/#!forum/molecule-users for usage questions -->\r\n\r\n# Issue Type\r\n\r\n- Bug report\r\n\r\n# Molecule and Ansible details\r\n\r\n```\r\nansible 2.8.0\r\n config file = None\r\n configured module search path = ['/home/olcla/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible\r\n executable location = /usr/local/bin/ansible\r\n python version = 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0]\r\n\r\nmolecule (downgraded to 2.20 to fix but problem related to master branch)\r\n```\r\n\r\nMolecule installation method (one of):\r\n\r\n- pip from git master\r\n\r\nAnsible installation method (one of):\r\n\r\n- pip\r\n\r\n# Desired Behavior\r\n\r\nWhen running `molecule converge -s my_scenario -- -e somevar=value` I expect molecule to pass the extra vars when running playbook.yml\r\n\r\n# Actual Behaviour\r\n\r\nI was trying to fix deprecations related to ansible 2.8 and had to install molecule from git master to overcome problems with testinfra. When running the above command with the latest molecule master, the extra vars are not passed to the playbook anymore. This is working as expected when downgrading to molecule 2.20.1 \r\n\r\n\n", "before_files": [{"content": "# Copyright (c) 2015-2018 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport click\n\nfrom molecule import logger\nfrom molecule.command import base\n\nLOG = logger.get_logger(__name__)\n\n\nclass Converge(base.Base):\n \"\"\"\n .. program:: molecule converge\n\n .. option:: molecule converge\n\n Target the default scenario.\n\n .. program:: molecule converge --scenario-name foo\n\n .. option:: molecule converge --scenario-name foo\n\n Targeting a specific scenario.\n\n .. program:: molecule converge -- -vvv --tags foo,bar\n\n .. option:: molecule converge -- -vvv --tags foo,bar\n\n Providing additional command line arguments to the `ansible-playbook`\n command. Use this option with care, as there is no sanitation or\n validation of input. Options passed on the CLI override options\n provided in provisioner's `options` section of `molecule.yml`.\n\n .. program:: molecule --debug converge\n\n .. option:: molecule --debug converge\n\n Executing with `debug`.\n\n .. program:: molecule --base-config base.yml converge\n\n .. option:: molecule --base-config base.yml converge\n\n Executing with a `base-config`.\n\n .. program:: molecule --env-file foo.yml converge\n\n .. option:: molecule --env-file foo.yml converge\n\n Load an env file to read variables from when rendering\n molecule.yml.\n \"\"\"\n\n def execute(self):\n \"\"\"\n Execute the actions necessary to perform a `molecule converge` and\n returns None.\n\n :return: None\n \"\"\"\n self.print_info()\n self._config.provisioner.converge()\n self._config.state.change_state('converged', True)\n\n\[email protected]()\[email protected]_context\[email protected](\n '--scenario-name',\n '-s',\n default=base.MOLECULE_DEFAULT_SCENARIO_NAME,\n help='Name of the scenario to target. ({})'.format(\n base.MOLECULE_DEFAULT_SCENARIO_NAME))\[email protected]('ansible_args', nargs=-1, type=click.UNPROCESSED)\ndef converge(ctx, scenario_name, ansible_args): # pragma: no cover\n \"\"\"\n Use the provisioner to configure instances (dependency, create, prepare\n converge).\n \"\"\"\n\n args = ctx.obj.get('args')\n subcommand = base._get_subcommand(__name__)\n command_args = {\n 'subcommand': subcommand,\n }\n\n base.execute_cmdline_scenarios(scenario_name, args, command_args)\n", "path": "molecule/command/converge.py"}, {"content": "# Copyright (c) 2015-2018 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport abc\nimport collections\nimport glob\nimport os\n\nimport six\n\nimport molecule.command\nimport molecule.scenarios\nfrom molecule import config\nfrom molecule import logger\nfrom molecule import util\n\nLOG = logger.get_logger(__name__)\nMOLECULE_GLOB = os.environ.get('MOLECULE_GLOB', 'molecule/*/molecule.yml')\nMOLECULE_DEFAULT_SCENARIO_NAME = 'default'\n\n\[email protected]_metaclass(abc.ABCMeta)\nclass Base(object):\n \"\"\"\n An abstract base class used to define the command interface.\n \"\"\"\n\n def __init__(self, c):\n \"\"\"\n Base initializer for all :ref:`Command` classes.\n\n :param c: An instance of a Molecule config.\n :returns: None\n \"\"\"\n self._config = c\n self._setup()\n\n @abc.abstractmethod\n def execute(self): # pragma: no cover\n pass\n\n def print_info(self):\n msg = \"Scenario: '{}'\".format(self._config.scenario.name)\n LOG.info(msg)\n msg = \"Action: '{}'\".format(util.underscore(self.__class__.__name__))\n LOG.info(msg)\n\n def _setup(self):\n \"\"\"\n Prepare Molecule's provisioner and returns None.\n\n :return: None\n \"\"\"\n self._config.provisioner.write_config()\n self._config.provisioner.manage_inventory()\n\n\ndef execute_cmdline_scenarios(scenario_name, args, command_args):\n \"\"\"\n Execute scenario sequences based on parsed command-line arguments.\n\n This is useful for subcommands that run scenario sequences, which\n excludes subcommands such as ``list``, ``login``, and ``matrix``.\n\n ``args`` and ``command_args`` are combined using :func:`get_configs`\n to generate the scenario(s) configuration.\n\n :param scenario_name: Name of scenario to run, or ``None`` to run all.\n :param args: ``args`` dict from ``click`` command context\n :param command_args: dict of command argumentss, including the target\n subcommand to execute\n :returns: None\n\n \"\"\"\n scenarios = molecule.scenarios.Scenarios(\n get_configs(args, command_args), scenario_name)\n scenarios.print_matrix()\n for scenario in scenarios:\n try:\n execute_scenario(scenario)\n except SystemExit:\n # if the command has a 'destroy' arg, like test does,\n # handle that behavior here.\n if command_args.get('destroy') == 'always':\n msg = ('An error occurred during the {} sequence action: '\n \"'{}'. 
Cleaning up.\").format(scenario.config.subcommand,\n scenario.config.action)\n LOG.warning(msg)\n execute_subcommand(scenario.config, 'cleanup')\n execute_subcommand(scenario.config, 'destroy')\n # always prune ephemeral dir if destroying on failure\n scenario.prune()\n util.sysexit()\n else:\n raise\n\n\ndef execute_subcommand(config, subcommand):\n command_module = getattr(molecule.command, subcommand)\n command = getattr(command_module, util.camelize(subcommand))\n # knowledge of the current action is used by some provisioners\n # to ensure they behave correctly during certain sequence steps,\n # particulary the setting of ansible options in create/destroy,\n # and is also used for reporting in execute_cmdline_scenarios\n config.action = subcommand\n\n return command(config).execute()\n\n\ndef execute_scenario(scenario):\n \"\"\"\n Execute each command in the given scenario's configured sequence.\n\n :param scenario: The scenario to execute.\n :returns: None\n\n \"\"\"\n\n for action in scenario.sequence:\n execute_subcommand(scenario.config, action)\n\n # pruning only if a 'destroy' step was in the sequence allows for normal\n # debugging by manually stepping through a scenario sequence\n if 'destroy' in scenario.sequence:\n scenario.prune()\n\n\ndef get_configs(args, command_args, ansible_args=()):\n \"\"\"\n Glob the current directory for Molecule config files, instantiate config\n objects, and returns a list.\n\n :param args: A dict of options, arguments and commands from the CLI.\n :param command_args: A dict of options passed to the subcommand from\n the CLI.\n :param ansible_args: An optional tuple of arguments provided to the\n `ansible-playbook` command.\n :return: list\n \"\"\"\n configs = [\n config.Config(\n molecule_file=util.abs_path(c),\n args=args,\n command_args=command_args,\n ansible_args=ansible_args,\n ) for c in glob.glob(MOLECULE_GLOB)\n ]\n _verify_configs(configs)\n\n return configs\n\n\ndef _verify_configs(configs):\n \"\"\"\n Verify a Molecule config was found and returns None.\n\n :param configs: A list containing absolute paths to Molecule config files.\n :return: None\n \"\"\"\n if configs:\n scenario_names = [c.scenario.name for c in configs]\n for scenario_name, n in collections.Counter(scenario_names).items():\n if n > 1:\n msg = (\"Duplicate scenario name '{}' found. \"\n 'Exiting.').format(scenario_name)\n util.sysexit_with_message(msg)\n\n else:\n msg = \"'{}' glob failed. Exiting.\".format(MOLECULE_GLOB)\n util.sysexit_with_message(msg)\n\n\ndef _get_subcommand(string):\n return string.split('.')[-1]\n", "path": "molecule/command/base.py"}], "after_files": [{"content": "# Copyright (c) 2015-2018 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport click\n\nfrom molecule import logger\nfrom molecule.command import base\n\nLOG = logger.get_logger(__name__)\n\n\nclass Converge(base.Base):\n \"\"\"\n .. program:: molecule converge\n\n .. option:: molecule converge\n\n Target the default scenario.\n\n .. program:: molecule converge --scenario-name foo\n\n .. option:: molecule converge --scenario-name foo\n\n Targeting a specific scenario.\n\n .. program:: molecule converge -- -vvv --tags foo,bar\n\n .. option:: molecule converge -- -vvv --tags foo,bar\n\n Providing additional command line arguments to the `ansible-playbook`\n command. Use this option with care, as there is no sanitation or\n validation of input. Options passed on the CLI override options\n provided in provisioner's `options` section of `molecule.yml`.\n\n .. program:: molecule --debug converge\n\n .. option:: molecule --debug converge\n\n Executing with `debug`.\n\n .. program:: molecule --base-config base.yml converge\n\n .. option:: molecule --base-config base.yml converge\n\n Executing with a `base-config`.\n\n .. program:: molecule --env-file foo.yml converge\n\n .. option:: molecule --env-file foo.yml converge\n\n Load an env file to read variables from when rendering\n molecule.yml.\n \"\"\"\n\n def execute(self):\n \"\"\"\n Execute the actions necessary to perform a `molecule converge` and\n returns None.\n\n :return: None\n \"\"\"\n self.print_info()\n self._config.provisioner.converge()\n self._config.state.change_state('converged', True)\n\n\[email protected]()\[email protected]_context\[email protected](\n '--scenario-name',\n '-s',\n default=base.MOLECULE_DEFAULT_SCENARIO_NAME,\n help='Name of the scenario to target. ({})'.format(\n base.MOLECULE_DEFAULT_SCENARIO_NAME))\[email protected]('ansible_args', nargs=-1, type=click.UNPROCESSED)\ndef converge(ctx, scenario_name, ansible_args): # pragma: no cover\n \"\"\"\n Use the provisioner to configure instances (dependency, create, prepare\n converge).\n \"\"\"\n\n args = ctx.obj.get('args')\n subcommand = base._get_subcommand(__name__)\n command_args = {\n 'subcommand': subcommand,\n }\n\n base.execute_cmdline_scenarios(scenario_name, args, command_args,\n ansible_args)\n", "path": "molecule/command/converge.py"}, {"content": "# Copyright (c) 2015-2018 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport abc\nimport collections\nimport glob\nimport os\n\nimport six\n\nimport molecule.command\nimport molecule.scenarios\nfrom molecule import config\nfrom molecule import logger\nfrom molecule import util\n\nLOG = logger.get_logger(__name__)\nMOLECULE_GLOB = os.environ.get('MOLECULE_GLOB', 'molecule/*/molecule.yml')\nMOLECULE_DEFAULT_SCENARIO_NAME = 'default'\n\n\[email protected]_metaclass(abc.ABCMeta)\nclass Base(object):\n \"\"\"\n An abstract base class used to define the command interface.\n \"\"\"\n\n def __init__(self, c):\n \"\"\"\n Base initializer for all :ref:`Command` classes.\n\n :param c: An instance of a Molecule config.\n :returns: None\n \"\"\"\n self._config = c\n self._setup()\n\n @abc.abstractmethod\n def execute(self): # pragma: no cover\n pass\n\n def print_info(self):\n msg = \"Scenario: '{}'\".format(self._config.scenario.name)\n LOG.info(msg)\n msg = \"Action: '{}'\".format(util.underscore(self.__class__.__name__))\n LOG.info(msg)\n\n def _setup(self):\n \"\"\"\n Prepare Molecule's provisioner and returns None.\n\n :return: None\n \"\"\"\n self._config.provisioner.write_config()\n self._config.provisioner.manage_inventory()\n\n\ndef execute_cmdline_scenarios(scenario_name,\n args,\n command_args,\n ansible_args=()):\n \"\"\"\n Execute scenario sequences based on parsed command-line arguments.\n\n This is useful for subcommands that run scenario sequences, which\n excludes subcommands such as ``list``, ``login``, and ``matrix``.\n\n ``args`` and ``command_args`` are combined using :func:`get_configs`\n to generate the scenario(s) configuration.\n\n :param scenario_name: Name of scenario to run, or ``None`` to run all.\n :param args: ``args`` dict from ``click`` command context\n :param command_args: dict of command argumentss, including the target\n subcommand to execute\n :returns: None\n\n \"\"\"\n scenarios = molecule.scenarios.Scenarios(\n get_configs(args, command_args, ansible_args), scenario_name)\n scenarios.print_matrix()\n for scenario in scenarios:\n try:\n execute_scenario(scenario)\n except SystemExit:\n # if the command has a 'destroy' arg, like test does,\n # handle that behavior here.\n if command_args.get('destroy') == 'always':\n msg = ('An error occurred during the {} sequence action: '\n \"'{}'. 
Cleaning up.\").format(scenario.config.subcommand,\n scenario.config.action)\n LOG.warning(msg)\n execute_subcommand(scenario.config, 'cleanup')\n execute_subcommand(scenario.config, 'destroy')\n # always prune ephemeral dir if destroying on failure\n scenario.prune()\n util.sysexit()\n else:\n raise\n\n\ndef execute_subcommand(config, subcommand):\n command_module = getattr(molecule.command, subcommand)\n command = getattr(command_module, util.camelize(subcommand))\n # knowledge of the current action is used by some provisioners\n # to ensure they behave correctly during certain sequence steps,\n # particulary the setting of ansible options in create/destroy,\n # and is also used for reporting in execute_cmdline_scenarios\n config.action = subcommand\n\n return command(config).execute()\n\n\ndef execute_scenario(scenario):\n \"\"\"\n Execute each command in the given scenario's configured sequence.\n\n :param scenario: The scenario to execute.\n :returns: None\n\n \"\"\"\n\n for action in scenario.sequence:\n execute_subcommand(scenario.config, action)\n\n # pruning only if a 'destroy' step was in the sequence allows for normal\n # debugging by manually stepping through a scenario sequence\n if 'destroy' in scenario.sequence:\n scenario.prune()\n\n\ndef get_configs(args, command_args, ansible_args=()):\n \"\"\"\n Glob the current directory for Molecule config files, instantiate config\n objects, and returns a list.\n\n :param args: A dict of options, arguments and commands from the CLI.\n :param command_args: A dict of options passed to the subcommand from\n the CLI.\n :param ansible_args: An optional tuple of arguments provided to the\n `ansible-playbook` command.\n :return: list\n \"\"\"\n configs = [\n config.Config(\n molecule_file=util.abs_path(c),\n args=args,\n command_args=command_args,\n ansible_args=ansible_args,\n ) for c in glob.glob(MOLECULE_GLOB)\n ]\n _verify_configs(configs)\n\n return configs\n\n\ndef _verify_configs(configs):\n \"\"\"\n Verify a Molecule config was found and returns None.\n\n :param configs: A list containing absolute paths to Molecule config files.\n :return: None\n \"\"\"\n if configs:\n scenario_names = [c.scenario.name for c in configs]\n for scenario_name, n in collections.Counter(scenario_names).items():\n if n > 1:\n msg = (\"Duplicate scenario name '{}' found. \"\n 'Exiting.').format(scenario_name)\n util.sysexit_with_message(msg)\n\n else:\n msg = \"'{}' glob failed. Exiting.\".format(MOLECULE_GLOB)\n util.sysexit_with_message(msg)\n\n\ndef _get_subcommand(string):\n return string.split('.')[-1]\n", "path": "molecule/command/base.py"}]} | 3,530 | 280 |
gh_patches_debug_25326 | rasdani/github-patches | git_diff | mlflow__mlflow-12224 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] uc_volume_dataset_source only validates file paths, not folder paths
### Issues Policy acknowledgement
- [X] I have read and agree to submit bug reports in accordance with the [issues policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md)
### Where did you encounter this bug?
Local machine
### Willingness to contribute
Yes. I would be willing to contribute a fix for this bug with guidance from the MLflow community.
### MLflow version
mlflow-2.12.2
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**:
- **Python version**:
- **yarn version, if running the dev UI**:
### Describe the problem
https://github.com/mlflow/mlflow/blob/72df4a2a0f44c52179dfbdc7d47ad10f58ceec39/mlflow/data/uc_volume_dataset_source.py#L28 doesn't verify folder paths, only file paths
### Tracking information
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```shell
REPLACE_ME
```
### Code to reproduce issue
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```
REPLACE_ME
```
### Stack trace
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```
REPLACE_ME
```
### Other info / logs
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```
REPLACE_ME
```
### What component(s) does this bug affect?
- [ ] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [ ] `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations
- [ ] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- [ ] `area/server-infra`: MLflow Tracking server backend
- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging
### What interface(s) does this bug affect?
- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
### What language(s) does this bug affect?
- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients
- [ ] `language/new`: Proposals for new client languages
### What integration(s) does this bug affect?
- [ ] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
- [ ] `integrations/databricks`: Databricks integrations
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mlflow/data/uc_volume_dataset_source.py`
Content:
```
1 import logging
2 from typing import Any, Dict
3
4 from mlflow.data.dataset_source import DatasetSource
5 from mlflow.exceptions import MlflowException
6
7 _logger = logging.getLogger(__name__)
8
9
10 class UCVolumeDatasetSource(DatasetSource):
11 """Represents the source of a dataset stored in Databricks Unified Catalog Volume.
12
13 If you are using a delta table, please use `mlflow.data.delta_dataset_source.DeltaDatasetSource`
14 instead. This `UCVolumeDatasetSource` does not provide loading function, and is mostly useful
15 when you are logging a `mlflow.data.meta_dataset.MetaDataset` to MLflow, i.e., you want
16 to log the source of dataset to MLflow without loading the dataset.
17
18 Args:
19 path: the UC path of your data. It should be a valid UC path following the pattern
20 "/Volumes/{catalog}/{schema}/{volume}/{file_path}". For example,
21 "/Volumes/MyCatalog/MySchema/MyVolume/MyFile.json".
22 """
23
24 def __init__(self, path: str):
25 self._verify_uc_path_is_valid(path)
26 self.path = path
27
28 def _verify_uc_path_is_valid(self, path):
29 """Verify if the path exists in Databricks Unified Catalog."""
30 try:
31 from databricks.sdk import WorkspaceClient
32
33 w = WorkspaceClient()
34 except ImportError:
35 _logger.warning(
36 "Cannot verify the path of `UCVolumeDatasetSource` because of missing"
37 "`databricks-sdk`. Please install `databricks-sdk` via "
38 "`pip install -U databricks-sdk`. This does not block creating "
39 "`UCVolumeDatasetSource`, but your `UCVolumeDatasetSource` might be invalid."
40 )
41 return
42 except Exception:
43 _logger.warning(
44 "Cannot verify the path of `UCVolumeDatasetSource` due to a connection failure "
45 "with Databricks workspace. Please run `mlflow.login()` to log in to Databricks. "
46 "This does not block creating `UCVolumeDatasetSource`, but your "
47 "`UCVolumeDatasetSource` might be invalid."
48 )
49 return
50
51 try:
52 w.files.get_metadata(path)
53 except Exception:
54 raise MlflowException(f"{path} does not exist in Databricks Unified Catalog.")
55
56 @staticmethod
57 def _get_source_type() -> str:
58 return "uc_volume"
59
60 @staticmethod
61 def _can_resolve(raw_source: Any):
62 raise NotImplementedError
63
64 @classmethod
65 def _resolve(cls, raw_source: str):
66 raise NotImplementedError
67
68 def to_dict(self) -> Dict[Any, Any]:
69 return {"path": self.path}
70
71 @classmethod
72 def from_dict(cls, source_dict: Dict[Any, Any]) -> "UCVolumeDatasetSource":
73 return cls(**source_dict)
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mlflow/data/uc_volume_dataset_source.py b/mlflow/data/uc_volume_dataset_source.py
--- a/mlflow/data/uc_volume_dataset_source.py
+++ b/mlflow/data/uc_volume_dataset_source.py
@@ -22,10 +22,10 @@
"""
def __init__(self, path: str):
- self._verify_uc_path_is_valid(path)
self.path = path
+ self._verify_uc_path_is_valid()
- def _verify_uc_path_is_valid(self, path):
+ def _verify_uc_path_is_valid(self):
"""Verify if the path exists in Databricks Unified Catalog."""
try:
from databricks.sdk import WorkspaceClient
@@ -49,9 +49,17 @@
return
try:
- w.files.get_metadata(path)
+ # Check if `self.path` points to a valid UC file.
+ w.files.get_metadata(self.path)
except Exception:
- raise MlflowException(f"{path} does not exist in Databricks Unified Catalog.")
+ try:
+ # Check if `self.path` points to a valid UC directory.
+ w.files.get_directory_metadata(self.path)
+ # Append a slash to `self.path` to indicate it's a directory.
+ self.path += "/" if not self.path.endswith("/") else ""
+ except Exception:
+ # Neither file nor directory exists, we throw an exception.
+ raise MlflowException(f"{self.path} does not exist in Databricks Unified Catalog.")
@staticmethod
def _get_source_type() -> str:
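The patched check accepts either a file or a directory path: it first asks the Files API for file metadata and, only if that fails, falls back to directory metadata before raising. A minimal, self-contained sketch of that fallback logic (it stubs the Files API instead of using a real `WorkspaceClient`, so `_FakeFilesAPI` and `verify_uc_path` are illustrative names only):

```python
# Sketch of the file-then-directory fallback; the fake API mimics only the two
# calls the patch relies on (get_metadata / get_directory_metadata).
class _FakeFilesAPI:
    def __init__(self, files, dirs):
        self._files, self._dirs = set(files), set(dirs)

    def get_metadata(self, path):
        if path not in self._files:
            raise RuntimeError(f"{path} is not a file")

    def get_directory_metadata(self, path):
        if path not in self._dirs:
            raise RuntimeError(f"{path} is not a directory")


def verify_uc_path(files_api, path):
    try:
        files_api.get_metadata(path)                 # valid UC file -> keep path as-is
        return path
    except Exception:
        try:
            files_api.get_directory_metadata(path)   # valid UC directory -> trailing slash
            return path if path.endswith("/") else path + "/"
        except Exception:
            raise ValueError(f"{path} does not exist in Databricks Unified Catalog.")


api = _FakeFilesAPI(files={"/Volumes/cat/schema/vol/data.json"},
                    dirs={"/Volumes/cat/schema/vol/raw"})
print(verify_uc_path(api, "/Volumes/cat/schema/vol/data.json"))  # file path unchanged
print(verify_uc_path(api, "/Volumes/cat/schema/vol/raw"))        # directory gains "/"
```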
| {"golden_diff": "diff --git a/mlflow/data/uc_volume_dataset_source.py b/mlflow/data/uc_volume_dataset_source.py\n--- a/mlflow/data/uc_volume_dataset_source.py\n+++ b/mlflow/data/uc_volume_dataset_source.py\n@@ -22,10 +22,10 @@\n \"\"\"\n \n def __init__(self, path: str):\n- self._verify_uc_path_is_valid(path)\n self.path = path\n+ self._verify_uc_path_is_valid()\n \n- def _verify_uc_path_is_valid(self, path):\n+ def _verify_uc_path_is_valid(self):\n \"\"\"Verify if the path exists in Databricks Unified Catalog.\"\"\"\n try:\n from databricks.sdk import WorkspaceClient\n@@ -49,9 +49,17 @@\n return\n \n try:\n- w.files.get_metadata(path)\n+ # Check if `self.path` points to a valid UC file.\n+ w.files.get_metadata(self.path)\n except Exception:\n- raise MlflowException(f\"{path} does not exist in Databricks Unified Catalog.\")\n+ try:\n+ # Check if `self.path` points to a valid UC directory.\n+ w.files.get_directory_metadata(self.path)\n+ # Append a slash to `self.path` to indicate it's a directory.\n+ self.path += \"/\" if not self.path.endswith(\"/\") else \"\"\n+ except Exception:\n+ # Neither file nor directory exists, we throw an exception.\n+ raise MlflowException(f\"{self.path} does not exist in Databricks Unified Catalog.\")\n \n @staticmethod\n def _get_source_type() -> str:\n", "issue": "[BUG] uc_volume_dataset_source only validates file paths, not folder paths\n### Issues Policy acknowledgement\n\n- [X] I have read and agree to submit bug reports in accordance with the [issues policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md)\n\n### Where did you encounter this bug?\n\nLocal machine\n\n### Willingness to contribute\n\nYes. I would be willing to contribute a fix for this bug with guidance from the MLflow community.\n\n### MLflow version\n\nmlflow-2.12.2\n\n### System information\n\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**:\r\n- **Python version**:\r\n- **yarn version, if running the dev UI**:\r\n\n\n### Describe the problem\n\nhttps://github.com/mlflow/mlflow/blob/72df4a2a0f44c52179dfbdc7d47ad10f58ceec39/mlflow/data/uc_volume_dataset_source.py#L28 doesn't verify folder paths, only file paths\n\n### Tracking information\n\n<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->\r\n```shell\r\nREPLACE_ME\r\n```\r\n\n\n### Code to reproduce issue\n\n<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->\r\n```\r\nREPLACE_ME\r\n```\r\n\n\n### Stack trace\n\n<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->\r\n```\r\nREPLACE_ME\r\n```\r\n\n\n### Other info / logs\n\n<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->\r\n```\r\nREPLACE_ME\r\n```\r\n\n\n### What component(s) does this bug affect?\n\n- [ ] `area/artifacts`: Artifact stores and artifact logging\n- [ ] `area/build`: Build and test infrastructure for MLflow\n- [ ] `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations\n- [ ] `area/docs`: MLflow documentation pages\n- [ ] `area/examples`: Example code\n- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry\n- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors\n- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates\n- [ ] `area/projects`: MLproject format, project running backends\n- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs\n- [ ] `area/server-infra`: MLflow Tracking server backend\n- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging\n\n### What 
interface(s) does this bug affect?\n\n- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server\n- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models\n- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry\n- [ ] `area/windows`: Windows support\n\n### What language(s) does this bug affect?\n\n- [ ] `language/r`: R APIs and clients\n- [ ] `language/java`: Java APIs and clients\n- [ ] `language/new`: Proposals for new client languages\n\n### What integration(s) does this bug affect?\n\n- [ ] `integrations/azure`: Azure and Azure ML integrations\n- [ ] `integrations/sagemaker`: SageMaker integrations\n- [ ] `integrations/databricks`: Databricks integrations\n", "before_files": [{"content": "import logging\nfrom typing import Any, Dict\n\nfrom mlflow.data.dataset_source import DatasetSource\nfrom mlflow.exceptions import MlflowException\n\n_logger = logging.getLogger(__name__)\n\n\nclass UCVolumeDatasetSource(DatasetSource):\n \"\"\"Represents the source of a dataset stored in Databricks Unified Catalog Volume.\n\n If you are using a delta table, please use `mlflow.data.delta_dataset_source.DeltaDatasetSource`\n instead. This `UCVolumeDatasetSource` does not provide loading function, and is mostly useful\n when you are logging a `mlflow.data.meta_dataset.MetaDataset` to MLflow, i.e., you want\n to log the source of dataset to MLflow without loading the dataset.\n\n Args:\n path: the UC path of your data. It should be a valid UC path following the pattern\n \"/Volumes/{catalog}/{schema}/{volume}/{file_path}\". For example,\n \"/Volumes/MyCatalog/MySchema/MyVolume/MyFile.json\".\n \"\"\"\n\n def __init__(self, path: str):\n self._verify_uc_path_is_valid(path)\n self.path = path\n\n def _verify_uc_path_is_valid(self, path):\n \"\"\"Verify if the path exists in Databricks Unified Catalog.\"\"\"\n try:\n from databricks.sdk import WorkspaceClient\n\n w = WorkspaceClient()\n except ImportError:\n _logger.warning(\n \"Cannot verify the path of `UCVolumeDatasetSource` because of missing\"\n \"`databricks-sdk`. Please install `databricks-sdk` via \"\n \"`pip install -U databricks-sdk`. This does not block creating \"\n \"`UCVolumeDatasetSource`, but your `UCVolumeDatasetSource` might be invalid.\"\n )\n return\n except Exception:\n _logger.warning(\n \"Cannot verify the path of `UCVolumeDatasetSource` due to a connection failure \"\n \"with Databricks workspace. Please run `mlflow.login()` to log in to Databricks. 
\"\n \"This does not block creating `UCVolumeDatasetSource`, but your \"\n \"`UCVolumeDatasetSource` might be invalid.\"\n )\n return\n\n try:\n w.files.get_metadata(path)\n except Exception:\n raise MlflowException(f\"{path} does not exist in Databricks Unified Catalog.\")\n\n @staticmethod\n def _get_source_type() -> str:\n return \"uc_volume\"\n\n @staticmethod\n def _can_resolve(raw_source: Any):\n raise NotImplementedError\n\n @classmethod\n def _resolve(cls, raw_source: str):\n raise NotImplementedError\n\n def to_dict(self) -> Dict[Any, Any]:\n return {\"path\": self.path}\n\n @classmethod\n def from_dict(cls, source_dict: Dict[Any, Any]) -> \"UCVolumeDatasetSource\":\n return cls(**source_dict)\n", "path": "mlflow/data/uc_volume_dataset_source.py"}], "after_files": [{"content": "import logging\nfrom typing import Any, Dict\n\nfrom mlflow.data.dataset_source import DatasetSource\nfrom mlflow.exceptions import MlflowException\n\n_logger = logging.getLogger(__name__)\n\n\nclass UCVolumeDatasetSource(DatasetSource):\n \"\"\"Represents the source of a dataset stored in Databricks Unified Catalog Volume.\n\n If you are using a delta table, please use `mlflow.data.delta_dataset_source.DeltaDatasetSource`\n instead. This `UCVolumeDatasetSource` does not provide loading function, and is mostly useful\n when you are logging a `mlflow.data.meta_dataset.MetaDataset` to MLflow, i.e., you want\n to log the source of dataset to MLflow without loading the dataset.\n\n Args:\n path: the UC path of your data. It should be a valid UC path following the pattern\n \"/Volumes/{catalog}/{schema}/{volume}/{file_path}\". For example,\n \"/Volumes/MyCatalog/MySchema/MyVolume/MyFile.json\".\n \"\"\"\n\n def __init__(self, path: str):\n self.path = path\n self._verify_uc_path_is_valid()\n\n def _verify_uc_path_is_valid(self):\n \"\"\"Verify if the path exists in Databricks Unified Catalog.\"\"\"\n try:\n from databricks.sdk import WorkspaceClient\n\n w = WorkspaceClient()\n except ImportError:\n _logger.warning(\n \"Cannot verify the path of `UCVolumeDatasetSource` because of missing\"\n \"`databricks-sdk`. Please install `databricks-sdk` via \"\n \"`pip install -U databricks-sdk`. This does not block creating \"\n \"`UCVolumeDatasetSource`, but your `UCVolumeDatasetSource` might be invalid.\"\n )\n return\n except Exception:\n _logger.warning(\n \"Cannot verify the path of `UCVolumeDatasetSource` due to a connection failure \"\n \"with Databricks workspace. Please run `mlflow.login()` to log in to Databricks. 
\"\n \"This does not block creating `UCVolumeDatasetSource`, but your \"\n \"`UCVolumeDatasetSource` might be invalid.\"\n )\n return\n\n try:\n # Check if `self.path` points to a valid UC file.\n w.files.get_metadata(self.path)\n except Exception:\n try:\n # Check if `self.path` points to a valid UC directory.\n w.files.get_directory_metadata(self.path)\n # Append a slash to `self.path` to indicate it's a directory.\n self.path += \"/\" if not self.path.endswith(\"/\") else \"\"\n except Exception:\n # Neither file nor directory exists, we throw an exception.\n raise MlflowException(f\"{self.path} does not exist in Databricks Unified Catalog.\")\n\n @staticmethod\n def _get_source_type() -> str:\n return \"uc_volume\"\n\n @staticmethod\n def _can_resolve(raw_source: Any):\n raise NotImplementedError\n\n @classmethod\n def _resolve(cls, raw_source: str):\n raise NotImplementedError\n\n def to_dict(self) -> Dict[Any, Any]:\n return {\"path\": self.path}\n\n @classmethod\n def from_dict(cls, source_dict: Dict[Any, Any]) -> \"UCVolumeDatasetSource\":\n return cls(**source_dict)\n", "path": "mlflow/data/uc_volume_dataset_source.py"}]} | 1,752 | 349 |
gh_patches_debug_24966 | rasdani/github-patches | git_diff | chainer__chainer-2721 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
resuming issue of LinearShift
Same issue as #2680
```
import chainer
from chainer import iterators
from chainer import optimizers
from chainer import training
from chainer.training import extensions
from chainer import serializers
class DummyModel(chainer.Chain):
def __call__(self, x):
return x
def setup_trainer(iteration):
model = DummyModel()
optimizer = optimizers.SGD()
optimizer.setup(model)
iterator = iterators.SerialIterator([1, 2, 3], 1)
updater = training.StandardUpdater(iterator, optimizer)
trainer = training.Trainer(updater, (iteration, 'iteration'), out='.')
trainer.extend(extensions.LogReport(trigger=(1, 'iteration')))
trainer.extend(extensions.observe_lr(), trigger=(1, 'iteration'))
trainer.extend(
extensions.PrintReport(['iteration', 'lr']),
trigger=(1, 'iteration'))
trainer.extend(
extensions.LinearShift('lr', (2, 1), (5, 15)),
trigger=(1, 'iteration'))
return trainer
trainer = setup_trainer(10)
trainer.run()
serializers.save_npz('tmp', trainer)
# iteration lr
# 1 2
# 2 2
# 3 2
# 4 2
# 5 2
# 6 2
# 7 1.9
# 8 1.8
# 9 1.7
# 10 1.6
resumed_trainer = setup_trainer(20)
serializers.load_npz('tmp', resumed_trainer)
resumed_trainer.run()
# iteration lr
# 1 2
# 2 2
# 3 2
# 4 2
# 5 2
# 6 2
# 7 1.9
# 8 1.8
# 9 1.7
# 10 1.6
# 11 1.4 (lr = 1.5 is skipped)
# 12 1.3
# 13 1.2
# 14 1.1
# 15 1
# 16 1
# 17 1
# 18 1
# 19 1
# 20 1
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/training/extensions/linear_shift.py`
Content:
```
1 from __future__ import division
2
3 from chainer.training import extension
4
5
6 class LinearShift(extension.Extension):
7
8 """Trainer extension to change an optimizer attribute linearly.
9
10 This extension changes an optimizer attribute from the first value to the
11 last value linearly within a specified duration. The typical use case is
12 warming up of the momentum coefficient.
13
14 For example, suppose that this extension is called at every iteration, and
15 ``value_range == (x, y)`` and ``time_range == (i, j)``. Then, this
16 extension keeps the attribute to be ``x`` up to the ``i``-th iteration,
17 linearly shifts the value to ``y`` by the ``j``-th iteration, and then
18 keeps the value to be ``y`` after the ``j``-th iteration.
19
20 This extension is also called before the training loop starts by default.
21
22 Args:
23 attr (str): Name of the optimizer attribute to adjust.
24 value_range (tuple of float): The first and the last values of the
25 attribute.
26 time_range (tuple of ints): The first and last counts of calls in which
27 the attribute is adjusted.
28 optimizer (~chainer.Optimizer): Target optimizer object. If it is None,
29 the main optimizer of the trainer is used.
30
31 """
32 invoke_before_training = True
33
34 def __init__(self, attr, value_range, time_range, optimizer=None):
35 self._attr = attr
36 self._value_range = value_range
37 self._time_range = time_range
38 self._optimizer = optimizer
39 self._t = 0
40
41 def __call__(self, trainer):
42 optimizer = self._optimizer or trainer.updater.get_optimizer('main')
43 t1, t2 = self._time_range
44 v1, v2 = self._value_range
45
46 if self._t <= t1:
47 value = v1
48 elif self._t >= t2:
49 value = v2
50 else:
51 rate = (self._t - t1) / (t2 - t1)
52 value = v1 + rate * (v2 - v1)
53 setattr(optimizer, self._attr, value)
54
55 self._t += 1
56
57 def serialize(self, serializer):
58 self._t = serializer('_t', self._t)
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/chainer/training/extensions/linear_shift.py b/chainer/training/extensions/linear_shift.py
--- a/chainer/training/extensions/linear_shift.py
+++ b/chainer/training/extensions/linear_shift.py
@@ -36,23 +36,34 @@
self._value_range = value_range
self._time_range = time_range
self._optimizer = optimizer
- self._t = 0
+ self._t = 1
+ self._before_training = True
def __call__(self, trainer):
optimizer = self._optimizer or trainer.updater.get_optimizer('main')
+
+ if self._before_training:
+ self._before_training = False
+ value = self._compute_value(self._t - 1)
+ else:
+ value = self._compute_value(self._t)
+ self._t += 1
+
+ setattr(optimizer, self._attr, value)
+
+ def serialize(self, serializer):
+ self._t = serializer('_t', self._t)
+
+ def _compute_value(self, t):
t1, t2 = self._time_range
v1, v2 = self._value_range
- if self._t <= t1:
+ if t <= t1:
value = v1
- elif self._t >= t2:
+ elif t >= t2:
value = v2
else:
- rate = (self._t - t1) / (t2 - t1)
+ rate = (t - t1) / (t2 - t1)
value = v1 + rate * (v2 - v1)
- setattr(optimizer, self._attr, value)
- self._t += 1
-
- def serialize(self, serializer):
- self._t = serializer('_t', self._t)
+ return value
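The heart of the fix is counter bookkeeping: after the first 10 iterations the serialized `_t` is 11, and the old extension's extra pre-loop call on resume consumed that value (applying lr 1.4) before iteration 11 ever ran, which is why 1.5 never showed up in the resumed log. The fixed version re-applies the value for `_t - 1` on the pre-loop call without advancing the counter. A small sketch of the schedule math, assuming the issue's numbers (`value_range=(2, 1)`, `time_range=(5, 15)`); this is not Chainer source:

```python
def compute_value(t, t1=5, t2=15, v1=2.0, v2=1.0):
    # Same linear interpolation as LinearShift._compute_value in the patch.
    if t <= t1:
        return v1
    if t >= t2:
        return v2
    return v1 + (t - t1) / (t2 - t1) * (v2 - v1)


# Serialized counter after the first 10 iterations is _t == 11.
print(compute_value(10))  # 1.5 -> re-applied by the fixed pre-loop call using _t - 1
print(compute_value(11))  # 1.4 -> what the old pre-loop call applied, so 1.5 was skipped
```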
| {"golden_diff": "diff --git a/chainer/training/extensions/linear_shift.py b/chainer/training/extensions/linear_shift.py\n--- a/chainer/training/extensions/linear_shift.py\n+++ b/chainer/training/extensions/linear_shift.py\n@@ -36,23 +36,34 @@\n self._value_range = value_range\n self._time_range = time_range\n self._optimizer = optimizer\n- self._t = 0\n+ self._t = 1\n+ self._before_training = True\n \n def __call__(self, trainer):\n optimizer = self._optimizer or trainer.updater.get_optimizer('main')\n+\n+ if self._before_training:\n+ self._before_training = False\n+ value = self._compute_value(self._t - 1)\n+ else:\n+ value = self._compute_value(self._t)\n+ self._t += 1\n+\n+ setattr(optimizer, self._attr, value)\n+\n+ def serialize(self, serializer):\n+ self._t = serializer('_t', self._t)\n+\n+ def _compute_value(self, t):\n t1, t2 = self._time_range\n v1, v2 = self._value_range\n \n- if self._t <= t1:\n+ if t <= t1:\n value = v1\n- elif self._t >= t2:\n+ elif t >= t2:\n value = v2\n else:\n- rate = (self._t - t1) / (t2 - t1)\n+ rate = (t - t1) / (t2 - t1)\n value = v1 + rate * (v2 - v1)\n- setattr(optimizer, self._attr, value)\n \n- self._t += 1\n-\n- def serialize(self, serializer):\n- self._t = serializer('_t', self._t)\n+ return value\n", "issue": "resuming issue of LinearShift\nSame issue as #2680\r\n\r\n```\r\nimport chainer\r\nfrom chainer import iterators\r\nfrom chainer import optimizers\r\nfrom chainer import training\r\nfrom chainer.training import extensions\r\nfrom chainer import serializers\r\n\r\n\r\nclass DummyModel(chainer.Chain):\r\n\r\n def __call__(self, x):\r\n return x\r\n\r\n\r\ndef setup_trainer(iteration):\r\n model = DummyModel()\r\n optimizer = optimizers.SGD()\r\n optimizer.setup(model)\r\n\r\n iterator = iterators.SerialIterator([1, 2, 3], 1)\r\n\r\n updater = training.StandardUpdater(iterator, optimizer)\r\n trainer = training.Trainer(updater, (iteration, 'iteration'), out='.')\r\n\r\n trainer.extend(extensions.LogReport(trigger=(1, 'iteration')))\r\n trainer.extend(extensions.observe_lr(), trigger=(1, 'iteration'))\r\n trainer.extend(\r\n extensions.PrintReport(['iteration', 'lr']),\r\n trigger=(1, 'iteration'))\r\n\r\n trainer.extend(\r\n extensions.LinearShift('lr', (2, 1), (5, 15)),\r\n trigger=(1, 'iteration'))\r\n\r\n return trainer\r\n\r\n\r\ntrainer = setup_trainer(10)\r\ntrainer.run()\r\nserializers.save_npz('tmp', trainer)\r\n# iteration lr\r\n# 1 2\r\n# 2 2\r\n# 3 2\r\n# 4 2\r\n# 5 2\r\n# 6 2\r\n# 7 1.9\r\n# 8 1.8\r\n# 9 1.7\r\n# 10 1.6\r\n\r\nresumed_trainer = setup_trainer(20)\r\nserializers.load_npz('tmp', resumed_trainer)\r\nresumed_trainer.run()\r\n# iteration lr\r\n# 1 2\r\n# 2 2\r\n# 3 2\r\n# 4 2\r\n# 5 2\r\n# 6 2\r\n# 7 1.9\r\n# 8 1.8\r\n# 9 1.7\r\n# 10 1.6\r\n# 11 1.4 (lr = 1.5 is skipped)\r\n# 12 1.3\r\n# 13 1.2\r\n# 14 1.1\r\n# 15 1\r\n# 16 1\r\n# 17 1\r\n# 18 1\r\n# 19 1\r\n# 20 1\r\n```\n", "before_files": [{"content": "from __future__ import division\n\nfrom chainer.training import extension\n\n\nclass LinearShift(extension.Extension):\n\n \"\"\"Trainer extension to change an optimizer attribute linearly.\n\n This extension changes an optimizer attribute from the first value to the\n last value linearly within a specified duration. The typical use case is\n warming up of the momentum coefficient.\n\n For example, suppose that this extension is called at every iteration, and\n ``value_range == (x, y)`` and ``time_range == (i, j)``. 
Then, this\n extension keeps the attribute to be ``x`` up to the ``i``-th iteration,\n linearly shifts the value to ``y`` by the ``j``-th iteration, and then\n keeps the value to be ``y`` after the ``j``-th iteration.\n\n This extension is also called before the training loop starts by default.\n\n Args:\n attr (str): Name of the optimizer attribute to adjust.\n value_range (tuple of float): The first and the last values of the\n attribute.\n time_range (tuple of ints): The first and last counts of calls in which\n the attribute is adjusted.\n optimizer (~chainer.Optimizer): Target optimizer object. If it is None,\n the main optimizer of the trainer is used.\n\n \"\"\"\n invoke_before_training = True\n\n def __init__(self, attr, value_range, time_range, optimizer=None):\n self._attr = attr\n self._value_range = value_range\n self._time_range = time_range\n self._optimizer = optimizer\n self._t = 0\n\n def __call__(self, trainer):\n optimizer = self._optimizer or trainer.updater.get_optimizer('main')\n t1, t2 = self._time_range\n v1, v2 = self._value_range\n\n if self._t <= t1:\n value = v1\n elif self._t >= t2:\n value = v2\n else:\n rate = (self._t - t1) / (t2 - t1)\n value = v1 + rate * (v2 - v1)\n setattr(optimizer, self._attr, value)\n\n self._t += 1\n\n def serialize(self, serializer):\n self._t = serializer('_t', self._t)\n", "path": "chainer/training/extensions/linear_shift.py"}], "after_files": [{"content": "from __future__ import division\n\nfrom chainer.training import extension\n\n\nclass LinearShift(extension.Extension):\n\n \"\"\"Trainer extension to change an optimizer attribute linearly.\n\n This extension changes an optimizer attribute from the first value to the\n last value linearly within a specified duration. The typical use case is\n warming up of the momentum coefficient.\n\n For example, suppose that this extension is called at every iteration, and\n ``value_range == (x, y)`` and ``time_range == (i, j)``. Then, this\n extension keeps the attribute to be ``x`` up to the ``i``-th iteration,\n linearly shifts the value to ``y`` by the ``j``-th iteration, and then\n keeps the value to be ``y`` after the ``j``-th iteration.\n\n This extension is also called before the training loop starts by default.\n\n Args:\n attr (str): Name of the optimizer attribute to adjust.\n value_range (tuple of float): The first and the last values of the\n attribute.\n time_range (tuple of ints): The first and last counts of calls in which\n the attribute is adjusted.\n optimizer (~chainer.Optimizer): Target optimizer object. 
If it is None,\n the main optimizer of the trainer is used.\n\n \"\"\"\n invoke_before_training = True\n\n def __init__(self, attr, value_range, time_range, optimizer=None):\n self._attr = attr\n self._value_range = value_range\n self._time_range = time_range\n self._optimizer = optimizer\n self._t = 1\n self._before_training = True\n\n def __call__(self, trainer):\n optimizer = self._optimizer or trainer.updater.get_optimizer('main')\n\n if self._before_training:\n self._before_training = False\n value = self._compute_value(self._t - 1)\n else:\n value = self._compute_value(self._t)\n self._t += 1\n\n setattr(optimizer, self._attr, value)\n\n def serialize(self, serializer):\n self._t = serializer('_t', self._t)\n\n def _compute_value(self, t):\n t1, t2 = self._time_range\n v1, v2 = self._value_range\n\n if t <= t1:\n value = v1\n elif t >= t2:\n value = v2\n else:\n rate = (t - t1) / (t2 - t1)\n value = v1 + rate * (v2 - v1)\n\n return value\n", "path": "chainer/training/extensions/linear_shift.py"}]} | 1,438 | 419 |
gh_patches_debug_30842 | rasdani/github-patches | git_diff | jupyterhub__zero-to-jupyterhub-k8s-416 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Block access to cloud metadata endpoint by default
Currently, we expect users to do extra steps to secure their clusters from users accessing the cloud metadata endpoints (https://zero-to-jupyterhub.readthedocs.io/en/v0.5-doc/security.html#audit-cloud-metadata-server-security). IMO, we should instead do that by default and allow users to opt out of it. Most users won't actually be doing the blocking right now, and run insecure clusters...
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `images/hub/jupyterhub_config.py`
Content:
```
1 import os
2 import glob
3 from tornado.httpclient import AsyncHTTPClient
4
5 from z2jh import get_config, get_secret
6
7 # Configure JupyterHub to use the curl backend for making HTTP requests,
8 # rather than the pure-python implementations. The default one starts
9 # being too slow to make a large number of requests to the proxy API
10 # at the rate required.
11 AsyncHTTPClient.configure("tornado.curl_httpclient.CurlAsyncHTTPClient")
12
13 c.JupyterHub.spawner_class = 'kubespawner.KubeSpawner'
14
15 # Connect to a proxy running in a different pod
16 c.ConfigurableHTTPProxy.api_url = 'http://{}:{}'.format(os.environ['PROXY_API_SERVICE_HOST'], int(os.environ['PROXY_API_SERVICE_PORT']))
17 c.ConfigurableHTTPProxy.should_start = False
18
19 # Do not shut down user pods when hub is restarted
20 c.JupyterHub.cleanup_servers = False
21
22 # Check that the proxy has routes appropriately setup
23 # This isn't the best named setting :D
24 c.JupyterHub.last_activity_interval = 60
25
26 # Max number of servers that can be spawning at any one time
27 c.JupyterHub.concurrent_spawn_limit = get_config('hub.concurrent-spawn-limit')
28
29 active_server_limit = get_config('hub.active-server-limit', None)
30
31 if active_server_limit is not None:
32 c.JupyterHub.active_server_limit = int(active_server_limit)
33
34 c.JupyterHub.ip = os.environ['PROXY_PUBLIC_SERVICE_HOST']
35 c.JupyterHub.port = int(os.environ['PROXY_PUBLIC_SERVICE_PORT'])
36
37 # the hub should listen on all interfaces, so the proxy can access it
38 c.JupyterHub.hub_ip = '0.0.0.0'
39
40 c.KubeSpawner.namespace = os.environ.get('POD_NAMESPACE', 'default')
41
42 c.KubeSpawner.start_timeout = get_config('singleuser.start-timeout')
43
44 # Use env var for this, since we want hub to restart when this changes
45 c.KubeSpawner.singleuser_image_spec = os.environ['SINGLEUSER_IMAGE']
46
47 c.KubeSpawner.singleuser_extra_labels = get_config('singleuser.extra-labels', {})
48
49 c.KubeSpawner.singleuser_uid = get_config('singleuser.uid')
50 c.KubeSpawner.singleuser_fs_gid = get_config('singleuser.fs-gid')
51
52 service_account_name = get_config('singleuser.service-account-name', None)
53 if service_account_name:
54 c.KubeSpawner.singleuser_service_account = service_account_name
55
56 c.KubeSpawner.singleuser_node_selector = get_config('singleuser.node-selector')
57 # Configure dynamically provisioning pvc
58 storage_type = get_config('singleuser.storage.type')
59 if storage_type == 'dynamic':
60 c.KubeSpawner.pvc_name_template = 'claim-{username}{servername}'
61 c.KubeSpawner.user_storage_pvc_ensure = True
62 storage_class = get_config('singleuser.storage.dynamic.storage-class', None)
63 if storage_class:
64 c.KubeSpawner.user_storage_class = storage_class
65 c.KubeSpawner.user_storage_access_modes = ['ReadWriteOnce']
66 c.KubeSpawner.user_storage_capacity = get_config('singleuser.storage.capacity')
67
68 # Add volumes to singleuser pods
69 c.KubeSpawner.volumes = [
70 {
71 'name': 'volume-{username}{servername}',
72 'persistentVolumeClaim': {
73 'claimName': 'claim-{username}{servername}'
74 }
75 }
76 ]
77 c.KubeSpawner.volume_mounts = [
78 {
79 'mountPath': get_config('singleuser.storage.home_mount_path'),
80 'name': 'volume-{username}{servername}'
81 }
82 ]
83 elif storage_type == 'static':
84 pvc_claim_name = get_config('singleuser.storage.static.pvc-name')
85 c.KubeSpawner.volumes = [{
86 'name': 'home',
87 'persistentVolumeClaim': {
88 'claimName': pvc_claim_name
89 }
90 }]
91
92 c.KubeSpawner.volume_mounts = [{
93 'mountPath': get_config('singleuser.storage.home_mount_path'),
94 'name': 'home',
95 'subPath': get_config('singleuser.storage.static.sub-path')
96 }]
97
98 c.KubeSpawner.volumes.extend(get_config('singleuser.storage.extra-volumes', []))
99 c.KubeSpawner.volume_mounts.extend(get_config('singleuser.storage.extra-volume-mounts', []))
100
101 lifecycle_hooks = get_config('singleuser.lifecycle-hooks')
102 if lifecycle_hooks:
103 c.KubeSpawner.singleuser_lifecycle_hooks = lifecycle_hooks
104
105 init_containers = get_config('singleuser.init-containers')
106 if init_containers:
107 c.KubeSpawner.singleuser_init_containers = init_containers
108
109 # Gives spawned containers access to the API of the hub
110 c.KubeSpawner.hub_connect_ip = os.environ['HUB_SERVICE_HOST']
111 c.KubeSpawner.hub_connect_port = int(os.environ['HUB_SERVICE_PORT'])
112
113 c.JupyterHub.hub_connect_ip = os.environ['HUB_SERVICE_HOST']
114 c.JupyterHub.hub_connect_port = int(os.environ['HUB_SERVICE_PORT'])
115
116 c.KubeSpawner.mem_limit = get_config('singleuser.memory.limit')
117 c.KubeSpawner.mem_guarantee = get_config('singleuser.memory.guarantee')
118 c.KubeSpawner.cpu_limit = get_config('singleuser.cpu.limit')
119 c.KubeSpawner.cpu_guarantee = get_config('singleuser.cpu.guarantee')
120
121 # Allow switching authenticators easily
122 auth_type = get_config('auth.type')
123 email_domain = 'local'
124
125 if auth_type == 'google':
126 c.JupyterHub.authenticator_class = 'oauthenticator.GoogleOAuthenticator'
127 c.GoogleOAuthenticator.client_id = get_config('auth.google.client-id')
128 c.GoogleOAuthenticator.client_secret = get_config('auth.google.client-secret')
129 c.GoogleOAuthenticator.oauth_callback_url = get_config('auth.google.callback-url')
130 c.GoogleOAuthenticator.hosted_domain = get_config('auth.google.hosted-domain')
131 c.GoogleOAuthenticator.login_service = get_config('auth.google.login-service')
132 email_domain = get_config('auth.google.hosted-domain')
133 elif auth_type == 'github':
134 c.JupyterHub.authenticator_class = 'oauthenticator.GitHubOAuthenticator'
135 c.GitHubOAuthenticator.oauth_callback_url = get_config('auth.github.callback-url')
136 c.GitHubOAuthenticator.client_id = get_config('auth.github.client-id')
137 c.GitHubOAuthenticator.client_secret = get_config('auth.github.client-secret')
138 elif auth_type == 'cilogon':
139 c.JupyterHub.authenticator_class = 'oauthenticator.CILogonOAuthenticator'
140 c.CILogonOAuthenticator.oauth_callback_url = get_config('auth.cilogon.callback-url')
141 c.CILogonOAuthenticator.client_id = get_config('auth.cilogon.client-id')
142 c.CILogonOAuthenticator.client_secret = get_config('auth.cilogon.client-secret')
143 elif auth_type == 'gitlab':
144 c.JupyterHub.authenticator_class = 'oauthenticator.gitlab.GitLabOAuthenticator'
145 c.GitLabOAuthenticator.oauth_callback_url = get_config('auth.gitlab.callback-url')
146 c.GitLabOAuthenticator.client_id = get_config('auth.gitlab.client-id')
147 c.GitLabOAuthenticator.client_secret = get_config('auth.gitlab.client-secret')
148 elif auth_type == 'mediawiki':
149 c.JupyterHub.authenticator_class = 'oauthenticator.mediawiki.MWOAuthenticator'
150 c.MWOAuthenticator.client_id = get_config('auth.mediawiki.client-id')
151 c.MWOAuthenticator.client_secret = get_config('auth.mediawiki.client-secret')
152 c.MWOAuthenticator.index_url = get_config('auth.mediawiki.index-url')
153 elif auth_type == 'globus':
154 c.JupyterHub.authenticator_class = 'oauthenticator.globus.GlobusOAuthenticator'
155 c.GlobusOAuthenticator.oauth_callback_url = get_config('auth.globus.callback-url')
156 c.GlobusOAuthenticator.client_id = get_config('auth.globus.client-id')
157 c.GlobusOAuthenticator.client_secret = get_config('auth.globus.client-secret')
158 c.GlobusOAuthenticator.identity_provider = get_config('auth.globus.identity-provider', '')
159 elif auth_type == 'hmac':
160 c.JupyterHub.authenticator_class = 'hmacauthenticator.HMACAuthenticator'
161 c.HMACAuthenticator.secret_key = bytes.fromhex(get_config('auth.hmac.secret-key'))
162 elif auth_type == 'dummy':
163 c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'
164 c.DummyAuthenticator.password = get_config('auth.dummy.password', None)
165 elif auth_type == 'tmp':
166 c.JupyterHub.authenticator_class = 'tmpauthenticator.TmpAuthenticator'
167 elif auth_type == 'lti':
168 c.JupyterHub.authenticator_class = 'ltiauthenticator.LTIAuthenticator'
169 c.LTIAuthenticator.consumers = get_config('auth.lti.consumers')
170 elif auth_type == 'custom':
171 # full_class_name looks like "myauthenticator.MyAuthenticator".
172 # To create a docker image with this class available, you can just have the
173 # following Dockerfile:
174 # FROM jupyterhub/k8s-hub:v0.4
175 # RUN pip3 install myauthenticator
176 full_class_name = get_config('auth.custom.class-name')
177 c.JupyterHub.authenticator_class = full_class_name
178 auth_class_name = full_class_name.rsplit('.', 1)[-1]
179 auth_config = c[auth_class_name]
180 auth_config.update(get_config('auth.custom.config') or {})
181 else:
182 raise ValueError("Unhandled auth type: %r" % auth_type)
183
184 c.Authenticator.enable_auth_state = get_config('auth.state.enabled', False)
185
186 def generate_user_email(spawner):
187 """
188 Used as the EMAIL environment variable
189 """
190 return '{username}@{domain}'.format(
191 username=spawner.user.name, domain=email_domain
192 )
193
194 def generate_user_name(spawner):
195 """
196 Used as GIT_AUTHOR_NAME and GIT_COMMITTER_NAME environment variables
197 """
198 return spawner.user.name
199
200 c.KubeSpawner.environment = {
201 'EMAIL': generate_user_email,
202 # git requires these committer attributes
203 'GIT_AUTHOR_NAME': generate_user_name,
204 'GIT_COMMITTER_NAME': generate_user_name
205 }
206
207 c.KubeSpawner.environment.update(get_config('singleuser.extra-env', {}))
208
209 # Enable admins to access user servers
210 c.JupyterHub.admin_access = get_config('auth.admin.access')
211 c.Authenticator.admin_users = get_config('auth.admin.users', [])
212 c.Authenticator.whitelist = get_config('auth.whitelist.users', [])
213
214 c.JupyterHub.base_url = get_config('hub.base_url')
215
216 c.JupyterHub.services = []
217
218 if get_config('cull.enabled', False):
219 cull_timeout = get_config('cull.timeout')
220 cull_every = get_config('cull.every')
221 cull_cmd = [
222 '/usr/local/bin/cull_idle_servers.py',
223 '--timeout=%s' % cull_timeout,
224 '--cull-every=%s' % cull_every,
225 '--url=http://127.0.0.1:8081' + c.JupyterHub.base_url + 'hub/api'
226 ]
227 if get_config('cull.users'):
228 cull_cmd.append('--cull-users')
229 c.JupyterHub.services.append({
230 'name': 'cull-idle',
231 'admin': True,
232 'command': cull_cmd,
233 })
234
235 for name, service in get_config('hub.services', {}).items():
236 api_token = get_secret('services.token.%s' % name)
237 # jupyterhub.services is a list of dicts, but
238 # in the helm chart it is a dict of dicts for easier merged-config
239 service.setdefault('name', name)
240 if api_token:
241 service['api_token'] = api_token
242 c.JupyterHub.services.append(service)
243
244
245 c.JupyterHub.db_url = get_config('hub.db_url')
246
247 cmd = get_config('singleuser.cmd', None)
248 if cmd:
249 c.Spawner.cmd = cmd
250
251 default_url = get_config('singleuser.default-url', None)
252 if default_url:
253 c.Spawner.default_url = default_url
254
255 scheduler_strategy = get_config('singleuser.scheduler-strategy', 'spread')
256
257 if scheduler_strategy == 'pack':
258 # FIXME: Support setting affinity directly in KubeSpawner
259 c.KubeSpawner.singleuser_extra_pod_config = {
260 'affinity': {
261 'podAffinity': {
262 'preferredDuringSchedulingIgnoredDuringExecution': [{
263 'weight': 100,
264 'podAffinityTerm': {
265 'labelSelector': {
266 'matchExpressions': [{
267 'key': 'component',
268 'operator': 'In',
269 'values': ['singleuser-server']
270 }]
271 },
272 'topologyKey': 'kubernetes.io/hostname'
273 }
274 }],
275 }
276 }
277 }
278 else:
279 # Set default to {} so subconfigs can easily update it
280 c.KubeSpawner.singleuser_extra_pod_config = {}
281
282 extra_configs = sorted(glob.glob('/etc/jupyterhub/config/hub.extra-config.*.py'))
283 for ec in extra_configs:
284 load_subconfig(ec)
285
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/images/hub/jupyterhub_config.py b/images/hub/jupyterhub_config.py
--- a/images/hub/jupyterhub_config.py
+++ b/images/hub/jupyterhub_config.py
@@ -1,6 +1,7 @@
import os
import glob
from tornado.httpclient import AsyncHTTPClient
+from kubernetes import client
from z2jh import get_config, get_secret
@@ -104,7 +105,7 @@
init_containers = get_config('singleuser.init-containers')
if init_containers:
- c.KubeSpawner.singleuser_init_containers = init_containers
+ c.KubeSpawner.singleuser_init_containers.extend(init_containers)
# Gives spawned containers access to the API of the hub
c.KubeSpawner.hub_connect_ip = os.environ['HUB_SERVICE_HOST']
@@ -252,6 +253,30 @@
if default_url:
c.Spawner.default_url = default_url
+cloud_metadata = get_config('singleuser.cloud-metadata', {})
+
+if not cloud_metadata.get('enabled', False):
+ # Use iptables to block access to cloud metadata by default
+ network_tools_image_name = get_config('singleuser.network-tools.image.name')
+ network_tools_image_tag = get_config('singleuser.network-tools.image.tag')
+ ip_block_container = client.V1Container(
+ name="block-cloud-metadata",
+ image=f"{network_tools_image_name}:{network_tools_image_tag}",
+ command=[
+ 'iptables',
+ '-A', 'OUTPUT',
+ '-d', cloud_metadata.get('ip', '169.254.169.254'),
+ '-j', 'DROP'
+ ],
+ security_context=client.V1SecurityContext(
+ privileged=True,
+ run_as_user=0,
+ capabilities=client.V1Capabilities(add=['NET_ADMIN'])
+ )
+ )
+
+ c.KubeSpawner.singleuser_init_containers.append(ip_block_container)
+
scheduler_strategy = get_config('singleuser.scheduler-strategy', 'spread')
if scheduler_strategy == 'pack':
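With this change the spawner appends, by default, a privileged init container that installs an iptables DROP rule for the metadata IP (169.254.169.254 unless overridden) before the user server starts; deployments can opt out via `singleuser.cloud-metadata.enabled`. A standalone sketch of roughly the same container object, built with the `kubernetes` Python client that the patch imports (the image reference below is a placeholder for the chart's `singleuser.network-tools.image.*` values):

```python
from kubernetes import client

# Placeholder image; the chart resolves it from singleuser.network-tools.image.name/tag.
block_metadata = client.V1Container(
    name="block-cloud-metadata",
    image="network-tools:placeholder",
    command=["iptables", "-A", "OUTPUT", "-d", "169.254.169.254", "-j", "DROP"],
    security_context=client.V1SecurityContext(
        privileged=True,
        run_as_user=0,
        capabilities=client.V1Capabilities(add=["NET_ADMIN"]),
    ),
)

# Inspect the spec the spawner would append to singleuser_init_containers.
print(client.ApiClient().sanitize_for_serialization(block_metadata))
```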
| {"golden_diff": "diff --git a/images/hub/jupyterhub_config.py b/images/hub/jupyterhub_config.py\n--- a/images/hub/jupyterhub_config.py\n+++ b/images/hub/jupyterhub_config.py\n@@ -1,6 +1,7 @@\n import os\n import glob\n from tornado.httpclient import AsyncHTTPClient\n+from kubernetes import client\n \n from z2jh import get_config, get_secret\n \n@@ -104,7 +105,7 @@\n \n init_containers = get_config('singleuser.init-containers')\n if init_containers:\n- c.KubeSpawner.singleuser_init_containers = init_containers\n+ c.KubeSpawner.singleuser_init_containers.extend(init_containers)\n \n # Gives spawned containers access to the API of the hub\n c.KubeSpawner.hub_connect_ip = os.environ['HUB_SERVICE_HOST']\n@@ -252,6 +253,30 @@\n if default_url:\n c.Spawner.default_url = default_url\n \n+cloud_metadata = get_config('singleuser.cloud-metadata', {})\n+\n+if not cloud_metadata.get('enabled', False):\n+ # Use iptables to block access to cloud metadata by default\n+ network_tools_image_name = get_config('singleuser.network-tools.image.name')\n+ network_tools_image_tag = get_config('singleuser.network-tools.image.tag')\n+ ip_block_container = client.V1Container(\n+ name=\"block-cloud-metadata\",\n+ image=f\"{network_tools_image_name}:{network_tools_image_tag}\",\n+ command=[\n+ 'iptables',\n+ '-A', 'OUTPUT',\n+ '-d', cloud_metadata.get('ip', '169.254.169.254'),\n+ '-j', 'DROP'\n+ ],\n+ security_context=client.V1SecurityContext(\n+ privileged=True,\n+ run_as_user=0,\n+ capabilities=client.V1Capabilities(add=['NET_ADMIN'])\n+ )\n+ )\n+\n+ c.KubeSpawner.singleuser_init_containers.append(ip_block_container)\n+\n scheduler_strategy = get_config('singleuser.scheduler-strategy', 'spread')\n \n if scheduler_strategy == 'pack':\n", "issue": "Block access to cloud metadata endpoint by default\nCurrently, we expect users to do extra steps to secure their clusters from users accessing the cloud metadata endpoints (https://zero-to-jupyterhub.readthedocs.io/en/v0.5-doc/security.html#audit-cloud-metadata-server-security). IMO, we should instead do that by default and allow users to opt out of it. Most users won't actually be doing the blocking right now, and run insecure clusters...\n", "before_files": [{"content": "import os\nimport glob\nfrom tornado.httpclient import AsyncHTTPClient\n\nfrom z2jh import get_config, get_secret\n\n# Configure JupyterHub to use the curl backend for making HTTP requests,\n# rather than the pure-python implementations. 
The default one starts\n# being too slow to make a large number of requests to the proxy API\n# at the rate required.\nAsyncHTTPClient.configure(\"tornado.curl_httpclient.CurlAsyncHTTPClient\")\n\nc.JupyterHub.spawner_class = 'kubespawner.KubeSpawner'\n\n# Connect to a proxy running in a different pod\nc.ConfigurableHTTPProxy.api_url = 'http://{}:{}'.format(os.environ['PROXY_API_SERVICE_HOST'], int(os.environ['PROXY_API_SERVICE_PORT']))\nc.ConfigurableHTTPProxy.should_start = False\n\n# Do not shut down user pods when hub is restarted\nc.JupyterHub.cleanup_servers = False\n\n# Check that the proxy has routes appropriately setup\n# This isn't the best named setting :D\nc.JupyterHub.last_activity_interval = 60\n\n# Max number of servers that can be spawning at any one time\nc.JupyterHub.concurrent_spawn_limit = get_config('hub.concurrent-spawn-limit')\n\nactive_server_limit = get_config('hub.active-server-limit', None)\n\nif active_server_limit is not None:\n c.JupyterHub.active_server_limit = int(active_server_limit)\n\nc.JupyterHub.ip = os.environ['PROXY_PUBLIC_SERVICE_HOST']\nc.JupyterHub.port = int(os.environ['PROXY_PUBLIC_SERVICE_PORT'])\n\n# the hub should listen on all interfaces, so the proxy can access it\nc.JupyterHub.hub_ip = '0.0.0.0'\n\nc.KubeSpawner.namespace = os.environ.get('POD_NAMESPACE', 'default')\n\nc.KubeSpawner.start_timeout = get_config('singleuser.start-timeout')\n\n# Use env var for this, since we want hub to restart when this changes\nc.KubeSpawner.singleuser_image_spec = os.environ['SINGLEUSER_IMAGE']\n\nc.KubeSpawner.singleuser_extra_labels = get_config('singleuser.extra-labels', {})\n\nc.KubeSpawner.singleuser_uid = get_config('singleuser.uid')\nc.KubeSpawner.singleuser_fs_gid = get_config('singleuser.fs-gid')\n\nservice_account_name = get_config('singleuser.service-account-name', None)\nif service_account_name:\n c.KubeSpawner.singleuser_service_account = service_account_name\n\nc.KubeSpawner.singleuser_node_selector = get_config('singleuser.node-selector')\n# Configure dynamically provisioning pvc\nstorage_type = get_config('singleuser.storage.type')\nif storage_type == 'dynamic':\n c.KubeSpawner.pvc_name_template = 'claim-{username}{servername}'\n c.KubeSpawner.user_storage_pvc_ensure = True\n storage_class = get_config('singleuser.storage.dynamic.storage-class', None)\n if storage_class:\n c.KubeSpawner.user_storage_class = storage_class\n c.KubeSpawner.user_storage_access_modes = ['ReadWriteOnce']\n c.KubeSpawner.user_storage_capacity = get_config('singleuser.storage.capacity')\n\n # Add volumes to singleuser pods\n c.KubeSpawner.volumes = [\n {\n 'name': 'volume-{username}{servername}',\n 'persistentVolumeClaim': {\n 'claimName': 'claim-{username}{servername}'\n }\n }\n ]\n c.KubeSpawner.volume_mounts = [\n {\n 'mountPath': get_config('singleuser.storage.home_mount_path'),\n 'name': 'volume-{username}{servername}'\n }\n ]\nelif storage_type == 'static':\n pvc_claim_name = get_config('singleuser.storage.static.pvc-name')\n c.KubeSpawner.volumes = [{\n 'name': 'home',\n 'persistentVolumeClaim': {\n 'claimName': pvc_claim_name\n }\n }]\n\n c.KubeSpawner.volume_mounts = [{\n 'mountPath': get_config('singleuser.storage.home_mount_path'),\n 'name': 'home',\n 'subPath': get_config('singleuser.storage.static.sub-path')\n }]\n\nc.KubeSpawner.volumes.extend(get_config('singleuser.storage.extra-volumes', []))\nc.KubeSpawner.volume_mounts.extend(get_config('singleuser.storage.extra-volume-mounts', []))\n\nlifecycle_hooks = get_config('singleuser.lifecycle-hooks')\nif 
lifecycle_hooks:\n c.KubeSpawner.singleuser_lifecycle_hooks = lifecycle_hooks\n\ninit_containers = get_config('singleuser.init-containers')\nif init_containers:\n c.KubeSpawner.singleuser_init_containers = init_containers\n\n# Gives spawned containers access to the API of the hub\nc.KubeSpawner.hub_connect_ip = os.environ['HUB_SERVICE_HOST']\nc.KubeSpawner.hub_connect_port = int(os.environ['HUB_SERVICE_PORT'])\n\nc.JupyterHub.hub_connect_ip = os.environ['HUB_SERVICE_HOST']\nc.JupyterHub.hub_connect_port = int(os.environ['HUB_SERVICE_PORT'])\n\nc.KubeSpawner.mem_limit = get_config('singleuser.memory.limit')\nc.KubeSpawner.mem_guarantee = get_config('singleuser.memory.guarantee')\nc.KubeSpawner.cpu_limit = get_config('singleuser.cpu.limit')\nc.KubeSpawner.cpu_guarantee = get_config('singleuser.cpu.guarantee')\n\n# Allow switching authenticators easily\nauth_type = get_config('auth.type')\nemail_domain = 'local'\n\nif auth_type == 'google':\n c.JupyterHub.authenticator_class = 'oauthenticator.GoogleOAuthenticator'\n c.GoogleOAuthenticator.client_id = get_config('auth.google.client-id')\n c.GoogleOAuthenticator.client_secret = get_config('auth.google.client-secret')\n c.GoogleOAuthenticator.oauth_callback_url = get_config('auth.google.callback-url')\n c.GoogleOAuthenticator.hosted_domain = get_config('auth.google.hosted-domain')\n c.GoogleOAuthenticator.login_service = get_config('auth.google.login-service')\n email_domain = get_config('auth.google.hosted-domain')\nelif auth_type == 'github':\n c.JupyterHub.authenticator_class = 'oauthenticator.GitHubOAuthenticator'\n c.GitHubOAuthenticator.oauth_callback_url = get_config('auth.github.callback-url')\n c.GitHubOAuthenticator.client_id = get_config('auth.github.client-id')\n c.GitHubOAuthenticator.client_secret = get_config('auth.github.client-secret')\nelif auth_type == 'cilogon':\n c.JupyterHub.authenticator_class = 'oauthenticator.CILogonOAuthenticator'\n c.CILogonOAuthenticator.oauth_callback_url = get_config('auth.cilogon.callback-url')\n c.CILogonOAuthenticator.client_id = get_config('auth.cilogon.client-id')\n c.CILogonOAuthenticator.client_secret = get_config('auth.cilogon.client-secret')\nelif auth_type == 'gitlab':\n c.JupyterHub.authenticator_class = 'oauthenticator.gitlab.GitLabOAuthenticator'\n c.GitLabOAuthenticator.oauth_callback_url = get_config('auth.gitlab.callback-url')\n c.GitLabOAuthenticator.client_id = get_config('auth.gitlab.client-id')\n c.GitLabOAuthenticator.client_secret = get_config('auth.gitlab.client-secret')\nelif auth_type == 'mediawiki':\n c.JupyterHub.authenticator_class = 'oauthenticator.mediawiki.MWOAuthenticator'\n c.MWOAuthenticator.client_id = get_config('auth.mediawiki.client-id')\n c.MWOAuthenticator.client_secret = get_config('auth.mediawiki.client-secret')\n c.MWOAuthenticator.index_url = get_config('auth.mediawiki.index-url')\nelif auth_type == 'globus':\n c.JupyterHub.authenticator_class = 'oauthenticator.globus.GlobusOAuthenticator'\n c.GlobusOAuthenticator.oauth_callback_url = get_config('auth.globus.callback-url')\n c.GlobusOAuthenticator.client_id = get_config('auth.globus.client-id')\n c.GlobusOAuthenticator.client_secret = get_config('auth.globus.client-secret')\n c.GlobusOAuthenticator.identity_provider = get_config('auth.globus.identity-provider', '')\nelif auth_type == 'hmac':\n c.JupyterHub.authenticator_class = 'hmacauthenticator.HMACAuthenticator'\n c.HMACAuthenticator.secret_key = bytes.fromhex(get_config('auth.hmac.secret-key'))\nelif auth_type == 'dummy':\n 
c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'\n c.DummyAuthenticator.password = get_config('auth.dummy.password', None)\nelif auth_type == 'tmp':\n c.JupyterHub.authenticator_class = 'tmpauthenticator.TmpAuthenticator'\nelif auth_type == 'lti':\n c.JupyterHub.authenticator_class = 'ltiauthenticator.LTIAuthenticator'\n c.LTIAuthenticator.consumers = get_config('auth.lti.consumers')\nelif auth_type == 'custom':\n # full_class_name looks like \"myauthenticator.MyAuthenticator\".\n # To create a docker image with this class availabe, you can just have the\n # following Dockerifle:\n # FROM jupyterhub/k8s-hub:v0.4\n # RUN pip3 install myauthenticator\n full_class_name = get_config('auth.custom.class-name')\n c.JupyterHub.authenticator_class = full_class_name\n auth_class_name = full_class_name.rsplit('.', 1)[-1]\n auth_config = c[auth_class_name]\n auth_config.update(get_config('auth.custom.config') or {})\nelse:\n raise ValueError(\"Unhandled auth type: %r\" % auth_type)\n\nc.Authenticator.enable_auth_state = get_config('auth.state.enabled', False)\n\ndef generate_user_email(spawner):\n \"\"\"\n Used as the EMAIL environment variable\n \"\"\"\n return '{username}@{domain}'.format(\n username=spawner.user.name, domain=email_domain\n )\n\ndef generate_user_name(spawner):\n \"\"\"\n Used as GIT_AUTHOR_NAME and GIT_COMMITTER_NAME environment variables\n \"\"\"\n return spawner.user.name\n\nc.KubeSpawner.environment = {\n 'EMAIL': generate_user_email,\n # git requires these committer attributes\n 'GIT_AUTHOR_NAME': generate_user_name,\n 'GIT_COMMITTER_NAME': generate_user_name\n}\n\nc.KubeSpawner.environment.update(get_config('singleuser.extra-env', {}))\n\n# Enable admins to access user servers\nc.JupyterHub.admin_access = get_config('auth.admin.access')\nc.Authenticator.admin_users = get_config('auth.admin.users', [])\nc.Authenticator.whitelist = get_config('auth.whitelist.users', [])\n\nc.JupyterHub.base_url = get_config('hub.base_url')\n\nc.JupyterHub.services = []\n\nif get_config('cull.enabled', False):\n cull_timeout = get_config('cull.timeout')\n cull_every = get_config('cull.every')\n cull_cmd = [\n '/usr/local/bin/cull_idle_servers.py',\n '--timeout=%s' % cull_timeout,\n '--cull-every=%s' % cull_every,\n '--url=http://127.0.0.1:8081' + c.JupyterHub.base_url + 'hub/api'\n ]\n if get_config('cull.users'):\n cull_cmd.append('--cull-users')\n c.JupyterHub.services.append({\n 'name': 'cull-idle',\n 'admin': True,\n 'command': cull_cmd,\n })\n\nfor name, service in get_config('hub.services', {}).items():\n api_token = get_secret('services.token.%s' % name)\n # jupyterhub.services is a list of dicts, but\n # in the helm chart it is a dict of dicts for easier merged-config\n service.setdefault('name', name)\n if api_token:\n service['api_token'] = api_token\n c.JupyterHub.services.append(service)\n\n\nc.JupyterHub.db_url = get_config('hub.db_url')\n\ncmd = get_config('singleuser.cmd', None)\nif cmd:\n c.Spawner.cmd = cmd\n\ndefault_url = get_config('singleuser.default-url', None)\nif default_url:\n c.Spawner.default_url = default_url\n\nscheduler_strategy = get_config('singleuser.scheduler-strategy', 'spread')\n\nif scheduler_strategy == 'pack':\n # FIXME: Support setting affinity directly in KubeSpawner\n c.KubeSpawner.singleuser_extra_pod_config = {\n 'affinity': {\n 'podAffinity': {\n 'preferredDuringSchedulingIgnoredDuringExecution': [{\n 'weight': 100,\n 'podAffinityTerm': {\n 'labelSelector': {\n 'matchExpressions': [{\n 'key': 'component',\n 'operator': 'In',\n 
'values': ['singleuser-server']\n }]\n },\n 'topologyKey': 'kubernetes.io/hostname'\n }\n }],\n }\n }\n }\nelse:\n # Set default to {} so subconfigs can easily update it\n c.KubeSpawner.singleuser_extra_pod_config = {}\n\nextra_configs = sorted(glob.glob('/etc/jupyterhub/config/hub.extra-config.*.py'))\nfor ec in extra_configs:\n load_subconfig(ec)\n", "path": "images/hub/jupyterhub_config.py"}], "after_files": [{"content": "import os\nimport glob\nfrom tornado.httpclient import AsyncHTTPClient\nfrom kubernetes import client\n\nfrom z2jh import get_config, get_secret\n\n# Configure JupyterHub to use the curl backend for making HTTP requests,\n# rather than the pure-python implementations. The default one starts\n# being too slow to make a large number of requests to the proxy API\n# at the rate required.\nAsyncHTTPClient.configure(\"tornado.curl_httpclient.CurlAsyncHTTPClient\")\n\nc.JupyterHub.spawner_class = 'kubespawner.KubeSpawner'\n\n# Connect to a proxy running in a different pod\nc.ConfigurableHTTPProxy.api_url = 'http://{}:{}'.format(os.environ['PROXY_API_SERVICE_HOST'], int(os.environ['PROXY_API_SERVICE_PORT']))\nc.ConfigurableHTTPProxy.should_start = False\n\n# Do not shut down user pods when hub is restarted\nc.JupyterHub.cleanup_servers = False\n\n# Check that the proxy has routes appropriately setup\n# This isn't the best named setting :D\nc.JupyterHub.last_activity_interval = 60\n\n# Max number of servers that can be spawning at any one time\nc.JupyterHub.concurrent_spawn_limit = get_config('hub.concurrent-spawn-limit')\n\nactive_server_limit = get_config('hub.active-server-limit', None)\n\nif active_server_limit is not None:\n c.JupyterHub.active_server_limit = int(active_server_limit)\n\nc.JupyterHub.ip = os.environ['PROXY_PUBLIC_SERVICE_HOST']\nc.JupyterHub.port = int(os.environ['PROXY_PUBLIC_SERVICE_PORT'])\n\n# the hub should listen on all interfaces, so the proxy can access it\nc.JupyterHub.hub_ip = '0.0.0.0'\n\nc.KubeSpawner.namespace = os.environ.get('POD_NAMESPACE', 'default')\n\nc.KubeSpawner.start_timeout = get_config('singleuser.start-timeout')\n\n# Use env var for this, since we want hub to restart when this changes\nc.KubeSpawner.singleuser_image_spec = os.environ['SINGLEUSER_IMAGE']\n\nc.KubeSpawner.singleuser_extra_labels = get_config('singleuser.extra-labels', {})\n\nc.KubeSpawner.singleuser_uid = get_config('singleuser.uid')\nc.KubeSpawner.singleuser_fs_gid = get_config('singleuser.fs-gid')\n\nservice_account_name = get_config('singleuser.service-account-name', None)\nif service_account_name:\n c.KubeSpawner.singleuser_service_account = service_account_name\n\nc.KubeSpawner.singleuser_node_selector = get_config('singleuser.node-selector')\n# Configure dynamically provisioning pvc\nstorage_type = get_config('singleuser.storage.type')\nif storage_type == 'dynamic':\n c.KubeSpawner.pvc_name_template = 'claim-{username}{servername}'\n c.KubeSpawner.user_storage_pvc_ensure = True\n storage_class = get_config('singleuser.storage.dynamic.storage-class', None)\n if storage_class:\n c.KubeSpawner.user_storage_class = storage_class\n c.KubeSpawner.user_storage_access_modes = ['ReadWriteOnce']\n c.KubeSpawner.user_storage_capacity = get_config('singleuser.storage.capacity')\n\n # Add volumes to singleuser pods\n c.KubeSpawner.volumes = [\n {\n 'name': 'volume-{username}{servername}',\n 'persistentVolumeClaim': {\n 'claimName': 'claim-{username}{servername}'\n }\n }\n ]\n c.KubeSpawner.volume_mounts = [\n {\n 'mountPath': 
get_config('singleuser.storage.home_mount_path'),\n 'name': 'volume-{username}{servername}'\n }\n ]\nelif storage_type == 'static':\n pvc_claim_name = get_config('singleuser.storage.static.pvc-name')\n c.KubeSpawner.volumes = [{\n 'name': 'home',\n 'persistentVolumeClaim': {\n 'claimName': pvc_claim_name\n }\n }]\n\n c.KubeSpawner.volume_mounts = [{\n 'mountPath': get_config('singleuser.storage.home_mount_path'),\n 'name': 'home',\n 'subPath': get_config('singleuser.storage.static.sub-path')\n }]\n\nc.KubeSpawner.volumes.extend(get_config('singleuser.storage.extra-volumes', []))\nc.KubeSpawner.volume_mounts.extend(get_config('singleuser.storage.extra-volume-mounts', []))\n\nlifecycle_hooks = get_config('singleuser.lifecycle-hooks')\nif lifecycle_hooks:\n c.KubeSpawner.singleuser_lifecycle_hooks = lifecycle_hooks\n\ninit_containers = get_config('singleuser.init-containers')\nif init_containers:\n c.KubeSpawner.singleuser_init_containers.extend(init_containers)\n\n# Gives spawned containers access to the API of the hub\nc.KubeSpawner.hub_connect_ip = os.environ['HUB_SERVICE_HOST']\nc.KubeSpawner.hub_connect_port = int(os.environ['HUB_SERVICE_PORT'])\n\nc.JupyterHub.hub_connect_ip = os.environ['HUB_SERVICE_HOST']\nc.JupyterHub.hub_connect_port = int(os.environ['HUB_SERVICE_PORT'])\n\nc.KubeSpawner.mem_limit = get_config('singleuser.memory.limit')\nc.KubeSpawner.mem_guarantee = get_config('singleuser.memory.guarantee')\nc.KubeSpawner.cpu_limit = get_config('singleuser.cpu.limit')\nc.KubeSpawner.cpu_guarantee = get_config('singleuser.cpu.guarantee')\n\n# Allow switching authenticators easily\nauth_type = get_config('auth.type')\nemail_domain = 'local'\n\nif auth_type == 'google':\n c.JupyterHub.authenticator_class = 'oauthenticator.GoogleOAuthenticator'\n c.GoogleOAuthenticator.client_id = get_config('auth.google.client-id')\n c.GoogleOAuthenticator.client_secret = get_config('auth.google.client-secret')\n c.GoogleOAuthenticator.oauth_callback_url = get_config('auth.google.callback-url')\n c.GoogleOAuthenticator.hosted_domain = get_config('auth.google.hosted-domain')\n c.GoogleOAuthenticator.login_service = get_config('auth.google.login-service')\n email_domain = get_config('auth.google.hosted-domain')\nelif auth_type == 'github':\n c.JupyterHub.authenticator_class = 'oauthenticator.GitHubOAuthenticator'\n c.GitHubOAuthenticator.oauth_callback_url = get_config('auth.github.callback-url')\n c.GitHubOAuthenticator.client_id = get_config('auth.github.client-id')\n c.GitHubOAuthenticator.client_secret = get_config('auth.github.client-secret')\nelif auth_type == 'cilogon':\n c.JupyterHub.authenticator_class = 'oauthenticator.CILogonOAuthenticator'\n c.CILogonOAuthenticator.oauth_callback_url = get_config('auth.cilogon.callback-url')\n c.CILogonOAuthenticator.client_id = get_config('auth.cilogon.client-id')\n c.CILogonOAuthenticator.client_secret = get_config('auth.cilogon.client-secret')\nelif auth_type == 'gitlab':\n c.JupyterHub.authenticator_class = 'oauthenticator.gitlab.GitLabOAuthenticator'\n c.GitLabOAuthenticator.oauth_callback_url = get_config('auth.gitlab.callback-url')\n c.GitLabOAuthenticator.client_id = get_config('auth.gitlab.client-id')\n c.GitLabOAuthenticator.client_secret = get_config('auth.gitlab.client-secret')\nelif auth_type == 'mediawiki':\n c.JupyterHub.authenticator_class = 'oauthenticator.mediawiki.MWOAuthenticator'\n c.MWOAuthenticator.client_id = get_config('auth.mediawiki.client-id')\n c.MWOAuthenticator.client_secret = get_config('auth.mediawiki.client-secret')\n 
c.MWOAuthenticator.index_url = get_config('auth.mediawiki.index-url')\nelif auth_type == 'globus':\n c.JupyterHub.authenticator_class = 'oauthenticator.globus.GlobusOAuthenticator'\n c.GlobusOAuthenticator.oauth_callback_url = get_config('auth.globus.callback-url')\n c.GlobusOAuthenticator.client_id = get_config('auth.globus.client-id')\n c.GlobusOAuthenticator.client_secret = get_config('auth.globus.client-secret')\n c.GlobusOAuthenticator.identity_provider = get_config('auth.globus.identity-provider', '')\nelif auth_type == 'hmac':\n c.JupyterHub.authenticator_class = 'hmacauthenticator.HMACAuthenticator'\n c.HMACAuthenticator.secret_key = bytes.fromhex(get_config('auth.hmac.secret-key'))\nelif auth_type == 'dummy':\n c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'\n c.DummyAuthenticator.password = get_config('auth.dummy.password', None)\nelif auth_type == 'tmp':\n c.JupyterHub.authenticator_class = 'tmpauthenticator.TmpAuthenticator'\nelif auth_type == 'lti':\n c.JupyterHub.authenticator_class = 'ltiauthenticator.LTIAuthenticator'\n c.LTIAuthenticator.consumers = get_config('auth.lti.consumers')\nelif auth_type == 'custom':\n # full_class_name looks like \"myauthenticator.MyAuthenticator\".\n # To create a docker image with this class availabe, you can just have the\n # following Dockerifle:\n # FROM jupyterhub/k8s-hub:v0.4\n # RUN pip3 install myauthenticator\n full_class_name = get_config('auth.custom.class-name')\n c.JupyterHub.authenticator_class = full_class_name\n auth_class_name = full_class_name.rsplit('.', 1)[-1]\n auth_config = c[auth_class_name]\n auth_config.update(get_config('auth.custom.config') or {})\nelse:\n raise ValueError(\"Unhandled auth type: %r\" % auth_type)\n\nc.Authenticator.enable_auth_state = get_config('auth.state.enabled', False)\n\ndef generate_user_email(spawner):\n \"\"\"\n Used as the EMAIL environment variable\n \"\"\"\n return '{username}@{domain}'.format(\n username=spawner.user.name, domain=email_domain\n )\n\ndef generate_user_name(spawner):\n \"\"\"\n Used as GIT_AUTHOR_NAME and GIT_COMMITTER_NAME environment variables\n \"\"\"\n return spawner.user.name\n\nc.KubeSpawner.environment = {\n 'EMAIL': generate_user_email,\n # git requires these committer attributes\n 'GIT_AUTHOR_NAME': generate_user_name,\n 'GIT_COMMITTER_NAME': generate_user_name\n}\n\nc.KubeSpawner.environment.update(get_config('singleuser.extra-env', {}))\n\n# Enable admins to access user servers\nc.JupyterHub.admin_access = get_config('auth.admin.access')\nc.Authenticator.admin_users = get_config('auth.admin.users', [])\nc.Authenticator.whitelist = get_config('auth.whitelist.users', [])\n\nc.JupyterHub.base_url = get_config('hub.base_url')\n\nc.JupyterHub.services = []\n\nif get_config('cull.enabled', False):\n cull_timeout = get_config('cull.timeout')\n cull_every = get_config('cull.every')\n cull_cmd = [\n '/usr/local/bin/cull_idle_servers.py',\n '--timeout=%s' % cull_timeout,\n '--cull-every=%s' % cull_every,\n '--url=http://127.0.0.1:8081' + c.JupyterHub.base_url + 'hub/api'\n ]\n if get_config('cull.users'):\n cull_cmd.append('--cull-users')\n c.JupyterHub.services.append({\n 'name': 'cull-idle',\n 'admin': True,\n 'command': cull_cmd,\n })\n\nfor name, service in get_config('hub.services', {}).items():\n api_token = get_secret('services.token.%s' % name)\n # jupyterhub.services is a list of dicts, but\n # in the helm chart it is a dict of dicts for easier merged-config\n service.setdefault('name', name)\n if api_token:\n service['api_token'] = 
api_token\n c.JupyterHub.services.append(service)\n\n\nc.JupyterHub.db_url = get_config('hub.db_url')\n\ncmd = get_config('singleuser.cmd', None)\nif cmd:\n c.Spawner.cmd = cmd\n\ndefault_url = get_config('singleuser.default-url', None)\nif default_url:\n c.Spawner.default_url = default_url\n\ncloud_metadata = get_config('singleuser.cloud-metadata', {})\n\nif not cloud_metadata.get('enabled', False):\n # Use iptables to block access to cloud metadata by default\n network_tools_image_name = get_config('singleuser.network-tools.image.name')\n network_tools_image_tag = get_config('singleuser.network-tools.image.tag')\n ip_block_container = client.V1Container(\n name=\"block-cloud-metadata\",\n image=f\"{network_tools_image_name}:{network_tools_image_tag}\",\n command=[\n 'iptables',\n '-A', 'OUTPUT',\n '-d', cloud_metadata.get('ip', '169.254.169.254'),\n '-j', 'DROP'\n ],\n security_context=client.V1SecurityContext(\n privileged=True,\n run_as_user=0,\n capabilities=client.V1Capabilities(add=['NET_ADMIN'])\n )\n )\n\n c.KubeSpawner.singleuser_init_containers.append(ip_block_container)\n\nscheduler_strategy = get_config('singleuser.scheduler-strategy', 'spread')\n\nif scheduler_strategy == 'pack':\n # FIXME: Support setting affinity directly in KubeSpawner\n c.KubeSpawner.singleuser_extra_pod_config = {\n 'affinity': {\n 'podAffinity': {\n 'preferredDuringSchedulingIgnoredDuringExecution': [{\n 'weight': 100,\n 'podAffinityTerm': {\n 'labelSelector': {\n 'matchExpressions': [{\n 'key': 'component',\n 'operator': 'In',\n 'values': ['singleuser-server']\n }]\n },\n 'topologyKey': 'kubernetes.io/hostname'\n }\n }],\n }\n }\n }\nelse:\n # Set default to {} so subconfigs can easily update it\n c.KubeSpawner.singleuser_extra_pod_config = {}\n\nextra_configs = sorted(glob.glob('/etc/jupyterhub/config/hub.extra-config.*.py'))\nfor ec in extra_configs:\n load_subconfig(ec)\n", "path": "images/hub/jupyterhub_config.py"}]} | 3,944 | 463 |
gh_patches_debug_33052 | rasdani/github-patches | git_diff | ansible__ansible-modules-extras-2818 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
vmware_local_user_manager error: 'module' not defined
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_local_user_manager
##### ANSIBLE VERSION
```
ansible 2.2.0 (devel 321d2e8cee) last updated 2016/08/21 14:29:27 (GMT +200)
lib/ansible/modules/core: (detached HEAD 91a839f1e3) last updated 2016/08/21 14:32:29 (GMT +200)
lib/ansible/modules/extras: (detached HEAD 1aeb9f8a8c) last updated 2016/08/21 14:32:44 (GMT +200)
```
##### CONFIGURATION
None
##### OS / ENVIRONMENT
Execution from: Ubuntu 14.04
Execution to: VMware ESXi 6.0 U2
##### SUMMARY
Execution of the vmware_local_user_manager module fails with the error `global name 'module' is not defined` (full output below).
##### STEPS TO REPRODUCE
```
- name: "vSphere ESXi: Add users"
vmware_local_user_manager: validate_certs=False hostname={{ inventory_hostname }} username=root password=mypassword local_user_name=foo local_user_password=bar
```
##### EXPECTED RESULTS
The local user is created on the target system.
##### ACTUAL RESULTS
Error:
```
TASK [vSphere ESXi: Add users] *************************************************
task path: /home/devel/ansible-configuration/vmware.yml:15
Using module file /home/devel/ansible/lib/ansible/modules/extras/cloud/vmware/vmware_local_user_manager.py
<esxi> ESTABLISH LOCAL CONNECTION FOR USER: devel
<esxi> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382 `" && echo ansible-tmp-1471786926.92-121489380863382="` echo $HOME/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382 `" ) && sleep 0'
<esxi> PUT /tmp/tmpHVuXHh TO /home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/vmware_local_user_manager.py
<esxi> EXEC /bin/sh -c 'chmod u+x /home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/ /home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/vmware_local_user_manager.py && sleep 0'
<esxi> EXEC /bin/sh -c '/usr/bin/python /home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/vmware_local_user_manager.py; rm -rf "/home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/" > /dev/null 2>&1 && sleep 0'
fatal: [esxi]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"hostname": "esxi",
"local_user_description": null,
"local_user_name": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"local_user_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"state": "present",
"username": "devel",
"validate_certs": false
},
"module_name": "vmware_local_user_manager"
},
"msg": "global name 'module' is not defined"
}
```
--- END ISSUE ---
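The failing message `global name 'module' is not defined` (the Python 2 wording of a `NameError`) usually means a class method refers to a bare name `module` that only exists as a local variable inside `main()`. The sketch below reproduces that pattern outside Ansible; the class and method names are illustrative only and are not taken from the real module.

```python
# Minimal reproduction of the failure pattern; no Ansible or pyVmomi needed.

class FakeAnsibleModule(object):
    """Stand-in for AnsibleModule: just records fail_json calls."""
    def fail_json(self, **kwargs):
        print("fail_json:", kwargs)


class UserManager(object):
    def __init__(self, module):
        self.module = module  # the module handle is stored on the instance

    def remove_user(self):
        try:
            raise RuntimeError("simulated API fault")
        except RuntimeError as fault:
            # BUG: 'module' is only a local variable inside main(), so this
            # line raises NameError ("global name 'module' is not defined"
            # under Python 2), mirroring the reported failure.
            module.fail_json(msg=str(fault))


def main():
    module = FakeAnsibleModule()      # local name, invisible to UserManager methods
    UserManager(module).remove_user()


if __name__ == "__main__":
    main()
```

Replacing the bare `module.fail_json(...)` with `self.module.fail_json(...)` inside the method lets the sketch run cleanly, which is the same shape of change the patch later in this record applies.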
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cloud/vmware/vmware_local_user_manager.py`
Content:
```
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # Copyright IBM Corp. 2016
5 # Author(s): Andreas Nafpliotis <[email protected]>
6
7 # This file is part of Ansible
8 #
9 # Ansible is free software: you can redistribute it and/or modify
10 # it under the terms of the GNU General Public License as published by
11 # the Free Software Foundation, either version 3 of the License, or
12 # (at your option) any later version.
13 #
14 # Ansible is distributed in the hope that it will be useful,
15 # but WITHOUT ANY WARRANTY; without even the implied warranty of
16 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
17 # GNU General Public License for more details.
18 #
19 # You should have received a copy of the GNU General Public License
20 # along with Ansible. If not, see <http://www.gnu.org/licenses/
21
22 DOCUMENTATION = '''
23 ---
24 module: vmware_local_user_manager
25 short_description: Manage local users on an ESXi host
26 description:
27 - Manage local users on an ESXi host
28 version_added: "2.2"
29 author: Andreas Nafpliotis
30 notes:
31 - Tested on ESXi 6.0
32 - Be sure that the ESXi user used for login, has the appropriate rights to create / delete / edit users
33 requirements:
34 - "python >= 2.6"
35 - PyVmomi installed
36 options:
37 local_user_name:
38 description:
39 - The local user name to be changed
40 required: True
41 local_user_password:
42 description:
43 - The password to be set
44 required: False
45 local_user_description:
46 description:
47 - Description for the user
48 required: False
49 state:
50 description:
51 - Indicate desired state of the user. If the user already exists when C(state=present), the user info is updated
52 choices: ['present', 'absent']
53 default: present
54 extends_documentation_fragment: vmware.documentation
55 '''
56
57 EXAMPLES = '''
58 # Example vmware_local_user_manager command from Ansible Playbooks
59 - name: Add local user to ESXi
60 local_action:
61 module: vmware_local_user_manager
62 hostname: esxi_hostname
63 username: root
64 password: vmware
65 local_user_name: foo
66 '''
67
68 RETURN = '''# '''
69
70 try:
71 from pyVmomi import vim, vmodl
72 HAS_PYVMOMI = True
73 except ImportError:
74 HAS_PYVMOMI = False
75
76
77 class VMwareLocalUserManager(object):
78 def __init__(self, module):
79 self.module = module
80 self.content = connect_to_api(self.module)
81 self.local_user_name = self.module.params['local_user_name']
82 self.local_user_password = self.module.params['local_user_password']
83 self.local_user_description = self.module.params['local_user_description']
84 self.state = self.module.params['state']
85
86 def process_state(self):
87 try:
88 local_account_manager_states = {
89 'absent': {
90 'present': self.state_remove_user,
91 'absent': self.state_exit_unchanged,
92 },
93 'present': {
94 'present': self.state_update_user,
95 'absent': self.state_create_user,
96 }
97 }
98
99 local_account_manager_states[self.state][self.check_local_user_manager_state()]()
100 except vmodl.RuntimeFault as runtime_fault:
101 self.module.fail_json(msg=runtime_fault.msg)
102 except vmodl.MethodFault as method_fault:
103 self.module.fail_json(msg=method_fault.msg)
104 except Exception as e:
105 self.module.fail_json(msg=str(e))
106
107
108 def check_local_user_manager_state(self):
109 user_account = self.find_user_account()
110 if not user_account:
111 return 'absent'
112 else:
113 return 'present'
114
115
116 def find_user_account(self):
117 searchStr = self.local_user_name
118 exactMatch = True
119 findUsers = True
120 findGroups = False
121 user_account = self.content.userDirectory.RetrieveUserGroups(None, searchStr, None, None, exactMatch, findUsers, findGroups)
122 return user_account
123
124
125 def create_account_spec(self):
126 account_spec = vim.host.LocalAccountManager.AccountSpecification()
127 account_spec.id = self.local_user_name
128 account_spec.password = self.local_user_password
129 account_spec.description = self.local_user_description
130 return account_spec
131
132
133 def state_create_user(self):
134 account_spec = self.create_account_spec()
135
136 try:
137 task = self.content.accountManager.CreateUser(account_spec)
138 self.module.exit_json(changed=True)
139 except vmodl.RuntimeFault as runtime_fault:
140 module.fail_json(msg=runtime_fault.msg)
141 except vmodl.MethodFault as method_fault:
142 module.fail_json(msg=method_fault.msg)
143
144 def state_update_user(self):
145 account_spec = self.create_account_spec()
146
147 try:
148 task = self.content.accountManager.UpdateUser(account_spec)
149 self.module.exit_json(changed=True)
150 except vmodl.RuntimeFault as runtime_fault:
151 module.fail_json(msg=runtime_fault.msg)
152 except vmodl.MethodFault as method_fault:
153 module.fail_json(msg=method_fault.msg)
154
155
156 def state_remove_user(self):
157 try:
158 task = self.content.accountManager.RemoveUser(self.local_user_name)
159 self.module.exit_json(changed=True)
160 except vmodl.RuntimeFault as runtime_fault:
161 module.fail_json(msg=runtime_fault.msg)
162 except vmodl.MethodFault as method_fault:
163 module.fail_json(msg=method_fault.msg)
164
165
166 def state_exit_unchanged(self):
167 self.module.exit_json(changed=False)
168
169
170
171 def main():
172
173 argument_spec = vmware_argument_spec()
174 argument_spec.update(dict(local_user_name=dict(required=True, type='str'),
175 local_user_password=dict(required=False, type='str', no_log=True),
176 local_user_description=dict(required=False, type='str'),
177 state=dict(default='present', choices=['present', 'absent'], type='str')))
178
179 module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=False)
180
181 if not HAS_PYVMOMI:
182 module.fail_json(msg='pyvmomi is required for this module')
183
184 vmware_local_user_manager = VMwareLocalUserManager(module)
185 vmware_local_user_manager.process_state()
186
187 from ansible.module_utils.vmware import *
188 from ansible.module_utils.basic import *
189
190 if __name__ == '__main__':
191 main()
192
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cloud/vmware/vmware_local_user_manager.py b/cloud/vmware/vmware_local_user_manager.py
--- a/cloud/vmware/vmware_local_user_manager.py
+++ b/cloud/vmware/vmware_local_user_manager.py
@@ -137,9 +137,9 @@
task = self.content.accountManager.CreateUser(account_spec)
self.module.exit_json(changed=True)
except vmodl.RuntimeFault as runtime_fault:
- module.fail_json(msg=runtime_fault.msg)
+ self.module.fail_json(msg=runtime_fault.msg)
except vmodl.MethodFault as method_fault:
- module.fail_json(msg=method_fault.msg)
+ self.module.fail_json(msg=method_fault.msg)
def state_update_user(self):
account_spec = self.create_account_spec()
@@ -148,9 +148,9 @@
task = self.content.accountManager.UpdateUser(account_spec)
self.module.exit_json(changed=True)
except vmodl.RuntimeFault as runtime_fault:
- module.fail_json(msg=runtime_fault.msg)
+ self.module.fail_json(msg=runtime_fault.msg)
except vmodl.MethodFault as method_fault:
- module.fail_json(msg=method_fault.msg)
+ self.module.fail_json(msg=method_fault.msg)
def state_remove_user(self):
@@ -158,9 +158,9 @@
task = self.content.accountManager.RemoveUser(self.local_user_name)
self.module.exit_json(changed=True)
except vmodl.RuntimeFault as runtime_fault:
- module.fail_json(msg=runtime_fault.msg)
+ self.module.fail_json(msg=runtime_fault.msg)
except vmodl.MethodFault as method_fault:
- module.fail_json(msg=method_fault.msg)
+ self.module.fail_json(msg=method_fault.msg)
def state_exit_unchanged(self):
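After applying a change like the one above, a rough way to confirm that no bare `module` references remain inside the class is to walk the AST and flag any `module` name used in a method that does not take `module` as a parameter. This is an editorial sanity check, not part of the repository's test suite; the default path is an assumption and should point at your checkout.

```python
# Rough check that no bare 'module' references remain inside class methods.
import ast
import sys


def bare_module_references(path):
    tree = ast.parse(open(path, encoding="utf-8").read())
    hits = []
    for cls in (n for n in tree.body if isinstance(n, ast.ClassDef)):
        for func in (n for n in cls.body if isinstance(n, ast.FunctionDef)):
            # Methods such as __init__ legitimately take a 'module' argument.
            if "module" in {a.arg for a in func.args.args}:
                continue
            hits.extend(
                (cls.name, func.name, node.lineno)
                for node in ast.walk(func)
                if isinstance(node, ast.Name) and node.id == "module"
            )
    return hits


if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "cloud/vmware/vmware_local_user_manager.py"
    for cls_name, func_name, lineno in bare_module_references(target):
        print("{}:{} uses bare 'module' in {}.{}".format(target, lineno, cls_name, func_name))
```

On the unpatched file this reports the six offending `fail_json` lines; on the patched file it reports nothing, since `self.module` no longer contains a bare `module` name node.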
| {"golden_diff": "diff --git a/cloud/vmware/vmware_local_user_manager.py b/cloud/vmware/vmware_local_user_manager.py\n--- a/cloud/vmware/vmware_local_user_manager.py\n+++ b/cloud/vmware/vmware_local_user_manager.py\n@@ -137,9 +137,9 @@\n task = self.content.accountManager.CreateUser(account_spec)\n self.module.exit_json(changed=True)\n except vmodl.RuntimeFault as runtime_fault:\n- module.fail_json(msg=runtime_fault.msg)\n+ self.module.fail_json(msg=runtime_fault.msg)\n except vmodl.MethodFault as method_fault:\n- module.fail_json(msg=method_fault.msg)\n+ self.module.fail_json(msg=method_fault.msg)\n \n def state_update_user(self):\n account_spec = self.create_account_spec()\n@@ -148,9 +148,9 @@\n task = self.content.accountManager.UpdateUser(account_spec)\n self.module.exit_json(changed=True)\n except vmodl.RuntimeFault as runtime_fault:\n- module.fail_json(msg=runtime_fault.msg)\n+ self.module.fail_json(msg=runtime_fault.msg)\n except vmodl.MethodFault as method_fault:\n- module.fail_json(msg=method_fault.msg)\n+ self.module.fail_json(msg=method_fault.msg)\n \n \n def state_remove_user(self):\n@@ -158,9 +158,9 @@\n task = self.content.accountManager.RemoveUser(self.local_user_name)\n self.module.exit_json(changed=True)\n except vmodl.RuntimeFault as runtime_fault:\n- module.fail_json(msg=runtime_fault.msg)\n+ self.module.fail_json(msg=runtime_fault.msg)\n except vmodl.MethodFault as method_fault:\n- module.fail_json(msg=method_fault.msg)\n+ self.module.fail_json(msg=method_fault.msg)\n \n \n def state_exit_unchanged(self):\n", "issue": "vmware_local_user_manager error: 'module' not defined\n##### ISSUE TYPE\n- Bug Report\n##### COMPONENT NAME\n\nvmware_local_user_manager\n##### ANSIBLE VERSION\n\n```\nansible 2.2.0 (devel 321d2e8cee) last updated 2016/08/21 14:29:27 (GMT +200)\n lib/ansible/modules/core: (detached HEAD 91a839f1e3) last updated 2016/08/21 14:32:29 (GMT +200)\n lib/ansible/modules/extras: (detached HEAD 1aeb9f8a8c) last updated 2016/08/21 14:32:44 (GMT +200)\n```\n##### CONFIGURATION\n\nNone\n##### OS / ENVIRONMENT\n\nExecution from: Ubuntu 14.04\nExecution to: VMware ESXi 6.0 U2\n##### SUMMARY\n\nExecution of module vmware_local_user_manager fails with error\n##### STEPS TO REPRODUCE\n\n```\n- name: \"vSphere ESXi: Add users\"\n vmware_local_user_manager: validate_certs=False hostname={{ inventory_hostname }} username=root password=mypassword local_user_name=foo local_user_password=bar\n```\n##### EXPECTED RESULTS\n\nCreate user on target system\n##### ACTUAL RESULTS\n\nError:\n\n```\nTASK [vSphere ESXi: Add users] *************************************************\ntask path: /home/devel/ansible-configuration/vmware.yml:15\nUsing module file /home/devel/ansible/lib/ansible/modules/extras/cloud/vmware/vmware_local_user_manager.py\n<esxi> ESTABLISH LOCAL CONNECTION FOR USER: devel\n<esxi> EXEC /bin/sh -c '( umask 77 && mkdir -p \"` echo $HOME/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382 `\" && echo ansible-tmp-1471786926.92-121489380863382=\"` echo $HOME/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382 `\" ) && sleep 0'\n<esxi> PUT /tmp/tmpHVuXHh TO /home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/vmware_local_user_manager.py\n<esxi> EXEC /bin/sh -c 'chmod u+x /home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/ /home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/vmware_local_user_manager.py && sleep 0'\n<esxi> EXEC /bin/sh -c '/usr/bin/python 
/home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/vmware_local_user_manager.py; rm -rf \"/home/devel/.ansible/tmp/ansible-tmp-1471786926.92-121489380863382/\" > /dev/null 2>&1 && sleep 0'\nfatal: [esxi]: FAILED! => {\n \"changed\": false, \n \"failed\": true, \n \"invocation\": {\n \"module_args\": {\n \"hostname\": \"esxi\", \n \"local_user_description\": null, \n \"local_user_name\": \"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\", \n \"local_user_password\": \"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\", \n \"password\": \"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\", \n \"state\": \"present\", \n \"username\": \"devel\", \n \"validate_certs\": false\n }, \n \"module_name\": \"vmware_local_user_manager\"\n }, \n \"msg\": \"global name 'module' is not defined\"\n}\n```\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright IBM Corp. 2016\n# Author(s): Andreas Nafpliotis <[email protected]>\n\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/\n\nDOCUMENTATION = '''\n---\nmodule: vmware_local_user_manager\nshort_description: Manage local users on an ESXi host\ndescription:\n - Manage local users on an ESXi host\nversion_added: \"2.2\"\nauthor: Andreas Nafpliotis\nnotes:\n - Tested on ESXi 6.0\n - Be sure that the ESXi user used for login, has the appropriate rights to create / delete / edit users\nrequirements:\n - \"python >= 2.6\"\n - PyVmomi installed\noptions:\n local_user_name:\n description:\n - The local user name to be changed\n required: True\n local_user_password:\n description:\n - The password to be set\n required: False\n local_user_description:\n description:\n - Description for the user\n required: False\n state:\n description:\n - Indicate desired state of the user. 
If the user already exists when C(state=present), the user info is updated\n choices: ['present', 'absent']\n default: present\nextends_documentation_fragment: vmware.documentation\n'''\n\nEXAMPLES = '''\n# Example vmware_local_user_manager command from Ansible Playbooks\n- name: Add local user to ESXi\n local_action:\n module: vmware_local_user_manager\n hostname: esxi_hostname\n username: root\n password: vmware\n local_user_name: foo\n'''\n\nRETURN = '''# '''\n\ntry:\n from pyVmomi import vim, vmodl\n HAS_PYVMOMI = True\nexcept ImportError:\n HAS_PYVMOMI = False\n\n\nclass VMwareLocalUserManager(object):\n def __init__(self, module):\n self.module = module\n self.content = connect_to_api(self.module)\n self.local_user_name = self.module.params['local_user_name']\n self.local_user_password = self.module.params['local_user_password']\n self.local_user_description = self.module.params['local_user_description']\n self.state = self.module.params['state']\n\n def process_state(self):\n try:\n local_account_manager_states = {\n 'absent': {\n 'present': self.state_remove_user,\n 'absent': self.state_exit_unchanged,\n },\n 'present': {\n 'present': self.state_update_user,\n 'absent': self.state_create_user,\n }\n }\n\n local_account_manager_states[self.state][self.check_local_user_manager_state()]()\n except vmodl.RuntimeFault as runtime_fault:\n self.module.fail_json(msg=runtime_fault.msg)\n except vmodl.MethodFault as method_fault:\n self.module.fail_json(msg=method_fault.msg)\n except Exception as e:\n self.module.fail_json(msg=str(e))\n\n\n def check_local_user_manager_state(self):\n user_account = self.find_user_account()\n if not user_account:\n return 'absent'\n else:\n return 'present'\n\n\n def find_user_account(self):\n searchStr = self.local_user_name\n exactMatch = True\n findUsers = True\n findGroups = False\n user_account = self.content.userDirectory.RetrieveUserGroups(None, searchStr, None, None, exactMatch, findUsers, findGroups)\n return user_account\n\n\n def create_account_spec(self):\n account_spec = vim.host.LocalAccountManager.AccountSpecification()\n account_spec.id = self.local_user_name\n account_spec.password = self.local_user_password\n account_spec.description = self.local_user_description\n return account_spec\n\n\n def state_create_user(self):\n account_spec = self.create_account_spec()\n\n try:\n task = self.content.accountManager.CreateUser(account_spec)\n self.module.exit_json(changed=True)\n except vmodl.RuntimeFault as runtime_fault:\n module.fail_json(msg=runtime_fault.msg)\n except vmodl.MethodFault as method_fault:\n module.fail_json(msg=method_fault.msg)\n\n def state_update_user(self):\n account_spec = self.create_account_spec()\n\n try:\n task = self.content.accountManager.UpdateUser(account_spec)\n self.module.exit_json(changed=True)\n except vmodl.RuntimeFault as runtime_fault:\n module.fail_json(msg=runtime_fault.msg)\n except vmodl.MethodFault as method_fault:\n module.fail_json(msg=method_fault.msg)\n\n\n def state_remove_user(self):\n try:\n task = self.content.accountManager.RemoveUser(self.local_user_name)\n self.module.exit_json(changed=True)\n except vmodl.RuntimeFault as runtime_fault:\n module.fail_json(msg=runtime_fault.msg)\n except vmodl.MethodFault as method_fault:\n module.fail_json(msg=method_fault.msg)\n\n\n def state_exit_unchanged(self):\n self.module.exit_json(changed=False)\n\n\n\ndef main():\n\n argument_spec = vmware_argument_spec()\n argument_spec.update(dict(local_user_name=dict(required=True, type='str'),\n 
local_user_password=dict(required=False, type='str', no_log=True),\n local_user_description=dict(required=False, type='str'),\n state=dict(default='present', choices=['present', 'absent'], type='str')))\n\n module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=False)\n\n if not HAS_PYVMOMI:\n module.fail_json(msg='pyvmomi is required for this module')\n\n vmware_local_user_manager = VMwareLocalUserManager(module)\n vmware_local_user_manager.process_state()\n\nfrom ansible.module_utils.vmware import *\nfrom ansible.module_utils.basic import *\n\nif __name__ == '__main__':\n main()\n", "path": "cloud/vmware/vmware_local_user_manager.py"}], "after_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright IBM Corp. 2016\n# Author(s): Andreas Nafpliotis <[email protected]>\n\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/\n\nDOCUMENTATION = '''\n---\nmodule: vmware_local_user_manager\nshort_description: Manage local users on an ESXi host\ndescription:\n - Manage local users on an ESXi host\nversion_added: \"2.2\"\nauthor: Andreas Nafpliotis\nnotes:\n - Tested on ESXi 6.0\n - Be sure that the ESXi user used for login, has the appropriate rights to create / delete / edit users\nrequirements:\n - \"python >= 2.6\"\n - PyVmomi installed\noptions:\n local_user_name:\n description:\n - The local user name to be changed\n required: True\n local_user_password:\n description:\n - The password to be set\n required: False\n local_user_description:\n description:\n - Description for the user\n required: False\n state:\n description:\n - Indicate desired state of the user. 
If the user already exists when C(state=present), the user info is updated\n choices: ['present', 'absent']\n default: present\nextends_documentation_fragment: vmware.documentation\n'''\n\nEXAMPLES = '''\n# Example vmware_local_user_manager command from Ansible Playbooks\n- name: Add local user to ESXi\n local_action:\n module: vmware_local_user_manager\n hostname: esxi_hostname\n username: root\n password: vmware\n local_user_name: foo\n'''\n\nRETURN = '''# '''\n\ntry:\n from pyVmomi import vim, vmodl\n HAS_PYVMOMI = True\nexcept ImportError:\n HAS_PYVMOMI = False\n\n\nclass VMwareLocalUserManager(object):\n def __init__(self, module):\n self.module = module\n self.content = connect_to_api(self.module)\n self.local_user_name = self.module.params['local_user_name']\n self.local_user_password = self.module.params['local_user_password']\n self.local_user_description = self.module.params['local_user_description']\n self.state = self.module.params['state']\n\n def process_state(self):\n try:\n local_account_manager_states = {\n 'absent': {\n 'present': self.state_remove_user,\n 'absent': self.state_exit_unchanged,\n },\n 'present': {\n 'present': self.state_update_user,\n 'absent': self.state_create_user,\n }\n }\n\n local_account_manager_states[self.state][self.check_local_user_manager_state()]()\n except vmodl.RuntimeFault as runtime_fault:\n self.module.fail_json(msg=runtime_fault.msg)\n except vmodl.MethodFault as method_fault:\n self.module.fail_json(msg=method_fault.msg)\n except Exception as e:\n self.module.fail_json(msg=str(e))\n\n\n def check_local_user_manager_state(self):\n user_account = self.find_user_account()\n if not user_account:\n return 'absent'\n else:\n return 'present'\n\n\n def find_user_account(self):\n searchStr = self.local_user_name\n exactMatch = True\n findUsers = True\n findGroups = False\n user_account = self.content.userDirectory.RetrieveUserGroups(None, searchStr, None, None, exactMatch, findUsers, findGroups)\n return user_account\n\n\n def create_account_spec(self):\n account_spec = vim.host.LocalAccountManager.AccountSpecification()\n account_spec.id = self.local_user_name\n account_spec.password = self.local_user_password\n account_spec.description = self.local_user_description\n return account_spec\n\n\n def state_create_user(self):\n account_spec = self.create_account_spec()\n\n try:\n task = self.content.accountManager.CreateUser(account_spec)\n self.module.exit_json(changed=True)\n except vmodl.RuntimeFault as runtime_fault:\n self.module.fail_json(msg=runtime_fault.msg)\n except vmodl.MethodFault as method_fault:\n self.module.fail_json(msg=method_fault.msg)\n\n def state_update_user(self):\n account_spec = self.create_account_spec()\n\n try:\n task = self.content.accountManager.UpdateUser(account_spec)\n self.module.exit_json(changed=True)\n except vmodl.RuntimeFault as runtime_fault:\n self.module.fail_json(msg=runtime_fault.msg)\n except vmodl.MethodFault as method_fault:\n self.module.fail_json(msg=method_fault.msg)\n\n\n def state_remove_user(self):\n try:\n task = self.content.accountManager.RemoveUser(self.local_user_name)\n self.module.exit_json(changed=True)\n except vmodl.RuntimeFault as runtime_fault:\n self.module.fail_json(msg=runtime_fault.msg)\n except vmodl.MethodFault as method_fault:\n self.module.fail_json(msg=method_fault.msg)\n\n\n def state_exit_unchanged(self):\n self.module.exit_json(changed=False)\n\n\n\ndef main():\n\n argument_spec = vmware_argument_spec()\n argument_spec.update(dict(local_user_name=dict(required=True, 
type='str'),\n local_user_password=dict(required=False, type='str', no_log=True),\n local_user_description=dict(required=False, type='str'),\n state=dict(default='present', choices=['present', 'absent'], type='str')))\n\n module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=False)\n\n if not HAS_PYVMOMI:\n module.fail_json(msg='pyvmomi is required for this module')\n\n vmware_local_user_manager = VMwareLocalUserManager(module)\n vmware_local_user_manager.process_state()\n\nfrom ansible.module_utils.vmware import *\nfrom ansible.module_utils.basic import *\n\nif __name__ == '__main__':\n main()\n", "path": "cloud/vmware/vmware_local_user_manager.py"}]} | 3,113 | 402 |
gh_patches_debug_28704 | rasdani/github-patches | git_diff | biolab__orange3-1205 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
There is a desktop file missing in the Linux version
##### Orange version
3.3.1.dev0
##### Expected behavior
The /usr/share/applications directory is populated with an orange.desktop file, providing a desktop icon on Linux machines.
##### Actual behavior
That doesn't happen - the /usr/share/applications/orange.desktop file is missing.
##### Steps to reproduce the behavior
Install Orange on a Linux machine.
--- END ISSUE ---
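The usual mechanism for shipping a desktop entry and icon from a Python package is the distutils/setuptools `data_files` argument. The sketch below shows only that general mechanism; the names and paths are placeholders rather than Orange's actual layout, and the patch later in this record takes a more careful route by computing the install prefix at setup time.

```python
# Minimal sketch of installing a desktop entry and icon via data_files.
# File names and paths here are placeholders, not Orange's actual layout.
from setuptools import setup

setup(
    name="example-app",
    version="0.1",
    # Relative paths are resolved against the installation prefix
    # (e.g. /usr, /usr/local, or a virtualenv root) at install time.
    data_files=[
        ("share/applications", ["distribute/example-app.desktop"]),
        ("share/icons/hicolor/scalable/apps", ["distribute/example-app.svg"]),
    ],
)
```

Whether the files end up under /usr/share then depends on the prefix the package is installed into, which is why a virtualenv or user install can leave /usr/share/applications untouched, the situation this issue reports.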
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #! /usr/bin/env python3
2
3 import os
4 import sys
5 import subprocess
6 from setuptools import find_packages, Command
7
8 if sys.version_info < (3, 4):
9 sys.exit('Orange requires Python >= 3.4')
10 try:
11 from numpy.distutils.core import setup
12 except ImportError:
13 sys.exit('setup requires numpy; install numpy first')
14
15 NAME = 'Orange'
16
17 VERSION = '3.3.2'
18 ISRELEASED = False
19
20 DESCRIPTION = 'Orange, a component-based data mining framework.'
21 README_FILE = os.path.join(os.path.dirname(__file__), 'README.md')
22 LONG_DESCRIPTION = open(README_FILE).read()
23 AUTHOR = 'Bioinformatics Laboratory, FRI UL'
24 AUTHOR_EMAIL = '[email protected]'
25 URL = 'http://orange.biolab.si/'
26 LICENSE = 'GPLv3+'
27
28 KEYWORDS = (
29 'data mining',
30 'machine learning',
31 'artificial intelligence',
32 )
33
34 CLASSIFIERS = (
35 'Development Status :: 4 - Beta',
36 'Environment :: X11 Applications :: Qt',
37 'Environment :: Console',
38 'Environment :: Plugins',
39 'Programming Language :: Python',
40 'Framework :: Orange',
41 'License :: OSI Approved :: '
42 'GNU General Public License v3 or later (GPLv3+)',
43 'Operating System :: POSIX',
44 'Operating System :: Microsoft :: Windows',
45 'Topic :: Scientific/Engineering :: Artificial Intelligence',
46 'Topic :: Scientific/Engineering :: Visualization',
47 'Topic :: Software Development :: Libraries :: Python Modules',
48 'Intended Audience :: Education',
49 'Intended Audience :: Science/Research',
50 'Intended Audience :: Developers',
51 )
52
53 requirements = ['requirements-core.txt', 'requirements-gui.txt']
54
55 INSTALL_REQUIRES = sorted(set(
56 line.partition('#')[0].strip()
57 for file in (os.path.join(os.path.dirname(__file__), file)
58 for file in requirements)
59 for line in open(file)
60 ) - {''})
61
62 ENTRY_POINTS = {
63 "orange.canvas.help": (
64 "html-index = Orange.widgets:WIDGET_HELP_PATH",
65 ),
66 "gui_scripts": (
67 "orange-canvas = Orange.canvas.__main__:main",
68 ),
69 }
70
71
72 # Return the git revision as a string
73 def git_version():
74 """Return the git revision as a string.
75
76 Copied from numpy setup.py
77 """
78 def _minimal_ext_cmd(cmd):
79 # construct minimal environment
80 env = {}
81 for k in ['SYSTEMROOT', 'PATH']:
82 v = os.environ.get(k)
83 if v is not None:
84 env[k] = v
85 # LANGUAGE is used on win32
86 env['LANGUAGE'] = 'C'
87 env['LANG'] = 'C'
88 env['LC_ALL'] = 'C'
89 out = subprocess.Popen(cmd, stdout = subprocess.PIPE, env=env).communicate()[0]
90 return out
91
92 try:
93 out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD'])
94 GIT_REVISION = out.strip().decode('ascii')
95 except OSError:
96 GIT_REVISION = "Unknown"
97 return GIT_REVISION
98
99
100 def write_version_py(filename='Orange/version.py'):
101 # Copied from numpy setup.py
102 cnt = """
103 # THIS FILE IS GENERATED FROM ORANGE SETUP.PY
104 short_version = '%(version)s'
105 version = '%(version)s'
106 full_version = '%(full_version)s'
107 git_revision = '%(git_revision)s'
108 release = %(isrelease)s
109
110 if not release:
111 version = full_version
112 short_version += ".dev"
113 """
114 FULLVERSION = VERSION
115 if os.path.exists('.git'):
116 GIT_REVISION = git_version()
117 elif os.path.exists('Orange/version.py'):
118 # must be a source distribution, use existing version file
119 import imp
120 version = imp.load_source("Orange.version", "Orange/version.py")
121 GIT_REVISION = version.git_revision
122 else:
123 GIT_REVISION = "Unknown"
124
125 if not ISRELEASED:
126 FULLVERSION += '.dev0+' + GIT_REVISION[:7]
127
128 a = open(filename, 'w')
129 try:
130 a.write(cnt % {'version': VERSION,
131 'full_version': FULLVERSION,
132 'git_revision': GIT_REVISION,
133 'isrelease': str(ISRELEASED)})
134 finally:
135 a.close()
136
137
138 def configuration(parent_package='', top_path=None):
139 if os.path.exists('MANIFEST'):
140 os.remove('MANIFEST')
141
142 from numpy.distutils.misc_util import Configuration
143 config = Configuration(None, parent_package, top_path)
144
145 # Avoid non-useful msg:
146 # "Ignoring attempt to set 'name' (from ... "
147 config.set_options(ignore_setup_xxx_py=True,
148 assume_default_configuration=True,
149 delegate_options_to_subpackages=True,
150 quiet=True)
151
152 config.add_subpackage('Orange')
153
154 config.get_version('Orange/version.py') # sets config.version
155
156 return config
157
158
159 PACKAGES = find_packages()
160
161 # Extra non .py, .{so,pyd} files that are installed within the package dir
162 # hierarchy
163 PACKAGE_DATA = {
164 "Orange": ["datasets/*.{}".format(ext)
165 for ext in ["tab", "csv", "basket", "info"]],
166 "Orange.canvas": ["icons/*.png", "icons/*.svg"],
167 "Orange.canvas.styles": ["*.qss", "orange/*.svg"],
168 "Orange.canvas.application.tutorials": ["*.ows"],
169 "Orange.canvas.report": ["icons/*.svg", "*.html"],
170 "Orange.widgets": ["icons/*.png", "icons/*.svg"],
171 "Orange.widgets.classify": ["icons/*.svg"],
172 "Orange.widgets.data": ["icons/*.svg",
173 "icons/paintdata/*.png",
174 "icons/paintdata/*.svg"],
175 "Orange.widgets.evaluate": ["icons/*.svg"],
176 "Orange.widgets.visualize": ["icons/*.svg"],
177 "Orange.widgets.regression": ["icons/*.svg"],
178 "Orange.widgets.unsupervised": ["icons/*.svg"],
179 "Orange.widgets.utils.plot": ["*.fs", "*.gs", "*.vs"],
180 "Orange.widgets.utils.plot.primitives": ["*.obj"],
181 "Orange.tests": ["xlsx_files/*.xlsx", "*.tab", "*.basket", "*.csv"]
182 }
183
184
185 class LintCommand(Command):
186 """A setup.py lint subcommand developers can run locally."""
187 description = "run code linter(s)"
188 user_options = []
189 initialize_options = finalize_options = lambda self: None
190
191 def run(self):
192 """Lint current branch compared to a reasonable master branch"""
193 sys.exit(subprocess.call(r'''
194 set -eu
195 upstream="$(git remote -v |
196 awk '/[@\/]github.com[:\/]biolab\/orange3[\. ]/{ print $1; exit }')"
197 git fetch -q $upstream master
198 best_ancestor=$(git merge-base HEAD refs/remotes/$upstream/master)
199 .travis/check_pylint_diff $best_ancestor
200 ''', shell=True, cwd=os.path.dirname(os.path.abspath(__file__))))
201
202 class CoverageCommand(Command):
203 """A setup.py coverage subcommand developers can run locally."""
204 description = "run code coverage"
205 user_options = []
206 initialize_options = finalize_options = lambda self: None
207
208 def run(self):
209 """Check coverage on current workdir"""
210 sys.exit(subprocess.call(r'''
211 coverage run --source=Orange -m unittest -v Orange.tests
212 echo; echo
213 coverage report
214 coverage html &&
215 { echo; echo "See also: file://$(pwd)/htmlcov/index.html"; echo; }
216 ''', shell=True, cwd=os.path.dirname(os.path.abspath(__file__))))
217
218
219
220
221 def setup_package():
222 write_version_py()
223 setup(
224 configuration=configuration,
225 name=NAME,
226 description=DESCRIPTION,
227 long_description=LONG_DESCRIPTION,
228 author=AUTHOR,
229 author_email=AUTHOR_EMAIL,
230 url=URL,
231 license=LICENSE,
232 keywords=KEYWORDS,
233 classifiers=CLASSIFIERS,
234 packages=PACKAGES,
235 package_data=PACKAGE_DATA,
236 install_requires=INSTALL_REQUIRES,
237 entry_points=ENTRY_POINTS,
238 zip_safe=False,
239 test_suite='Orange.tests.test_suite',
240 cmdclass={
241 'lint': LintCommand,
242 'coverage': CoverageCommand,
243 },
244 )
245
246 if __name__ == '__main__':
247 setup_package()
248
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -216,6 +216,38 @@
''', shell=True, cwd=os.path.dirname(os.path.abspath(__file__))))
+# Install desktop file and icon on GNU/Linux/BSD
+DATA_FILES = []
+if any(sys.platform.startswith(platform)
+ for platform in ('linux', 'freebsd')):
+ # Patch desktop file executable to work with virtualenv
+ try:
+ sys.real_prefix
+ except AttributeError:
+ pass # Not in virtualenv
+ else:
+ with open(os.path.join(os.path.dirname(__file__),
+ 'distribute',
+ 'orange-canvas.desktop'), 'r+') as desktop:
+ spec = []
+ for line in desktop:
+ if line.startswith('Exec='):
+ line = 'Exec="{}" -m Orange.canvas\n'.format(sys.executable)
+ spec.append(line)
+ desktop.seek(0)
+ desktop.truncate(0)
+ desktop.writelines(spec)
+
+ usr_share = os.path.join(sys.prefix, "share")
+ if not usr_share.startswith('/usr/') or not os.access(usr_share, os.W_OK):
+ usr_share = os.environ.get('XDG_DATA_HOME',
+ os.path.expanduser('~/.local/share'))
+ DATA_FILES += [
+ (os.path.join(usr_share, 'applications'),
+ ['distribute/orange-canvas.desktop']),
+ (os.path.join(usr_share, 'icons', 'hicolor', 'scalable', 'apps'),
+ ['distribute/orange-canvas.svg'])
+ ]
def setup_package():
@@ -235,6 +267,7 @@
package_data=PACKAGE_DATA,
install_requires=INSTALL_REQUIRES,
entry_points=ENTRY_POINTS,
+ data_files=DATA_FILES,
zip_safe=False,
test_suite='Orange.tests.test_suite',
cmdclass={
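After installing with a patch along these lines, a quick way to confirm the entry is actually visible is to walk the XDG data path. Only the file name below is taken from the patch above; the rest is ordinary XDG lookup logic.

```python
# Quick post-install check: is an orange-canvas.desktop entry visible on the
# XDG data path?
import os


def find_desktop_entry(name="orange-canvas.desktop"):
    data_home = os.environ.get("XDG_DATA_HOME",
                               os.path.expanduser("~/.local/share"))
    data_dirs = os.environ.get("XDG_DATA_DIRS",
                               "/usr/local/share:/usr/share").split(":")
    for base in [data_home] + data_dirs:
        candidate = os.path.join(base, "applications", name)
        if os.path.isfile(candidate):
            return candidate
    return None


if __name__ == "__main__":
    print(find_desktop_entry() or "desktop entry not found on the XDG data path")
```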
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -216,6 +216,38 @@\n ''', shell=True, cwd=os.path.dirname(os.path.abspath(__file__))))\n \n \n+# Install desktop file and icon on GNU/Linux/BSD\n+DATA_FILES = []\n+if any(sys.platform.startswith(platform)\n+ for platform in ('linux', 'freebsd')):\n+ # Patch desktop file executable to work with virtualenv\n+ try:\n+ sys.real_prefix\n+ except AttributeError:\n+ pass # Not in virtualenv\n+ else:\n+ with open(os.path.join(os.path.dirname(__file__),\n+ 'distribute',\n+ 'orange-canvas.desktop'), 'r+') as desktop:\n+ spec = []\n+ for line in desktop:\n+ if line.startswith('Exec='):\n+ line = 'Exec=\"{}\" -m Orange.canvas\\n'.format(sys.executable)\n+ spec.append(line)\n+ desktop.seek(0)\n+ desktop.truncate(0)\n+ desktop.writelines(spec)\n+\n+ usr_share = os.path.join(sys.prefix, \"share\")\n+ if not usr_share.startswith('/usr/') or not os.access(usr_share, os.W_OK):\n+ usr_share = os.environ.get('XDG_DATA_HOME',\n+ os.path.expanduser('~/.local/share'))\n+ DATA_FILES += [\n+ (os.path.join(usr_share, 'applications'),\n+ ['distribute/orange-canvas.desktop']),\n+ (os.path.join(usr_share, 'icons', 'hicolor', 'scalable', 'apps'),\n+ ['distribute/orange-canvas.svg'])\n+ ]\n \n \n def setup_package():\n@@ -235,6 +267,7 @@\n package_data=PACKAGE_DATA,\n install_requires=INSTALL_REQUIRES,\n entry_points=ENTRY_POINTS,\n+ data_files=DATA_FILES,\n zip_safe=False,\n test_suite='Orange.tests.test_suite',\n cmdclass={\n", "issue": "There is a desktop file missing in a Linux version\n##### Orange version\n\n3.3.1.dev0\n##### Expected bahavior\n\n/usr/share/applications directory is populated with a orange.desktop file providing a desktop icon on Linux machines.\n##### Actual behavior\n\nThat doesn't happen - the /usr/share/applications/orange.desktop file is missing.\n##### Steps to reproduce the behavior\n\nInstall Orange on a Linux machine.\n\n", "before_files": [{"content": "#! 
/usr/bin/env python3\n\nimport os\nimport sys\nimport subprocess\nfrom setuptools import find_packages, Command\n\nif sys.version_info < (3, 4):\n sys.exit('Orange requires Python >= 3.4')\ntry:\n from numpy.distutils.core import setup\nexcept ImportError:\n sys.exit('setup requires numpy; install numpy first')\n\nNAME = 'Orange'\n\nVERSION = '3.3.2'\nISRELEASED = False\n\nDESCRIPTION = 'Orange, a component-based data mining framework.'\nREADME_FILE = os.path.join(os.path.dirname(__file__), 'README.md')\nLONG_DESCRIPTION = open(README_FILE).read()\nAUTHOR = 'Bioinformatics Laboratory, FRI UL'\nAUTHOR_EMAIL = '[email protected]'\nURL = 'http://orange.biolab.si/'\nLICENSE = 'GPLv3+'\n\nKEYWORDS = (\n 'data mining',\n 'machine learning',\n 'artificial intelligence',\n)\n\nCLASSIFIERS = (\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: Qt',\n 'Environment :: Console',\n 'Environment :: Plugins',\n 'Programming Language :: Python',\n 'Framework :: Orange',\n 'License :: OSI Approved :: '\n 'GNU General Public License v3 or later (GPLv3+)',\n 'Operating System :: POSIX',\n 'Operating System :: Microsoft :: Windows',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Scientific/Engineering :: Visualization',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n)\n\nrequirements = ['requirements-core.txt', 'requirements-gui.txt']\n\nINSTALL_REQUIRES = sorted(set(\n line.partition('#')[0].strip()\n for file in (os.path.join(os.path.dirname(__file__), file)\n for file in requirements)\n for line in open(file)\n) - {''})\n\nENTRY_POINTS = {\n \"orange.canvas.help\": (\n \"html-index = Orange.widgets:WIDGET_HELP_PATH\",\n ),\n \"gui_scripts\": (\n \"orange-canvas = Orange.canvas.__main__:main\",\n ),\n}\n\n\n# Return the git revision as a string\ndef git_version():\n \"\"\"Return the git revision as a string.\n\n Copied from numpy setup.py\n \"\"\"\n def _minimal_ext_cmd(cmd):\n # construct minimal environment\n env = {}\n for k in ['SYSTEMROOT', 'PATH']:\n v = os.environ.get(k)\n if v is not None:\n env[k] = v\n # LANGUAGE is used on win32\n env['LANGUAGE'] = 'C'\n env['LANG'] = 'C'\n env['LC_ALL'] = 'C'\n out = subprocess.Popen(cmd, stdout = subprocess.PIPE, env=env).communicate()[0]\n return out\n\n try:\n out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD'])\n GIT_REVISION = out.strip().decode('ascii')\n except OSError:\n GIT_REVISION = \"Unknown\"\n return GIT_REVISION\n\n\ndef write_version_py(filename='Orange/version.py'):\n # Copied from numpy setup.py\n cnt = \"\"\"\n# THIS FILE IS GENERATED FROM ORANGE SETUP.PY\nshort_version = '%(version)s'\nversion = '%(version)s'\nfull_version = '%(full_version)s'\ngit_revision = '%(git_revision)s'\nrelease = %(isrelease)s\n\nif not release:\n version = full_version\n short_version += \".dev\"\n\"\"\"\n FULLVERSION = VERSION\n if os.path.exists('.git'):\n GIT_REVISION = git_version()\n elif os.path.exists('Orange/version.py'):\n # must be a source distribution, use existing version file\n import imp\n version = imp.load_source(\"Orange.version\", \"Orange/version.py\")\n GIT_REVISION = version.git_revision\n else:\n GIT_REVISION = \"Unknown\"\n\n if not ISRELEASED:\n FULLVERSION += '.dev0+' + GIT_REVISION[:7]\n\n a = open(filename, 'w')\n try:\n a.write(cnt % {'version': VERSION,\n 'full_version': FULLVERSION,\n 'git_revision': GIT_REVISION,\n 'isrelease': str(ISRELEASED)})\n 
finally:\n a.close()\n\n\ndef configuration(parent_package='', top_path=None):\n if os.path.exists('MANIFEST'):\n os.remove('MANIFEST')\n\n from numpy.distutils.misc_util import Configuration\n config = Configuration(None, parent_package, top_path)\n\n # Avoid non-useful msg:\n # \"Ignoring attempt to set 'name' (from ... \"\n config.set_options(ignore_setup_xxx_py=True,\n assume_default_configuration=True,\n delegate_options_to_subpackages=True,\n quiet=True)\n\n config.add_subpackage('Orange')\n\n config.get_version('Orange/version.py') # sets config.version\n\n return config\n\n\nPACKAGES = find_packages()\n\n# Extra non .py, .{so,pyd} files that are installed within the package dir\n# hierarchy\nPACKAGE_DATA = {\n \"Orange\": [\"datasets/*.{}\".format(ext)\n for ext in [\"tab\", \"csv\", \"basket\", \"info\"]],\n \"Orange.canvas\": [\"icons/*.png\", \"icons/*.svg\"],\n \"Orange.canvas.styles\": [\"*.qss\", \"orange/*.svg\"],\n \"Orange.canvas.application.tutorials\": [\"*.ows\"],\n \"Orange.canvas.report\": [\"icons/*.svg\", \"*.html\"],\n \"Orange.widgets\": [\"icons/*.png\", \"icons/*.svg\"],\n \"Orange.widgets.classify\": [\"icons/*.svg\"],\n \"Orange.widgets.data\": [\"icons/*.svg\",\n \"icons/paintdata/*.png\",\n \"icons/paintdata/*.svg\"],\n \"Orange.widgets.evaluate\": [\"icons/*.svg\"],\n \"Orange.widgets.visualize\": [\"icons/*.svg\"],\n \"Orange.widgets.regression\": [\"icons/*.svg\"],\n \"Orange.widgets.unsupervised\": [\"icons/*.svg\"],\n \"Orange.widgets.utils.plot\": [\"*.fs\", \"*.gs\", \"*.vs\"],\n \"Orange.widgets.utils.plot.primitives\": [\"*.obj\"],\n \"Orange.tests\": [\"xlsx_files/*.xlsx\", \"*.tab\", \"*.basket\", \"*.csv\"]\n}\n\n\nclass LintCommand(Command):\n \"\"\"A setup.py lint subcommand developers can run locally.\"\"\"\n description = \"run code linter(s)\"\n user_options = []\n initialize_options = finalize_options = lambda self: None\n\n def run(self):\n \"\"\"Lint current branch compared to a reasonable master branch\"\"\"\n sys.exit(subprocess.call(r'''\n set -eu\n upstream=\"$(git remote -v |\n awk '/[@\\/]github.com[:\\/]biolab\\/orange3[\\. 
]/{ print $1; exit }')\"\n git fetch -q $upstream master\n best_ancestor=$(git merge-base HEAD refs/remotes/$upstream/master)\n .travis/check_pylint_diff $best_ancestor\n ''', shell=True, cwd=os.path.dirname(os.path.abspath(__file__))))\n\nclass CoverageCommand(Command):\n \"\"\"A setup.py coverage subcommand developers can run locally.\"\"\"\n description = \"run code coverage\"\n user_options = []\n initialize_options = finalize_options = lambda self: None\n\n def run(self):\n \"\"\"Check coverage on current workdir\"\"\"\n sys.exit(subprocess.call(r'''\n coverage run --source=Orange -m unittest -v Orange.tests\n echo; echo\n coverage report\n coverage html &&\n { echo; echo \"See also: file://$(pwd)/htmlcov/index.html\"; echo; }\n ''', shell=True, cwd=os.path.dirname(os.path.abspath(__file__))))\n\n\n\n\ndef setup_package():\n write_version_py()\n setup(\n configuration=configuration,\n name=NAME,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n url=URL,\n license=LICENSE,\n keywords=KEYWORDS,\n classifiers=CLASSIFIERS,\n packages=PACKAGES,\n package_data=PACKAGE_DATA,\n install_requires=INSTALL_REQUIRES,\n entry_points=ENTRY_POINTS,\n zip_safe=False,\n test_suite='Orange.tests.test_suite',\n cmdclass={\n 'lint': LintCommand,\n 'coverage': CoverageCommand,\n },\n )\n\nif __name__ == '__main__':\n setup_package()\n", "path": "setup.py"}], "after_files": [{"content": "#! /usr/bin/env python3\n\nimport os\nimport sys\nimport subprocess\nfrom setuptools import find_packages, Command\n\nif sys.version_info < (3, 4):\n sys.exit('Orange requires Python >= 3.4')\ntry:\n from numpy.distutils.core import setup\nexcept ImportError:\n sys.exit('setup requires numpy; install numpy first')\n\nNAME = 'Orange'\n\nVERSION = '3.3.2'\nISRELEASED = False\n\nDESCRIPTION = 'Orange, a component-based data mining framework.'\nREADME_FILE = os.path.join(os.path.dirname(__file__), 'README.md')\nLONG_DESCRIPTION = open(README_FILE).read()\nAUTHOR = 'Bioinformatics Laboratory, FRI UL'\nAUTHOR_EMAIL = '[email protected]'\nURL = 'http://orange.biolab.si/'\nLICENSE = 'GPLv3+'\n\nKEYWORDS = (\n 'data mining',\n 'machine learning',\n 'artificial intelligence',\n)\n\nCLASSIFIERS = (\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: Qt',\n 'Environment :: Console',\n 'Environment :: Plugins',\n 'Programming Language :: Python',\n 'Framework :: Orange',\n 'License :: OSI Approved :: '\n 'GNU General Public License v3 or later (GPLv3+)',\n 'Operating System :: POSIX',\n 'Operating System :: Microsoft :: Windows',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Scientific/Engineering :: Visualization',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n)\n\nrequirements = ['requirements-core.txt', 'requirements-gui.txt']\n\nINSTALL_REQUIRES = sorted(set(\n line.partition('#')[0].strip()\n for file in (os.path.join(os.path.dirname(__file__), file)\n for file in requirements)\n for line in open(file)\n) - {''})\n\nENTRY_POINTS = {\n \"orange.canvas.help\": (\n \"html-index = Orange.widgets:WIDGET_HELP_PATH\",\n ),\n \"gui_scripts\": (\n \"orange-canvas = Orange.canvas.__main__:main\",\n ),\n}\n\n\n# Return the git revision as a string\ndef git_version():\n \"\"\"Return the git revision as a string.\n\n Copied from numpy setup.py\n \"\"\"\n def _minimal_ext_cmd(cmd):\n # construct 
minimal environment\n env = {}\n for k in ['SYSTEMROOT', 'PATH']:\n v = os.environ.get(k)\n if v is not None:\n env[k] = v\n # LANGUAGE is used on win32\n env['LANGUAGE'] = 'C'\n env['LANG'] = 'C'\n env['LC_ALL'] = 'C'\n out = subprocess.Popen(cmd, stdout = subprocess.PIPE, env=env).communicate()[0]\n return out\n\n try:\n out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD'])\n GIT_REVISION = out.strip().decode('ascii')\n except OSError:\n GIT_REVISION = \"Unknown\"\n return GIT_REVISION\n\n\ndef write_version_py(filename='Orange/version.py'):\n # Copied from numpy setup.py\n cnt = \"\"\"\n# THIS FILE IS GENERATED FROM ORANGE SETUP.PY\nshort_version = '%(version)s'\nversion = '%(version)s'\nfull_version = '%(full_version)s'\ngit_revision = '%(git_revision)s'\nrelease = %(isrelease)s\n\nif not release:\n version = full_version\n short_version += \".dev\"\n\"\"\"\n FULLVERSION = VERSION\n if os.path.exists('.git'):\n GIT_REVISION = git_version()\n elif os.path.exists('Orange/version.py'):\n # must be a source distribution, use existing version file\n import imp\n version = imp.load_source(\"Orange.version\", \"Orange/version.py\")\n GIT_REVISION = version.git_revision\n else:\n GIT_REVISION = \"Unknown\"\n\n if not ISRELEASED:\n FULLVERSION += '.dev0+' + GIT_REVISION[:7]\n\n a = open(filename, 'w')\n try:\n a.write(cnt % {'version': VERSION,\n 'full_version': FULLVERSION,\n 'git_revision': GIT_REVISION,\n 'isrelease': str(ISRELEASED)})\n finally:\n a.close()\n\n\ndef configuration(parent_package='', top_path=None):\n if os.path.exists('MANIFEST'):\n os.remove('MANIFEST')\n\n from numpy.distutils.misc_util import Configuration\n config = Configuration(None, parent_package, top_path)\n\n # Avoid non-useful msg:\n # \"Ignoring attempt to set 'name' (from ... 
\"\n config.set_options(ignore_setup_xxx_py=True,\n assume_default_configuration=True,\n delegate_options_to_subpackages=True,\n quiet=True)\n\n config.add_subpackage('Orange')\n\n config.get_version('Orange/version.py') # sets config.version\n\n return config\n\n\nPACKAGES = find_packages()\n\n# Extra non .py, .{so,pyd} files that are installed within the package dir\n# hierarchy\nPACKAGE_DATA = {\n \"Orange\": [\"datasets/*.{}\".format(ext)\n for ext in [\"tab\", \"csv\", \"basket\", \"info\"]],\n \"Orange.canvas\": [\"icons/*.png\", \"icons/*.svg\"],\n \"Orange.canvas.styles\": [\"*.qss\", \"orange/*.svg\"],\n \"Orange.canvas.application.tutorials\": [\"*.ows\"],\n \"Orange.canvas.report\": [\"icons/*.svg\", \"*.html\"],\n \"Orange.widgets\": [\"icons/*.png\", \"icons/*.svg\"],\n \"Orange.widgets.classify\": [\"icons/*.svg\"],\n \"Orange.widgets.data\": [\"icons/*.svg\",\n \"icons/paintdata/*.png\",\n \"icons/paintdata/*.svg\"],\n \"Orange.widgets.evaluate\": [\"icons/*.svg\"],\n \"Orange.widgets.visualize\": [\"icons/*.svg\"],\n \"Orange.widgets.regression\": [\"icons/*.svg\"],\n \"Orange.widgets.unsupervised\": [\"icons/*.svg\"],\n \"Orange.widgets.utils.plot\": [\"*.fs\", \"*.gs\", \"*.vs\"],\n \"Orange.widgets.utils.plot.primitives\": [\"*.obj\"],\n \"Orange.tests\": [\"xlsx_files/*.xlsx\", \"*.tab\", \"*.basket\", \"*.csv\"]\n}\n\n\nclass LintCommand(Command):\n \"\"\"A setup.py lint subcommand developers can run locally.\"\"\"\n description = \"run code linter(s)\"\n user_options = []\n initialize_options = finalize_options = lambda self: None\n\n def run(self):\n \"\"\"Lint current branch compared to a reasonable master branch\"\"\"\n sys.exit(subprocess.call(r'''\n set -eu\n upstream=\"$(git remote -v |\n awk '/[@\\/]github.com[:\\/]biolab\\/orange3[\\. 
]/{ print $1; exit }')\"\n git fetch -q $upstream master\n best_ancestor=$(git merge-base HEAD refs/remotes/$upstream/master)\n .travis/check_pylint_diff $best_ancestor\n ''', shell=True, cwd=os.path.dirname(os.path.abspath(__file__))))\n\nclass CoverageCommand(Command):\n \"\"\"A setup.py coverage subcommand developers can run locally.\"\"\"\n description = \"run code coverage\"\n user_options = []\n initialize_options = finalize_options = lambda self: None\n\n def run(self):\n \"\"\"Check coverage on current workdir\"\"\"\n sys.exit(subprocess.call(r'''\n coverage run --source=Orange -m unittest -v Orange.tests\n echo; echo\n coverage report\n coverage html &&\n { echo; echo \"See also: file://$(pwd)/htmlcov/index.html\"; echo; }\n ''', shell=True, cwd=os.path.dirname(os.path.abspath(__file__))))\n\n\n# Install desktop file and icon on GNU/Linux/BSD\nDATA_FILES = []\nif any(sys.platform.startswith(platform)\n for platform in ('linux', 'freebsd')):\n # Patch desktop file executable to work with virtualenv\n try:\n sys.real_prefix\n except AttributeError:\n pass # Not in virtualenv\n else:\n with open(os.path.join(os.path.dirname(__file__),\n 'distribute',\n 'orange-canvas.desktop'), 'r+') as desktop:\n spec = []\n for line in desktop:\n if line.startswith('Exec='):\n line = 'Exec=\"{}\" -m Orange.canvas\\n'.format(sys.executable)\n spec.append(line)\n desktop.seek(0)\n desktop.truncate(0)\n desktop.writelines(spec)\n\n usr_share = os.path.join(sys.prefix, \"share\")\n if not usr_share.startswith('/usr/') or not os.access(usr_share, os.W_OK):\n usr_share = os.environ.get('XDG_DATA_HOME',\n os.path.expanduser('~/.local/share'))\n DATA_FILES += [\n (os.path.join(usr_share, 'applications'),\n ['distribute/orange-canvas.desktop']),\n (os.path.join(usr_share, 'icons', 'hicolor', 'scalable', 'apps'),\n ['distribute/orange-canvas.svg'])\n ]\n\n\ndef setup_package():\n write_version_py()\n setup(\n configuration=configuration,\n name=NAME,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n url=URL,\n license=LICENSE,\n keywords=KEYWORDS,\n classifiers=CLASSIFIERS,\n packages=PACKAGES,\n package_data=PACKAGE_DATA,\n install_requires=INSTALL_REQUIRES,\n entry_points=ENTRY_POINTS,\n data_files=DATA_FILES,\n zip_safe=False,\n test_suite='Orange.tests.test_suite',\n cmdclass={\n 'lint': LintCommand,\n 'coverage': CoverageCommand,\n },\n )\n\nif __name__ == '__main__':\n setup_package()\n", "path": "setup.py"}]} | 2,774 | 432 |
gh_patches_debug_502 | rasdani/github-patches | git_diff | google__flax-2827 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot import flax.training.checkpoints in 0.6.4
### System information
- OS Platform and Distribution: Ubuntu 22.04.1 LTS, also in Colab environment
- Flax, jax, jaxlib versions:
* flax 0.6.4
* jax 0.3.25
* jaxlib 0.3.25
- Python version: 3.10.6
- GPU/TPU model and memory: No Accelerator / 16GB
### Problem you have encountered:
With Flax v0.6.4 I can't import the `flax.training.checkpoints` module due to the following error:
```
ImportError: cannot import name 'monitoring' from 'jax' (/usr/local/lib/python3.8/dist-packages/jax/__init__.py)
```
This does not happen in v0.6.3.
### What you expected to happen:
The module should be imported.
### Logs, error messages, etc:
Error message from jupyter notebook:
```
ImportError Traceback (most recent call last)
[<ipython-input-3-9a234296e658>](https://localhost:8080/#) in <module>
1 import flax
----> 2 from flax.training import checkpoints
[/usr/local/lib/python3.8/dist-packages/flax/training/checkpoints.py](https://localhost:8080/#) in <module>
36 from flax import traverse_util
37 import jax
---> 38 from jax import monitoring
39 from jax import process_index
40 from jax import sharding
ImportError: cannot import name 'monitoring' from 'jax' (/usr/local/lib/python3.8/dist-packages/jax/__init__.py)
```
### Steps to reproduce:
[Colab notebook](https://colab.research.google.com/drive/1ZLR1JSJPfaaoTmL7bow8oebqyhhxrqSo?usp=sharing)
--- END ISSUE ---
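For orientation, the traceback points at a plain version mismatch: `flax.training.checkpoints` imports `jax.monitoring`, which jax 0.3.25 does not ship. A minimal pre-flight check along those lines (purely illustrative, not part of flax) could look like this:
```python
from importlib.metadata import version

def jax_provides_monitoring(minimum=(0, 4, 2)):
    # Assumption: jax.monitoring exists from roughly jax 0.4.x onward,
    # matching the versions the report says do and do not work.
    installed = tuple(int(part) for part in version("jax").split(".")[:3])
    return installed >= minimum

if jax_provides_monitoring():
    from flax.training import checkpoints  # imports cleanly on new-enough jax
else:
    raise ImportError("flax 0.6.4 needs jax >= 0.4.2 (jax.monitoring)")
```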
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2022 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """setup.py for Flax."""
16
17 import os
18 from setuptools import find_packages
19 from setuptools import setup
20
21 here = os.path.abspath(os.path.dirname(__file__))
22 try:
23 README = open(os.path.join(here, "README.md"), encoding="utf-8").read()
24 except OSError:
25 README = ""
26
27 install_requires = [
28 "numpy>=1.12",
29 "jax>=0.3.16",
30 "matplotlib", # only needed for tensorboard export
31 "msgpack",
32 "optax",
33 "orbax",
34 "tensorstore",
35 "rich>=11.1",
36 "typing_extensions>=4.1.1",
37 "PyYAML>=5.4.1",
38 ]
39
40 tests_require = [
41 "atari-py==0.2.5", # Last version does not have the ROMs we test on pre-packaged
42 "clu", # All examples.
43 "gym==0.18.3",
44 "jaxlib",
45 "jraph>=0.0.6dev0",
46 "ml-collections",
47 "mypy",
48 "opencv-python",
49 "pytest",
50 "pytest-cov",
51 "pytest-custom_exit_code",
52 "pytest-xdist==1.34.0", # upgrading to 2.0 broke tests, need to investigate
53 "pytype",
54 "sentencepiece", # WMT example.
55 "tensorflow_text>=2.4.0", # WMT example.
56 "tensorflow_datasets",
57 "tensorflow",
58 "torch",
59 ]
60
61 __version__ = None
62
63 with open("flax/version.py") as f:
64 exec(f.read(), globals())
65
66 setup(
67 name="flax",
68 version=__version__,
69 description="Flax: A neural network library for JAX designed for flexibility",
70 long_description="\n\n".join([README]),
71 long_description_content_type="text/markdown",
72 classifiers=[
73 "Development Status :: 3 - Alpha",
74 "Intended Audience :: Developers",
75 "Intended Audience :: Science/Research",
76 "License :: OSI Approved :: Apache Software License",
77 "Programming Language :: Python :: 3.7",
78 "Topic :: Scientific/Engineering :: Artificial Intelligence",
79 ],
80 keywords="",
81 author="Flax team",
82 author_email="[email protected]",
83 url="https://github.com/google/flax",
84 packages=find_packages(),
85 package_data={"flax": ["py.typed"]},
86 zip_safe=False,
87 install_requires=install_requires,
88 extras_require={
89 "testing": tests_require,
90 },
91 )
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -26,7 +26,7 @@
install_requires = [
"numpy>=1.12",
- "jax>=0.3.16",
+ "jax>=0.4.2",
"matplotlib", # only needed for tensorboard export
"msgpack",
"optax",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -26,7 +26,7 @@\n \n install_requires = [\n \"numpy>=1.12\",\n- \"jax>=0.3.16\",\n+ \"jax>=0.4.2\",\n \"matplotlib\", # only needed for tensorboard export\n \"msgpack\",\n \"optax\",\n", "issue": "Cannot import flax.training.checkpoints in 0.6.4\n### System information\r\n- OS Platform and Distribution: Ubuntu 22.04.1 LTS, also in Colab environment\r\n- Flax, jax, jaxlib versions:\r\n * flax 0.6.4\r\n * jax 0.3.25\r\n * jaxlib 0.3.25\r\n- Python version: 3.10.6\r\n- GPU/TPU model and memory: No Accelerator / 16GB\r\n\r\n### Problem you have encountered:\r\nWith FLAX v0.6.4 I can't import `flax.training.checkpoints` module due to following error:\r\n```\r\nImportError: cannot import name 'monitoring' from 'jax' (/usr/local/lib/python3.8/dist-packages/jax/__init__.py)\r\n```\r\nThis does not happen in v0.6.3.\r\n\r\n### What you expected to happen:\r\nThe module should be imported.\r\n\r\n### Logs, error messages, etc:\r\nError message from jupyter notebook:\r\n```\r\nImportError Traceback (most recent call last)\r\n\r\n[<ipython-input-3-9a234296e658>](https://localhost:8080/#) in <module>\r\n 1 import flax\r\n----> 2 from flax.training import checkpoints\r\n\r\n[/usr/local/lib/python3.8/dist-packages/flax/training/checkpoints.py](https://localhost:8080/#) in <module>\r\n 36 from flax import traverse_util\r\n 37 import jax\r\n---> 38 from jax import monitoring\r\n 39 from jax import process_index\r\n 40 from jax import sharding\r\n\r\nImportError: cannot import name 'monitoring' from 'jax' (/usr/local/lib/python3.8/dist-packages/jax/__init__.py)\r\n```\r\n\r\n### Steps to reproduce:\r\n[Colab notebook](https://colab.research.google.com/drive/1ZLR1JSJPfaaoTmL7bow8oebqyhhxrqSo?usp=sharing)\r\n\n", "before_files": [{"content": "# Copyright 2022 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"setup.py for Flax.\"\"\"\n\nimport os\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n README = open(os.path.join(here, \"README.md\"), encoding=\"utf-8\").read()\nexcept OSError:\n README = \"\"\n\ninstall_requires = [\n \"numpy>=1.12\",\n \"jax>=0.3.16\",\n \"matplotlib\", # only needed for tensorboard export\n \"msgpack\",\n \"optax\",\n \"orbax\",\n \"tensorstore\",\n \"rich>=11.1\",\n \"typing_extensions>=4.1.1\",\n \"PyYAML>=5.4.1\",\n]\n\ntests_require = [\n \"atari-py==0.2.5\", # Last version does not have the ROMs we test on pre-packaged\n \"clu\", # All examples.\n \"gym==0.18.3\",\n \"jaxlib\",\n \"jraph>=0.0.6dev0\",\n \"ml-collections\",\n \"mypy\",\n \"opencv-python\",\n \"pytest\",\n \"pytest-cov\",\n \"pytest-custom_exit_code\",\n \"pytest-xdist==1.34.0\", # upgrading to 2.0 broke tests, need to investigate\n \"pytype\",\n \"sentencepiece\", # WMT example.\n \"tensorflow_text>=2.4.0\", # WMT example.\n \"tensorflow_datasets\",\n \"tensorflow\",\n \"torch\",\n]\n\n__version__ = None\n\nwith 
open(\"flax/version.py\") as f:\n exec(f.read(), globals())\n\nsetup(\n name=\"flax\",\n version=__version__,\n description=\"Flax: A neural network library for JAX designed for flexibility\",\n long_description=\"\\n\\n\".join([README]),\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n keywords=\"\",\n author=\"Flax team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/google/flax\",\n packages=find_packages(),\n package_data={\"flax\": [\"py.typed\"]},\n zip_safe=False,\n install_requires=install_requires,\n extras_require={\n \"testing\": tests_require,\n },\n )\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2022 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"setup.py for Flax.\"\"\"\n\nimport os\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n README = open(os.path.join(here, \"README.md\"), encoding=\"utf-8\").read()\nexcept OSError:\n README = \"\"\n\ninstall_requires = [\n \"numpy>=1.12\",\n \"jax>=0.4.2\",\n \"matplotlib\", # only needed for tensorboard export\n \"msgpack\",\n \"optax\",\n \"orbax\",\n \"tensorstore\",\n \"rich>=11.1\",\n \"typing_extensions>=4.1.1\",\n \"PyYAML>=5.4.1\",\n]\n\ntests_require = [\n \"atari-py==0.2.5\", # Last version does not have the ROMs we test on pre-packaged\n \"clu\", # All examples.\n \"gym==0.18.3\",\n \"jaxlib\",\n \"jraph>=0.0.6dev0\",\n \"ml-collections\",\n \"mypy\",\n \"opencv-python\",\n \"pytest\",\n \"pytest-cov\",\n \"pytest-custom_exit_code\",\n \"pytest-xdist==1.34.0\", # upgrading to 2.0 broke tests, need to investigate\n \"pytype\",\n \"sentencepiece\", # WMT example.\n \"tensorflow_text>=2.4.0\", # WMT example.\n \"tensorflow_datasets\",\n \"tensorflow\",\n \"torch\",\n]\n\n__version__ = None\n\nwith open(\"flax/version.py\") as f:\n exec(f.read(), globals())\n\nsetup(\n name=\"flax\",\n version=__version__,\n description=\"Flax: A neural network library for JAX designed for flexibility\",\n long_description=\"\\n\\n\".join([README]),\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n keywords=\"\",\n author=\"Flax team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/google/flax\",\n packages=find_packages(),\n package_data={\"flax\": [\"py.typed\"]},\n zip_safe=False,\n install_requires=install_requires,\n extras_require={\n \"testing\": 
tests_require,\n },\n )\n", "path": "setup.py"}]} | 1,587 | 92 |
gh_patches_debug_26532 | rasdani/github-patches | git_diff | jazzband__pip-tools-733 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Command in autogenerated requirements.txt can be shortened
When I run `pip-compile`, my requirements.txt has
```
#
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile --output-file requirements.txt requirements.in
#
```
But I think the `--output-file requirements.txt` can just be dropped (for brevity) when the written file itself is named `requirements.txt`.
I'm recommending this because `pip-compile` already goes ahead and modifies `requirements.txt` when no options are specified. Thoughts?
--- END ISSUE ---
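The simplest way to get a shorter header is to echo back whatever the user actually typed rather than reconstructing every flag. The snippet below is only a sketch of that idea; the function name is invented here and is not pip-tools' code:
```python
import os
import sys

def compile_command_comment():
    """Rebuild the '# pip-compile ...' header from the real invocation."""
    prog = os.path.basename(sys.argv[0])  # e.g. "pip-compile"
    args = " ".join(sys.argv[1:])         # only the options the user passed
    return "# {} {}".format(prog, args).rstrip()

# A bare `pip-compile` run would then emit just "# pip-compile", dropping the
# redundant "--output-file requirements.txt requirements.in" shown above.
```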
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `piptools/writer.py`
Content:
```
1 import os
2 from itertools import chain
3
4 from ._compat import ExitStack
5 from .click import unstyle
6 from .io import AtomicSaver
7 from .logging import log
8 from .utils import comment, dedup, format_requirement, key_from_req, UNSAFE_PACKAGES
9
10
11 class OutputWriter(object):
12 def __init__(self, src_files, dst_file, dry_run, emit_header, emit_index,
13 emit_trusted_host, annotate, generate_hashes,
14 default_index_url, index_urls, trusted_hosts, format_control,
15 allow_unsafe):
16 self.src_files = src_files
17 self.dst_file = dst_file
18 self.dry_run = dry_run
19 self.emit_header = emit_header
20 self.emit_index = emit_index
21 self.emit_trusted_host = emit_trusted_host
22 self.annotate = annotate
23 self.generate_hashes = generate_hashes
24 self.default_index_url = default_index_url
25 self.index_urls = index_urls
26 self.trusted_hosts = trusted_hosts
27 self.format_control = format_control
28 self.allow_unsafe = allow_unsafe
29
30 def _sort_key(self, ireq):
31 return (not ireq.editable, str(ireq.req).lower())
32
33 def write_header(self):
34 if self.emit_header:
35 yield comment('#')
36 yield comment('# This file is autogenerated by pip-compile')
37 yield comment('# To update, run:')
38 yield comment('#')
39 custom_cmd = os.environ.get('CUSTOM_COMPILE_COMMAND')
40 if custom_cmd:
41 yield comment('# {}'.format(custom_cmd))
42 else:
43 params = []
44 if not self.emit_index:
45 params += ['--no-index']
46 if not self.emit_trusted_host:
47 params += ['--no-emit-trusted-host']
48 if not self.annotate:
49 params += ['--no-annotate']
50 if self.generate_hashes:
51 params += ["--generate-hashes"]
52 if self.allow_unsafe:
53 params += ["--allow-unsafe"]
54 params += ['--output-file', self.dst_file]
55 params += self.src_files
56 yield comment('# pip-compile {}'.format(' '.join(params)))
57 yield comment('#')
58
59 def write_index_options(self):
60 if self.emit_index:
61 for index, index_url in enumerate(dedup(self.index_urls)):
62 if index_url.rstrip('/') == self.default_index_url:
63 continue
64 flag = '--index-url' if index == 0 else '--extra-index-url'
65 yield '{} {}'.format(flag, index_url)
66
67 def write_trusted_hosts(self):
68 if self.emit_trusted_host:
69 for trusted_host in dedup(self.trusted_hosts):
70 yield '--trusted-host {}'.format(trusted_host)
71
72 def write_format_controls(self):
73 for nb in dedup(self.format_control.no_binary):
74 yield '--no-binary {}'.format(nb)
75 for ob in dedup(self.format_control.only_binary):
76 yield '--only-binary {}'.format(ob)
77
78 def write_flags(self):
79 emitted = False
80 for line in chain(self.write_index_options(),
81 self.write_trusted_hosts(),
82 self.write_format_controls()):
83 emitted = True
84 yield line
85 if emitted:
86 yield ''
87
88 def _iter_lines(self, results, unsafe_requirements, reverse_dependencies,
89 primary_packages, markers, hashes):
90 for line in self.write_header():
91 yield line
92 for line in self.write_flags():
93 yield line
94
95 unsafe_requirements = {r for r in results if r.name in UNSAFE_PACKAGES} if not unsafe_requirements else unsafe_requirements # noqa
96 packages = {r for r in results if r.name not in UNSAFE_PACKAGES}
97
98 packages = sorted(packages, key=self._sort_key)
99
100 for ireq in packages:
101 line = self._format_requirement(
102 ireq, reverse_dependencies, primary_packages,
103 markers.get(key_from_req(ireq.req)), hashes=hashes)
104 yield line
105
106 if unsafe_requirements:
107 unsafe_requirements = sorted(unsafe_requirements, key=self._sort_key)
108 yield ''
109 yield comment('# The following packages are considered to be unsafe in a requirements file:')
110
111 for ireq in unsafe_requirements:
112 req = self._format_requirement(ireq,
113 reverse_dependencies,
114 primary_packages,
115 marker=markers.get(key_from_req(ireq.req)),
116 hashes=hashes)
117 if not self.allow_unsafe:
118 yield comment('# {}'.format(req))
119 else:
120 yield req
121
122 def write(self, results, unsafe_requirements, reverse_dependencies,
123 primary_packages, markers, hashes):
124 with ExitStack() as stack:
125 f = None
126 if not self.dry_run:
127 f = stack.enter_context(AtomicSaver(self.dst_file))
128
129 for line in self._iter_lines(results, unsafe_requirements, reverse_dependencies,
130 primary_packages, markers, hashes):
131 log.info(line)
132 if f:
133 f.write(unstyle(line).encode('utf-8'))
134 f.write(os.linesep.encode('utf-8'))
135
136 def _format_requirement(self, ireq, reverse_dependencies, primary_packages, marker=None, hashes=None):
137 ireq_hashes = (hashes if hashes is not None else {}).get(ireq)
138
139 line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)
140
141 if not self.annotate or key_from_req(ireq.req) in primary_packages:
142 return line
143
144 # Annotate what packages this package is required by
145 required_by = reverse_dependencies.get(ireq.name.lower(), [])
146 if required_by:
147 annotation = ", ".join(sorted(required_by))
148 line = "{:24}{}{}".format(
149 line,
150 " \\\n " if ireq_hashes else " ",
151 comment("# via " + annotation))
152 return line
153
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/piptools/writer.py b/piptools/writer.py
--- a/piptools/writer.py
+++ b/piptools/writer.py
@@ -1,8 +1,9 @@
import os
+import sys
from itertools import chain
from ._compat import ExitStack
-from .click import unstyle
+from .click import unstyle, get_os_args
from .io import AtomicSaver
from .logging import log
from .utils import comment, dedup, format_requirement, key_from_req, UNSAFE_PACKAGES
@@ -40,20 +41,9 @@
if custom_cmd:
yield comment('# {}'.format(custom_cmd))
else:
- params = []
- if not self.emit_index:
- params += ['--no-index']
- if not self.emit_trusted_host:
- params += ['--no-emit-trusted-host']
- if not self.annotate:
- params += ['--no-annotate']
- if self.generate_hashes:
- params += ["--generate-hashes"]
- if self.allow_unsafe:
- params += ["--allow-unsafe"]
- params += ['--output-file', self.dst_file]
- params += self.src_files
- yield comment('# pip-compile {}'.format(' '.join(params)))
+ prog = os.path.basename(sys.argv[0])
+ args = ' '.join(get_os_args())
+ yield comment('# {prog} {args}'.format(prog=prog, args=args))
yield comment('#')
def write_index_options(self):
| {"golden_diff": "diff --git a/piptools/writer.py b/piptools/writer.py\n--- a/piptools/writer.py\n+++ b/piptools/writer.py\n@@ -1,8 +1,9 @@\n import os\n+import sys\n from itertools import chain\n \n from ._compat import ExitStack\n-from .click import unstyle\n+from .click import unstyle, get_os_args\n from .io import AtomicSaver\n from .logging import log\n from .utils import comment, dedup, format_requirement, key_from_req, UNSAFE_PACKAGES\n@@ -40,20 +41,9 @@\n if custom_cmd:\n yield comment('# {}'.format(custom_cmd))\n else:\n- params = []\n- if not self.emit_index:\n- params += ['--no-index']\n- if not self.emit_trusted_host:\n- params += ['--no-emit-trusted-host']\n- if not self.annotate:\n- params += ['--no-annotate']\n- if self.generate_hashes:\n- params += [\"--generate-hashes\"]\n- if self.allow_unsafe:\n- params += [\"--allow-unsafe\"]\n- params += ['--output-file', self.dst_file]\n- params += self.src_files\n- yield comment('# pip-compile {}'.format(' '.join(params)))\n+ prog = os.path.basename(sys.argv[0])\n+ args = ' '.join(get_os_args())\n+ yield comment('# {prog} {args}'.format(prog=prog, args=args))\n yield comment('#')\n \n def write_index_options(self):\n", "issue": "Command in autogenerated requirements.txt can be shortened\nWhen I run `pip-compile`, my requirements.txt has\r\n\r\n```\r\n#\r\n# This file is autogenerated by pip-compile\r\n# To update, run:\r\n#\r\n# pip-compile --output-file requirements.txt requirements.in\r\n#\r\n```\r\n\r\nBut I think the `--output-file requirements.txt` can just be dropped (for brevity) when the written file itself is named `requirements.txt`.\r\n\r\nI'm recommending this because `pip-compile` already goes ahead and modifies `requirements.txt` when no options are specified. Thoughts?\n", "before_files": [{"content": "import os\nfrom itertools import chain\n\nfrom ._compat import ExitStack\nfrom .click import unstyle\nfrom .io import AtomicSaver\nfrom .logging import log\nfrom .utils import comment, dedup, format_requirement, key_from_req, UNSAFE_PACKAGES\n\n\nclass OutputWriter(object):\n def __init__(self, src_files, dst_file, dry_run, emit_header, emit_index,\n emit_trusted_host, annotate, generate_hashes,\n default_index_url, index_urls, trusted_hosts, format_control,\n allow_unsafe):\n self.src_files = src_files\n self.dst_file = dst_file\n self.dry_run = dry_run\n self.emit_header = emit_header\n self.emit_index = emit_index\n self.emit_trusted_host = emit_trusted_host\n self.annotate = annotate\n self.generate_hashes = generate_hashes\n self.default_index_url = default_index_url\n self.index_urls = index_urls\n self.trusted_hosts = trusted_hosts\n self.format_control = format_control\n self.allow_unsafe = allow_unsafe\n\n def _sort_key(self, ireq):\n return (not ireq.editable, str(ireq.req).lower())\n\n def write_header(self):\n if self.emit_header:\n yield comment('#')\n yield comment('# This file is autogenerated by pip-compile')\n yield comment('# To update, run:')\n yield comment('#')\n custom_cmd = os.environ.get('CUSTOM_COMPILE_COMMAND')\n if custom_cmd:\n yield comment('# {}'.format(custom_cmd))\n else:\n params = []\n if not self.emit_index:\n params += ['--no-index']\n if not self.emit_trusted_host:\n params += ['--no-emit-trusted-host']\n if not self.annotate:\n params += ['--no-annotate']\n if self.generate_hashes:\n params += [\"--generate-hashes\"]\n if self.allow_unsafe:\n params += [\"--allow-unsafe\"]\n params += ['--output-file', self.dst_file]\n params += self.src_files\n yield comment('# pip-compile 
{}'.format(' '.join(params)))\n yield comment('#')\n\n def write_index_options(self):\n if self.emit_index:\n for index, index_url in enumerate(dedup(self.index_urls)):\n if index_url.rstrip('/') == self.default_index_url:\n continue\n flag = '--index-url' if index == 0 else '--extra-index-url'\n yield '{} {}'.format(flag, index_url)\n\n def write_trusted_hosts(self):\n if self.emit_trusted_host:\n for trusted_host in dedup(self.trusted_hosts):\n yield '--trusted-host {}'.format(trusted_host)\n\n def write_format_controls(self):\n for nb in dedup(self.format_control.no_binary):\n yield '--no-binary {}'.format(nb)\n for ob in dedup(self.format_control.only_binary):\n yield '--only-binary {}'.format(ob)\n\n def write_flags(self):\n emitted = False\n for line in chain(self.write_index_options(),\n self.write_trusted_hosts(),\n self.write_format_controls()):\n emitted = True\n yield line\n if emitted:\n yield ''\n\n def _iter_lines(self, results, unsafe_requirements, reverse_dependencies,\n primary_packages, markers, hashes):\n for line in self.write_header():\n yield line\n for line in self.write_flags():\n yield line\n\n unsafe_requirements = {r for r in results if r.name in UNSAFE_PACKAGES} if not unsafe_requirements else unsafe_requirements # noqa\n packages = {r for r in results if r.name not in UNSAFE_PACKAGES}\n\n packages = sorted(packages, key=self._sort_key)\n\n for ireq in packages:\n line = self._format_requirement(\n ireq, reverse_dependencies, primary_packages,\n markers.get(key_from_req(ireq.req)), hashes=hashes)\n yield line\n\n if unsafe_requirements:\n unsafe_requirements = sorted(unsafe_requirements, key=self._sort_key)\n yield ''\n yield comment('# The following packages are considered to be unsafe in a requirements file:')\n\n for ireq in unsafe_requirements:\n req = self._format_requirement(ireq,\n reverse_dependencies,\n primary_packages,\n marker=markers.get(key_from_req(ireq.req)),\n hashes=hashes)\n if not self.allow_unsafe:\n yield comment('# {}'.format(req))\n else:\n yield req\n\n def write(self, results, unsafe_requirements, reverse_dependencies,\n primary_packages, markers, hashes):\n with ExitStack() as stack:\n f = None\n if not self.dry_run:\n f = stack.enter_context(AtomicSaver(self.dst_file))\n\n for line in self._iter_lines(results, unsafe_requirements, reverse_dependencies,\n primary_packages, markers, hashes):\n log.info(line)\n if f:\n f.write(unstyle(line).encode('utf-8'))\n f.write(os.linesep.encode('utf-8'))\n\n def _format_requirement(self, ireq, reverse_dependencies, primary_packages, marker=None, hashes=None):\n ireq_hashes = (hashes if hashes is not None else {}).get(ireq)\n\n line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)\n\n if not self.annotate or key_from_req(ireq.req) in primary_packages:\n return line\n\n # Annotate what packages this package is required by\n required_by = reverse_dependencies.get(ireq.name.lower(), [])\n if required_by:\n annotation = \", \".join(sorted(required_by))\n line = \"{:24}{}{}\".format(\n line,\n \" \\\\\\n \" if ireq_hashes else \" \",\n comment(\"# via \" + annotation))\n return line\n", "path": "piptools/writer.py"}], "after_files": [{"content": "import os\nimport sys\nfrom itertools import chain\n\nfrom ._compat import ExitStack\nfrom .click import unstyle, get_os_args\nfrom .io import AtomicSaver\nfrom .logging import log\nfrom .utils import comment, dedup, format_requirement, key_from_req, UNSAFE_PACKAGES\n\n\nclass OutputWriter(object):\n def __init__(self, src_files, dst_file, dry_run, 
emit_header, emit_index,\n emit_trusted_host, annotate, generate_hashes,\n default_index_url, index_urls, trusted_hosts, format_control,\n allow_unsafe):\n self.src_files = src_files\n self.dst_file = dst_file\n self.dry_run = dry_run\n self.emit_header = emit_header\n self.emit_index = emit_index\n self.emit_trusted_host = emit_trusted_host\n self.annotate = annotate\n self.generate_hashes = generate_hashes\n self.default_index_url = default_index_url\n self.index_urls = index_urls\n self.trusted_hosts = trusted_hosts\n self.format_control = format_control\n self.allow_unsafe = allow_unsafe\n\n def _sort_key(self, ireq):\n return (not ireq.editable, str(ireq.req).lower())\n\n def write_header(self):\n if self.emit_header:\n yield comment('#')\n yield comment('# This file is autogenerated by pip-compile')\n yield comment('# To update, run:')\n yield comment('#')\n custom_cmd = os.environ.get('CUSTOM_COMPILE_COMMAND')\n if custom_cmd:\n yield comment('# {}'.format(custom_cmd))\n else:\n prog = os.path.basename(sys.argv[0])\n args = ' '.join(get_os_args())\n yield comment('# {prog} {args}'.format(prog=prog, args=args))\n yield comment('#')\n\n def write_index_options(self):\n if self.emit_index:\n for index, index_url in enumerate(dedup(self.index_urls)):\n if index_url.rstrip('/') == self.default_index_url:\n continue\n flag = '--index-url' if index == 0 else '--extra-index-url'\n yield '{} {}'.format(flag, index_url)\n\n def write_trusted_hosts(self):\n if self.emit_trusted_host:\n for trusted_host in dedup(self.trusted_hosts):\n yield '--trusted-host {}'.format(trusted_host)\n\n def write_format_controls(self):\n for nb in dedup(self.format_control.no_binary):\n yield '--no-binary {}'.format(nb)\n for ob in dedup(self.format_control.only_binary):\n yield '--only-binary {}'.format(ob)\n\n def write_flags(self):\n emitted = False\n for line in chain(self.write_index_options(),\n self.write_trusted_hosts(),\n self.write_format_controls()):\n emitted = True\n yield line\n if emitted:\n yield ''\n\n def _iter_lines(self, results, unsafe_requirements, reverse_dependencies,\n primary_packages, markers, hashes):\n for line in self.write_header():\n yield line\n for line in self.write_flags():\n yield line\n\n unsafe_requirements = {r for r in results if r.name in UNSAFE_PACKAGES} if not unsafe_requirements else unsafe_requirements # noqa\n packages = {r for r in results if r.name not in UNSAFE_PACKAGES}\n\n packages = sorted(packages, key=self._sort_key)\n\n for ireq in packages:\n line = self._format_requirement(\n ireq, reverse_dependencies, primary_packages,\n markers.get(key_from_req(ireq.req)), hashes=hashes)\n yield line\n\n if unsafe_requirements:\n unsafe_requirements = sorted(unsafe_requirements, key=self._sort_key)\n yield ''\n yield comment('# The following packages are considered to be unsafe in a requirements file:')\n\n for ireq in unsafe_requirements:\n req = self._format_requirement(ireq,\n reverse_dependencies,\n primary_packages,\n marker=markers.get(key_from_req(ireq.req)),\n hashes=hashes)\n if not self.allow_unsafe:\n yield comment('# {}'.format(req))\n else:\n yield req\n\n def write(self, results, unsafe_requirements, reverse_dependencies,\n primary_packages, markers, hashes):\n with ExitStack() as stack:\n f = None\n if not self.dry_run:\n f = stack.enter_context(AtomicSaver(self.dst_file))\n\n for line in self._iter_lines(results, unsafe_requirements, reverse_dependencies,\n primary_packages, markers, hashes):\n log.info(line)\n if f:\n 
f.write(unstyle(line).encode('utf-8'))\n f.write(os.linesep.encode('utf-8'))\n\n def _format_requirement(self, ireq, reverse_dependencies, primary_packages, marker=None, hashes=None):\n ireq_hashes = (hashes if hashes is not None else {}).get(ireq)\n\n line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)\n\n if not self.annotate or key_from_req(ireq.req) in primary_packages:\n return line\n\n # Annotate what packages this package is required by\n required_by = reverse_dependencies.get(ireq.name.lower(), [])\n if required_by:\n annotation = \", \".join(sorted(required_by))\n line = \"{:24}{}{}\".format(\n line,\n \" \\\\\\n \" if ireq_hashes else \" \",\n comment(\"# via \" + annotation))\n return line\n", "path": "piptools/writer.py"}]} | 1,960 | 341 |
gh_patches_debug_14738 | rasdani/github-patches | git_diff | crytic__slither-530 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Suicidal detector fails on external functions
If the [example](https://github.com/crytic/slither/wiki/Detector-Documentation#suicidal) function for the suicidal detector is changed from `public` to `external`, the issue is no longer flagged.
```
pragma solidity ^0.5.0;
contract Suicidal{
function kill() external{
selfdestruct(msg.sender);
}
}
```
`slither --version`: 0.6.12
`solc --version`: 0.5.15
--- END ISSUE ---
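The report reduces to a visibility filter that is too narrow: an `external` function is just as callable by an arbitrary account as a `public` one. Below is a stand-alone sketch of the broader check, using stub objects rather than Slither's real classes:
```python
from types import SimpleNamespace

REACHABLE_VISIBILITIES = {"public", "external"}

def should_screen_for_selfdestruct(func):
    # Same attributes the detector already reads (is_constructor, visibility).
    if func.is_constructor:
        return False
    return func.visibility in REACHABLE_VISIBILITIES

kill_external = SimpleNamespace(is_constructor=False, visibility="external")
print(should_screen_for_selfdestruct(kill_external))  # True -> no longer skipped
```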
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `slither/detectors/functions/suicidal.py`
Content:
```
1 """
2 Module detecting suicidal contract
3
4 A suicidal contract is an unprotected function that calls selfdestruct
5 """
6
7 from slither.detectors.abstract_detector import AbstractDetector, DetectorClassification
8
9
10 class Suicidal(AbstractDetector):
11 """
12 Unprotected function detector
13 """
14
15 ARGUMENT = 'suicidal'
16 HELP = 'Functions allowing anyone to destruct the contract'
17 IMPACT = DetectorClassification.HIGH
18 CONFIDENCE = DetectorClassification.HIGH
19
20 WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#suicidal'
21
22
23 WIKI_TITLE = 'Suicidal'
24 WIKI_DESCRIPTION = 'Unprotected call to a function executing `selfdestruct`/`suicide`.'
25 WIKI_EXPLOIT_SCENARIO = '''
26 ```solidity
27 contract Suicidal{
28 function kill() public{
29 selfdestruct(msg.sender);
30 }
31 }
32 ```
33 Bob calls `kill` and destructs the contract.'''
34
35 WIKI_RECOMMENDATION = 'Protect access to all sensitive functions.'
36
37 @staticmethod
38 def detect_suicidal_func(func):
39 """ Detect if the function is suicidal
40
41 Detect the public functions calling suicide/selfdestruct without protection
42 Returns:
43 (bool): True if the function is suicidal
44 """
45
46 if func.is_constructor:
47 return False
48
49 if func.visibility != 'public':
50 return False
51
52 calls = [c.name for c in func.internal_calls]
53 if not ('suicide(address)' in calls or 'selfdestruct(address)' in calls):
54 return False
55
56 if func.is_protected():
57 return False
58
59 return True
60
61 def detect_suicidal(self, contract):
62 ret = []
63 for f in [f for f in contract.functions if f.contract_declarer == contract]:
64 if self.detect_suicidal_func(f):
65 ret.append(f)
66 return ret
67
68 def _detect(self):
69 """ Detect the suicidal functions
70 """
71 results = []
72 for c in self.contracts:
73 functions = self.detect_suicidal(c)
74 for func in functions:
75
76 info = [func, " allows anyone to destruct the contract\n"]
77
78 res = self.generate_result(info)
79
80 results.append(res)
81
82 return results
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/slither/detectors/functions/suicidal.py b/slither/detectors/functions/suicidal.py
--- a/slither/detectors/functions/suicidal.py
+++ b/slither/detectors/functions/suicidal.py
@@ -46,7 +46,7 @@
if func.is_constructor:
return False
- if func.visibility != 'public':
+ if func.visibility not in ['public', 'external']:
return False
calls = [c.name for c in func.internal_calls]
@@ -60,7 +60,7 @@
def detect_suicidal(self, contract):
ret = []
- for f in [f for f in contract.functions if f.contract_declarer == contract]:
+ for f in contract.functions_declared:
if self.detect_suicidal_func(f):
ret.append(f)
return ret
| {"golden_diff": "diff --git a/slither/detectors/functions/suicidal.py b/slither/detectors/functions/suicidal.py\n--- a/slither/detectors/functions/suicidal.py\n+++ b/slither/detectors/functions/suicidal.py\n@@ -46,7 +46,7 @@\n if func.is_constructor:\n return False\n \n- if func.visibility != 'public':\n+ if func.visibility not in ['public', 'external']:\n return False\n \n calls = [c.name for c in func.internal_calls]\n@@ -60,7 +60,7 @@\n \n def detect_suicidal(self, contract):\n ret = []\n- for f in [f for f in contract.functions if f.contract_declarer == contract]:\n+ for f in contract.functions_declared:\n if self.detect_suicidal_func(f):\n ret.append(f)\n return ret\n", "issue": "Suicidal detector fails on external functions\nIf the [example](https://github.com/crytic/slither/wiki/Detector-Documentation#suicidal) function for the suicidal detector is changed from `public` to `external` the issue is no longer flagged.\r\n\r\n```\r\npragma solidity ^0.5.0;\r\ncontract Suicidal{\r\n function kill() external{\r\n selfdestruct(msg.sender);\r\n }\r\n}\r\n```\r\n\r\n`slither --version`: 0.6.12\r\n`solc --version`: 0.5.15\nSuicidal detector fails on external functions\nIf the [example](https://github.com/crytic/slither/wiki/Detector-Documentation#suicidal) function for the suicidal detector is changed from `public` to `external` the issue is no longer flagged.\r\n\r\n```\r\npragma solidity ^0.5.0;\r\ncontract Suicidal{\r\n function kill() external{\r\n selfdestruct(msg.sender);\r\n }\r\n}\r\n```\r\n\r\n`slither --version`: 0.6.12\r\n`solc --version`: 0.5.15\n", "before_files": [{"content": "\"\"\"\nModule detecting suicidal contract\n\nA suicidal contract is an unprotected function that calls selfdestruct\n\"\"\"\n\nfrom slither.detectors.abstract_detector import AbstractDetector, DetectorClassification\n\n\nclass Suicidal(AbstractDetector):\n \"\"\"\n Unprotected function detector\n \"\"\"\n\n ARGUMENT = 'suicidal'\n HELP = 'Functions allowing anyone to destruct the contract'\n IMPACT = DetectorClassification.HIGH\n CONFIDENCE = DetectorClassification.HIGH\n\n WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#suicidal'\n\n\n WIKI_TITLE = 'Suicidal'\n WIKI_DESCRIPTION = 'Unprotected call to a function executing `selfdestruct`/`suicide`.'\n WIKI_EXPLOIT_SCENARIO = '''\n```solidity\ncontract Suicidal{\n function kill() public{\n selfdestruct(msg.sender);\n }\n}\n```\nBob calls `kill` and destructs the contract.'''\n\n WIKI_RECOMMENDATION = 'Protect access to all sensitive functions.'\n\n @staticmethod\n def detect_suicidal_func(func):\n \"\"\" Detect if the function is suicidal\n\n Detect the public functions calling suicide/selfdestruct without protection\n Returns:\n (bool): True if the function is suicidal\n \"\"\"\n\n if func.is_constructor:\n return False\n\n if func.visibility != 'public':\n return False\n\n calls = [c.name for c in func.internal_calls]\n if not ('suicide(address)' in calls or 'selfdestruct(address)' in calls):\n return False\n\n if func.is_protected():\n return False\n\n return True\n\n def detect_suicidal(self, contract):\n ret = []\n for f in [f for f in contract.functions if f.contract_declarer == contract]:\n if self.detect_suicidal_func(f):\n ret.append(f)\n return ret\n\n def _detect(self):\n \"\"\" Detect the suicidal functions\n \"\"\"\n results = []\n for c in self.contracts:\n functions = self.detect_suicidal(c)\n for func in functions:\n\n info = [func, \" allows anyone to destruct the contract\\n\"]\n\n res = 
self.generate_result(info)\n\n results.append(res)\n\n return results\n", "path": "slither/detectors/functions/suicidal.py"}], "after_files": [{"content": "\"\"\"\nModule detecting suicidal contract\n\nA suicidal contract is an unprotected function that calls selfdestruct\n\"\"\"\n\nfrom slither.detectors.abstract_detector import AbstractDetector, DetectorClassification\n\n\nclass Suicidal(AbstractDetector):\n \"\"\"\n Unprotected function detector\n \"\"\"\n\n ARGUMENT = 'suicidal'\n HELP = 'Functions allowing anyone to destruct the contract'\n IMPACT = DetectorClassification.HIGH\n CONFIDENCE = DetectorClassification.HIGH\n\n WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#suicidal'\n\n\n WIKI_TITLE = 'Suicidal'\n WIKI_DESCRIPTION = 'Unprotected call to a function executing `selfdestruct`/`suicide`.'\n WIKI_EXPLOIT_SCENARIO = '''\n```solidity\ncontract Suicidal{\n function kill() public{\n selfdestruct(msg.sender);\n }\n}\n```\nBob calls `kill` and destructs the contract.'''\n\n WIKI_RECOMMENDATION = 'Protect access to all sensitive functions.'\n\n @staticmethod\n def detect_suicidal_func(func):\n \"\"\" Detect if the function is suicidal\n\n Detect the public functions calling suicide/selfdestruct without protection\n Returns:\n (bool): True if the function is suicidal\n \"\"\"\n\n if func.is_constructor:\n return False\n\n if func.visibility not in ['public', 'external']:\n return False\n\n calls = [c.name for c in func.internal_calls]\n if not ('suicide(address)' in calls or 'selfdestruct(address)' in calls):\n return False\n\n if func.is_protected():\n return False\n\n return True\n\n def detect_suicidal(self, contract):\n ret = []\n for f in contract.functions_declared:\n if self.detect_suicidal_func(f):\n ret.append(f)\n return ret\n\n def _detect(self):\n \"\"\" Detect the suicidal functions\n \"\"\"\n results = []\n for c in self.contracts:\n functions = self.detect_suicidal(c)\n for func in functions:\n\n info = [func, \" allows anyone to destruct the contract\\n\"]\n\n res = self.generate_result(info)\n\n results.append(res)\n\n return results\n", "path": "slither/detectors/functions/suicidal.py"}]} | 1,144 | 194 |
gh_patches_debug_18153 | rasdani/github-patches | git_diff | openmc-dev__openmc-2906 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Implement contains for BoundingBox
## Description
Implement `__contains__` for `BoundingBox` so that it accepts either a point or another `BoundingBox`. This means that users could then write:
`if point in box:` or `if little_box in big_box`.
## Alternatives
Users can accomplish this currently, but it requires extra helper code to keep the check readable:
``` python
def in_box(point, box):
for min_p, p, max_p in zip(box.lower_left, point, box.upper_right):
        if p < min_p or p > max_p:
return False
return True
```
## Compatibility
This would be an enhancement, and would not alter the behavior of the existing API.
There is a risk, though, that users will misinterpret the results. A point in the bounding box of a volume *may* be in the volume, but not necessarily. A user could misuse this information and create problems for themselves. Also, a small volume's bounding box can be completely contained in another volume's bounding box while the small volume itself lies completely outside that other volume.
--- END ISSUE ---
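To make the requested semantics concrete, here is a self-contained sketch of the point and box membership tests (plain NumPy, not the eventual OpenMC API), together with the false-positive caveat from the compatibility note:
```python
import numpy as np

big_lower, big_upper = np.array([-10.0] * 3), np.array([10.0] * 3)
point = np.array([1.0, 2.0, 3.0])

# Point membership: strictly inside along every axis.
point_inside = np.all(point > big_lower) and np.all(point < big_upper)

# Box membership: both corners of the smaller box lie inside the larger one.
small_lower, small_upper = np.array([-1.0] * 3), np.array([1.0] * 3)
box_inside = (np.all(small_lower > big_lower) and np.all(small_lower < big_upper)
              and np.all(small_upper > big_lower) and np.all(small_upper < big_upper))
print(point_inside, box_inside)  # True True

# Caveat: bounding-box membership is necessary, not sufficient. A corner of a
# unit sphere's box is in the box but outside the sphere itself:
corner = np.array([0.99, 0.99, 0.99])
print(bool(np.all(np.abs(corner) < 1.0)))   # True  (inside the box)
print(bool(float(corner @ corner) < 1.0))   # False (|corner|^2 is about 2.94)
```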
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openmc/bounding_box.py`
Content:
```
1 from __future__ import annotations
2 from typing import Iterable
3
4 import numpy as np
5
6 from .checkvalue import check_length
7
8
9 class BoundingBox:
10 """Axis-aligned bounding box.
11
12 .. versionadded:: 0.14.0
13
14 Parameters
15 ----------
16 lower_left : iterable of float
17 The x, y, z coordinates of the lower left corner of the bounding box in [cm]
18 upper_right : iterable of float
19 The x, y, z coordinates of the upper right corner of the bounding box in [cm]
20
21 Attributes
22 ----------
23 center : numpy.ndarray
24 x, y, z coordinates of the center of the bounding box in [cm]
25 lower_left : numpy.ndarray
26 The x, y, z coordinates of the lower left corner of the bounding box in [cm]
27 upper_right : numpy.ndarray
28 The x, y, z coordinates of the upper right corner of the bounding box in [cm]
29 volume : float
30 The volume of the bounding box in [cm^3]
31 extent : dict
32 A dictionary of basis as keys and the extent (left, right, bottom, top)
33 as values. Intended use in Matplotlib plots when setting extent
34 width : iterable of float
35 The width of the x, y and z axis in [cm]
36 """
37
38 def __init__(self, lower_left: Iterable[float], upper_right: Iterable[float]):
39 check_length("lower_left", lower_left, 3, 3)
40 check_length("upper_right", upper_right, 3, 3)
41 self._bounds = np.asarray([lower_left, upper_right], dtype=float)
42
43 def __repr__(self) -> str:
44 return "BoundingBox(lower_left={}, upper_right={})".format(
45 tuple(self.lower_left), tuple(self.upper_right))
46
47 def __getitem__(self, key) -> np.ndarray:
48 return self._bounds[key]
49
50 def __len__(self):
51 return 2
52
53 def __setitem__(self, key, val):
54 self._bounds[key] = val
55
56 def __iand__(self, other: BoundingBox) -> BoundingBox:
57 """Updates the box be the intersection of itself and another box
58
59 Parameters
60 ----------
61 other : BoundingBox
62 The box used to resize this box
63
64 Returns
65 -------
66 An updated bounding box
67 """
68 self.lower_left = np.maximum(self.lower_left, other.lower_left)
69 self.upper_right = np.minimum(self.upper_right, other.upper_right)
70 return self
71
72 def __and__(self, other: BoundingBox) -> BoundingBox:
73 new = BoundingBox(*self)
74 new &= other
75 return new
76
77 def __ior__(self, other: BoundingBox) -> BoundingBox:
78 """Updates the box be the union of itself and another box
79
80 Parameters
81 ----------
82 other : BoundingBox
83 The box used to resize this box
84
85 Returns
86 -------
87 An updated bounding box
88 """
89 self.lower_left = np.minimum(self.lower_left, other.lower_left)
90 self.upper_right = np.maximum(self.upper_right, other.upper_right)
91 return self
92
93 def __or__(self, other: BoundingBox) -> BoundingBox:
94 new = BoundingBox(*self)
95 new |= other
96 return new
97
98 def __contains__(self, point):
99 """Check whether or not a point is in the bounding box"""
100 return all(point > self.lower_left) and all(point < self.upper_right)
101
102 @property
103 def center(self) -> np.ndarray:
104 return (self[0] + self[1]) / 2
105
106 @property
107 def lower_left(self) -> np.ndarray:
108 return self[0]
109
110 @lower_left.setter
111 def lower_left(self, llc):
112 check_length('lower_left', llc, 3, 3)
113 self[0] = llc
114
115 @property
116 def upper_right(self) -> np.ndarray:
117 return self[1]
118
119 @upper_right.setter
120 def upper_right(self, urc):
121 check_length('upper_right', urc, 3, 3)
122 self[1] = urc
123
124 @property
125 def volume(self) -> float:
126 return np.abs(np.prod(self[1] - self[0]))
127
128 @property
129 def extent(self):
130 return {
131 "xy": (
132 self.lower_left[0],
133 self.upper_right[0],
134 self.lower_left[1],
135 self.upper_right[1],
136 ),
137 "xz": (
138 self.lower_left[0],
139 self.upper_right[0],
140 self.lower_left[2],
141 self.upper_right[2],
142 ),
143 "yz": (
144 self.lower_left[1],
145 self.upper_right[1],
146 self.lower_left[2],
147 self.upper_right[2],
148 ),
149 }
150
151 @property
152 def width(self):
153 return self.upper_right - self.lower_left
154
155 def expand(self, padding_distance: float, inplace: bool = False) -> BoundingBox:
156 """Returns an expanded bounding box
157
158 Parameters
159 ----------
160 padding_distance : float
161 The distance to enlarge the bounding box by
162 inplace : bool
163 Whether or not to return a new BoundingBox instance or to modify the
164 current BoundingBox object.
165
166 Returns
167 -------
168 An expanded bounding box
169 """
170 if inplace:
171 self[0] -= padding_distance
172 self[1] += padding_distance
173 return self
174 else:
175 return BoundingBox(self[0] - padding_distance, self[1] + padding_distance)
176
177 @classmethod
178 def infinite(cls) -> BoundingBox:
179 """Create an infinite box. Useful as a starting point for determining
180 geometry bounds.
181
182 Returns
183 -------
184 An infinitely large bounding box.
185 """
186 infs = np.full((3,), np.inf)
187 return cls(-infs, infs)
188
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openmc/bounding_box.py b/openmc/bounding_box.py
--- a/openmc/bounding_box.py
+++ b/openmc/bounding_box.py
@@ -95,9 +95,23 @@
new |= other
return new
- def __contains__(self, point):
- """Check whether or not a point is in the bounding box"""
- return all(point > self.lower_left) and all(point < self.upper_right)
+ def __contains__(self, other):
+ """Check whether or not a point or another bounding box is in the bounding box.
+
+ For another bounding box to be in the parent it must lie fully inside of it.
+ """
+ # test for a single point
+ if isinstance(other, (tuple, list, np.ndarray)):
+ point = other
+ check_length("Point", point, 3, 3)
+ return all(point > self.lower_left) and all(point < self.upper_right)
+ elif isinstance(other, BoundingBox):
+ return all([p in self for p in [other.lower_left, other.upper_right]])
+ else:
+ raise TypeError(
+ f"Unable to determine if {other} is in the bounding box."
+ f" Expected a tuple or a bounding box, but {type(other)} given"
+ )
@property
def center(self) -> np.ndarray:
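
With the patch above applied, the new `__contains__` accepts either a point or another box. A minimal usage sketch (it assumes `BoundingBox` is importable from the top-level `openmc` package, as in recent releases; coordinates are illustrative):

```python
import openmc

# Illustrative boxes; the coordinates are arbitrary.
big = openmc.BoundingBox((-10.0, -10.0, -10.0), (10.0, 10.0, 10.0))
small = openmc.BoundingBox((-1.0, -1.0, -1.0), (1.0, 1.0, 1.0))

print((0.0, 0.0, 0.0) in big)  # True: the point lies strictly inside `big`
print(small in big)            # True: both corners of `small` lie inside `big`
print(big in small)            # False: `big` extends beyond `small`
```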
| {"golden_diff": "diff --git a/openmc/bounding_box.py b/openmc/bounding_box.py\n--- a/openmc/bounding_box.py\n+++ b/openmc/bounding_box.py\n@@ -95,9 +95,23 @@\n new |= other\n return new\n \n- def __contains__(self, point):\n- \"\"\"Check whether or not a point is in the bounding box\"\"\"\n- return all(point > self.lower_left) and all(point < self.upper_right)\n+ def __contains__(self, other):\n+ \"\"\"Check whether or not a point or another bounding box is in the bounding box.\n+\n+ For another bounding box to be in the parent it must lie fully inside of it.\n+ \"\"\"\n+ # test for a single point\n+ if isinstance(other, (tuple, list, np.ndarray)):\n+ point = other\n+ check_length(\"Point\", point, 3, 3)\n+ return all(point > self.lower_left) and all(point < self.upper_right)\n+ elif isinstance(other, BoundingBox):\n+ return all([p in self for p in [other.lower_left, other.upper_right]])\n+ else:\n+ raise TypeError(\n+ f\"Unable to determine if {other} is in the bounding box.\"\n+ f\" Expected a tuple or a bounding box, but {type(other)} given\"\n+ )\n \n @property\n def center(self) -> np.ndarray:\n", "issue": "Implement contains for BoundingBox\n## Description\r\nImplement `__contains__` for `BoundingBox` containing either a point, or another `BoundingBox`. This means that users could then write:\r\n\r\n`if point in box:` or `if little_box in big_box`.\r\n\r\n\r\n## Alternatives\r\nIt is possible for users to accomplish this currently but requires some clever coding to avoid becoming difficult to read:\r\n\r\n``` python\r\ndef in_box(point, box):\r\n for min_p, p, max_p in zip(box.lower_left, point, box.upper_right):\r\n if p < min_p or > max_p:\r\n return False \r\n return True\r\n```\r\n\r\n\r\n## Compatibility\r\nThis would be an enhancement, and would not alter the behavior of the existing API. \r\n\r\nThere is a risk though that users will misinterpret the results. A point in the bounding box of a volume *may* be in the volume, but not necessarily. A user could misuse this information and create problems for themselves. Also a small volume's bounding box can be completely contained in another volume's bounding box, and be completely outside that other volume. \n", "before_files": [{"content": "from __future__ import annotations\nfrom typing import Iterable\n\nimport numpy as np\n\nfrom .checkvalue import check_length\n\n\nclass BoundingBox:\n \"\"\"Axis-aligned bounding box.\n\n .. versionadded:: 0.14.0\n\n Parameters\n ----------\n lower_left : iterable of float\n The x, y, z coordinates of the lower left corner of the bounding box in [cm]\n upper_right : iterable of float\n The x, y, z coordinates of the upper right corner of the bounding box in [cm]\n\n Attributes\n ----------\n center : numpy.ndarray\n x, y, z coordinates of the center of the bounding box in [cm]\n lower_left : numpy.ndarray\n The x, y, z coordinates of the lower left corner of the bounding box in [cm]\n upper_right : numpy.ndarray\n The x, y, z coordinates of the upper right corner of the bounding box in [cm]\n volume : float\n The volume of the bounding box in [cm^3]\n extent : dict\n A dictionary of basis as keys and the extent (left, right, bottom, top)\n as values. 
Intended use in Matplotlib plots when setting extent\n width : iterable of float\n The width of the x, y and z axis in [cm]\n \"\"\"\n\n def __init__(self, lower_left: Iterable[float], upper_right: Iterable[float]):\n check_length(\"lower_left\", lower_left, 3, 3)\n check_length(\"upper_right\", upper_right, 3, 3)\n self._bounds = np.asarray([lower_left, upper_right], dtype=float)\n\n def __repr__(self) -> str:\n return \"BoundingBox(lower_left={}, upper_right={})\".format(\n tuple(self.lower_left), tuple(self.upper_right))\n\n def __getitem__(self, key) -> np.ndarray:\n return self._bounds[key]\n\n def __len__(self):\n return 2\n\n def __setitem__(self, key, val):\n self._bounds[key] = val\n\n def __iand__(self, other: BoundingBox) -> BoundingBox:\n \"\"\"Updates the box be the intersection of itself and another box\n\n Parameters\n ----------\n other : BoundingBox\n The box used to resize this box\n\n Returns\n -------\n An updated bounding box\n \"\"\"\n self.lower_left = np.maximum(self.lower_left, other.lower_left)\n self.upper_right = np.minimum(self.upper_right, other.upper_right)\n return self\n\n def __and__(self, other: BoundingBox) -> BoundingBox:\n new = BoundingBox(*self)\n new &= other\n return new\n\n def __ior__(self, other: BoundingBox) -> BoundingBox:\n \"\"\"Updates the box be the union of itself and another box\n\n Parameters\n ----------\n other : BoundingBox\n The box used to resize this box\n\n Returns\n -------\n An updated bounding box\n \"\"\"\n self.lower_left = np.minimum(self.lower_left, other.lower_left)\n self.upper_right = np.maximum(self.upper_right, other.upper_right)\n return self\n\n def __or__(self, other: BoundingBox) -> BoundingBox:\n new = BoundingBox(*self)\n new |= other\n return new\n\n def __contains__(self, point):\n \"\"\"Check whether or not a point is in the bounding box\"\"\"\n return all(point > self.lower_left) and all(point < self.upper_right)\n\n @property\n def center(self) -> np.ndarray:\n return (self[0] + self[1]) / 2\n\n @property\n def lower_left(self) -> np.ndarray:\n return self[0]\n\n @lower_left.setter\n def lower_left(self, llc):\n check_length('lower_left', llc, 3, 3)\n self[0] = llc\n\n @property\n def upper_right(self) -> np.ndarray:\n return self[1]\n\n @upper_right.setter\n def upper_right(self, urc):\n check_length('upper_right', urc, 3, 3)\n self[1] = urc\n\n @property\n def volume(self) -> float:\n return np.abs(np.prod(self[1] - self[0]))\n\n @property\n def extent(self):\n return {\n \"xy\": (\n self.lower_left[0],\n self.upper_right[0],\n self.lower_left[1],\n self.upper_right[1],\n ),\n \"xz\": (\n self.lower_left[0],\n self.upper_right[0],\n self.lower_left[2],\n self.upper_right[2],\n ),\n \"yz\": (\n self.lower_left[1],\n self.upper_right[1],\n self.lower_left[2],\n self.upper_right[2],\n ),\n }\n\n @property\n def width(self):\n return self.upper_right - self.lower_left\n\n def expand(self, padding_distance: float, inplace: bool = False) -> BoundingBox:\n \"\"\"Returns an expanded bounding box\n\n Parameters\n ----------\n padding_distance : float\n The distance to enlarge the bounding box by\n inplace : bool\n Whether or not to return a new BoundingBox instance or to modify the\n current BoundingBox object.\n\n Returns\n -------\n An expanded bounding box\n \"\"\"\n if inplace:\n self[0] -= padding_distance\n self[1] += padding_distance\n return self\n else:\n return BoundingBox(self[0] - padding_distance, self[1] + padding_distance)\n\n @classmethod\n def infinite(cls) -> BoundingBox:\n \"\"\"Create an 
infinite box. Useful as a starting point for determining\n geometry bounds.\n\n Returns\n -------\n An infinitely large bounding box.\n \"\"\"\n infs = np.full((3,), np.inf)\n return cls(-infs, infs)\n", "path": "openmc/bounding_box.py"}], "after_files": [{"content": "from __future__ import annotations\nfrom typing import Iterable\n\nimport numpy as np\n\nfrom .checkvalue import check_length\n\n\nclass BoundingBox:\n \"\"\"Axis-aligned bounding box.\n\n .. versionadded:: 0.14.0\n\n Parameters\n ----------\n lower_left : iterable of float\n The x, y, z coordinates of the lower left corner of the bounding box in [cm]\n upper_right : iterable of float\n The x, y, z coordinates of the upper right corner of the bounding box in [cm]\n\n Attributes\n ----------\n center : numpy.ndarray\n x, y, z coordinates of the center of the bounding box in [cm]\n lower_left : numpy.ndarray\n The x, y, z coordinates of the lower left corner of the bounding box in [cm]\n upper_right : numpy.ndarray\n The x, y, z coordinates of the upper right corner of the bounding box in [cm]\n volume : float\n The volume of the bounding box in [cm^3]\n extent : dict\n A dictionary of basis as keys and the extent (left, right, bottom, top)\n as values. Intended use in Matplotlib plots when setting extent\n width : iterable of float\n The width of the x, y and z axis in [cm]\n \"\"\"\n\n def __init__(self, lower_left: Iterable[float], upper_right: Iterable[float]):\n check_length(\"lower_left\", lower_left, 3, 3)\n check_length(\"upper_right\", upper_right, 3, 3)\n self._bounds = np.asarray([lower_left, upper_right], dtype=float)\n\n def __repr__(self) -> str:\n return \"BoundingBox(lower_left={}, upper_right={})\".format(\n tuple(self.lower_left), tuple(self.upper_right))\n\n def __getitem__(self, key) -> np.ndarray:\n return self._bounds[key]\n\n def __len__(self):\n return 2\n\n def __setitem__(self, key, val):\n self._bounds[key] = val\n\n def __iand__(self, other: BoundingBox) -> BoundingBox:\n \"\"\"Updates the box be the intersection of itself and another box\n\n Parameters\n ----------\n other : BoundingBox\n The box used to resize this box\n\n Returns\n -------\n An updated bounding box\n \"\"\"\n self.lower_left = np.maximum(self.lower_left, other.lower_left)\n self.upper_right = np.minimum(self.upper_right, other.upper_right)\n return self\n\n def __and__(self, other: BoundingBox) -> BoundingBox:\n new = BoundingBox(*self)\n new &= other\n return new\n\n def __ior__(self, other: BoundingBox) -> BoundingBox:\n \"\"\"Updates the box be the union of itself and another box\n\n Parameters\n ----------\n other : BoundingBox\n The box used to resize this box\n\n Returns\n -------\n An updated bounding box\n \"\"\"\n self.lower_left = np.minimum(self.lower_left, other.lower_left)\n self.upper_right = np.maximum(self.upper_right, other.upper_right)\n return self\n\n def __or__(self, other: BoundingBox) -> BoundingBox:\n new = BoundingBox(*self)\n new |= other\n return new\n\n def __contains__(self, other):\n \"\"\"Check whether or not a point or another bounding box is in the bounding box.\n\n For another bounding box to be in the parent it must lie fully inside of it.\n \"\"\"\n # test for a single point\n if isinstance(other, (tuple, list, np.ndarray)):\n point = other\n check_length(\"Point\", point, 3, 3)\n return all(point > self.lower_left) and all(point < self.upper_right)\n elif isinstance(other, BoundingBox):\n return all([p in self for p in [other.lower_left, other.upper_right]])\n else:\n raise TypeError(\n 
f\"Unable to determine if {other} is in the bounding box.\"\n f\" Expected a tuple or a bounding box, but {type(other)} given\"\n )\n\n @property\n def center(self) -> np.ndarray:\n return (self[0] + self[1]) / 2\n\n @property\n def lower_left(self) -> np.ndarray:\n return self[0]\n\n @lower_left.setter\n def lower_left(self, llc):\n check_length('lower_left', llc, 3, 3)\n self[0] = llc\n\n @property\n def upper_right(self) -> np.ndarray:\n return self[1]\n\n @upper_right.setter\n def upper_right(self, urc):\n check_length('upper_right', urc, 3, 3)\n self[1] = urc\n\n @property\n def volume(self) -> float:\n return np.abs(np.prod(self[1] - self[0]))\n\n @property\n def extent(self):\n return {\n \"xy\": (\n self.lower_left[0],\n self.upper_right[0],\n self.lower_left[1],\n self.upper_right[1],\n ),\n \"xz\": (\n self.lower_left[0],\n self.upper_right[0],\n self.lower_left[2],\n self.upper_right[2],\n ),\n \"yz\": (\n self.lower_left[1],\n self.upper_right[1],\n self.lower_left[2],\n self.upper_right[2],\n ),\n }\n\n @property\n def width(self):\n return self.upper_right - self.lower_left\n\n def expand(self, padding_distance: float, inplace: bool = False) -> BoundingBox:\n \"\"\"Returns an expanded bounding box\n\n Parameters\n ----------\n padding_distance : float\n The distance to enlarge the bounding box by\n inplace : bool\n Whether or not to return a new BoundingBox instance or to modify the\n current BoundingBox object.\n\n Returns\n -------\n An expanded bounding box\n \"\"\"\n if inplace:\n self[0] -= padding_distance\n self[1] += padding_distance\n return self\n else:\n return BoundingBox(self[0] - padding_distance, self[1] + padding_distance)\n\n @classmethod\n def infinite(cls) -> BoundingBox:\n \"\"\"Create an infinite box. Useful as a starting point for determining\n geometry bounds.\n\n Returns\n -------\n An infinitely large bounding box.\n \"\"\"\n infs = np.full((3,), np.inf)\n return cls(-infs, infs)\n", "path": "openmc/bounding_box.py"}]} | 2,229 | 306 |
gh_patches_debug_945 | rasdani/github-patches | git_diff | magenta__magenta-1793 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Retraining Onsets and Frames Drums model with E-GMD dataset
Hello,
I am trying to retrain the OaF model with the E-GMD dataset for drum transcription. I first downloaded the E-GMD dataset, which has its corresponding CSV file and a directory for each drummer, with subdirectories for the sessions.
I am trying to do the first step following the code in ```onsets_frames_transcription_create_tfrecords```, which I found to be:
```
onsets_frames_transcription_create_tfrecords \
--csv=".../e-gmd-v1.0.0/e-gmd-v1.0.0.csv" \
--output_directory=".../e-gmd-v1.0.0" \
--num_shards="0" \
--wav_dir=".../e-gmd-v1.0.0" \
--midi_dir=".../e-gmd-v1.0.0" \
--expected_splits="train,validation,test"
```
But I got the following error, and I don't know where it comes from:
```
2020-08-05 17:23:45.289023: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2020-08-05 17:23:45.289348: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
WARNING:tensorflow:From c:\users\carlos\anaconda3\lib\site-packages\tensorflow\python\compat\v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
WARNING:tensorflow:From c:\users\carlos\anaconda3\lib\site-packages\tensorflow\python\compat\v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
Traceback (most recent call last):
File "c:\users\carlos\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\carlos\anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Carlos\Anaconda3\Scripts\onsets_frames_transcription_create_tfrecords.exe\__main__.py", line 4, in <module>
ImportError: cannot import name 'console_entry_point'
```
I don't know if I have to change the paths of the wav and MIDI files so that the wav files are in one directory and the MIDI files in another, or whether the error comes from installation issues, versions, etc.
I am using Windows 10.
--- END ISSUE ---
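
The `ImportError` points at the installed console script rather than at the module itself: the `onsets_frames_transcription_create_tfrecords` command is a setuptools console-script wrapper that imports a `console_entry_point` function from this module. A rough sketch of what the generated wrapper effectively does (the exact auto-generated file varies by platform and pip version):

```python
# Auto-generated console-script wrapper (sketch). It fails at import time
# because the target module defines main() but no console_entry_point().
import sys
from magenta.models.onsets_frames_transcription.onsets_frames_transcription_create_tfrecords import (
    console_entry_point,
)

if __name__ == '__main__':
    sys.exit(console_entry_point())
```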
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py`
Content:
```
1 # Copyright 2020 The Magenta Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # Lint as: python3
16 r"""Beam job for creating tfrecord files from datasets.
17
18 Expects a CSV with the following fields: audio_filename, midi_filename, split
19
20 Usage:
21 onsets_frames_transcription_create_tfrecords \
22 --csv="/path/to/dataset.csv" \
23 --output_directory="/path/to/output" \
24 --num_shards="0" \
25 --wav_dir="/path/to/dataset/audio" \
26 --midi_dir="/path/to/dataset/midi" \
27 --expected_splits="train,validation,test"
28
29 """
30
31 import collections
32 import copy
33 import csv
34 import os
35
36 from absl import app
37 from absl import flags
38 from absl import logging
39
40 import apache_beam as beam
41 from apache_beam.metrics import Metrics
42 from magenta.models.onsets_frames_transcription import audio_label_data_utils
43 from note_seq import midi_io
44 from note_seq.protobuf import music_pb2
45 import tensorflow.compat.v1 as tf
46
47 tf.disable_v2_behavior()
48
49 FLAGS = flags.FLAGS
50
51 flags.DEFINE_string('csv', None, 'Path to dataset CSV')
52 flags.DEFINE_string('output_directory', None, 'Path to output_directory')
53 flags.DEFINE_string('wav_dir', None, 'Directory for wav files.')
54 flags.DEFINE_string('midi_dir', None, 'Directory for midi files.')
55 flags.DEFINE_integer('num_shards', 0, 'number of output shards')
56 flags.DEFINE_string('expected_splits', 'train,validation,test',
57 'Comma separated list of expected splits.')
58 flags.DEFINE_boolean(
59 'add_wav_glob', False,
60 'If true, will add * to end of wav paths and use all matching files.')
61 flags.DEFINE_list(
62 'pipeline_options', '--runner=DirectRunner',
63 'A comma-separated list of command line arguments to be used as options '
64 'for the Beam Pipeline.')
65
66
67 class CreateExampleDoFn(beam.DoFn):
68 """Splits wav and midi files for the dataset."""
69
70 def __init__(self, wav_dir, midi_dir, add_wav_glob,
71 *unused_args, **unused_kwargs):
72 self._wav_dir = wav_dir
73 self._midi_dir = midi_dir
74 self._add_wav_glob = add_wav_glob
75 super(CreateExampleDoFn, self).__init__(*unused_args, **unused_kwargs)
76
77 def process(self, paths):
78 midi_path, wav_path_base = paths
79
80 if self._add_wav_glob:
81 wav_paths = tf.io.gfile.glob(wav_path_base + '*')
82 else:
83 wav_paths = [wav_path_base]
84
85 if midi_path:
86 base_ns = midi_io.midi_file_to_note_sequence(midi_path)
87 base_ns.filename = midi_path
88 else:
89 base_ns = music_pb2.NoteSequence()
90
91 for wav_path in wav_paths:
92 logging.info('Creating Example %s:%s', midi_path, wav_path)
93 wav_data = tf.io.gfile.GFile(wav_path, 'rb').read()
94
95 ns = copy.deepcopy(base_ns)
96
97 # Use base names.
98 ns.id = '%s:%s' % (wav_path.replace(self._wav_dir, ''),
99 midi_path.replace(self._midi_dir, ''))
100
101 Metrics.counter('create_example', 'read_midi_wav').inc()
102
103 example = audio_label_data_utils.create_example(ns.id, ns, wav_data)
104
105 Metrics.counter('create_example', 'created_example').inc()
106 yield example
107
108
109 def main(argv):
110 del argv
111
112
113 flags.mark_flags_as_required(['csv', 'output_directory'])
114
115 tf.io.gfile.makedirs(FLAGS.output_directory)
116
117 with tf.io.gfile.GFile(FLAGS.csv) as f:
118 reader = csv.DictReader(f)
119
120 splits = collections.defaultdict(list)
121 for row in reader:
122 splits[row['split']].append(
123 (os.path.join(FLAGS.midi_dir, row['midi_filename']),
124 os.path.join(FLAGS.wav_dir, row['audio_filename'])))
125
126 if sorted(splits.keys()) != sorted(FLAGS.expected_splits.split(',')):
127 raise ValueError('Got unexpected set of splits: %s' % list(splits.keys()))
128
129 pipeline_options = beam.options.pipeline_options.PipelineOptions(
130 FLAGS.pipeline_options)
131 with beam.Pipeline(options=pipeline_options) as p:
132 for split in splits:
133 split_p = p | 'prepare_split_%s' % split >> beam.Create(splits[split])
134 split_p |= 'create_examples_%s' % split >> beam.ParDo(
135 CreateExampleDoFn(FLAGS.wav_dir, FLAGS.midi_dir, FLAGS.add_wav_glob))
136 split_p |= 'write_%s' % split >> beam.io.WriteToTFRecord(
137 os.path.join(FLAGS.output_directory, '%s.tfrecord' % split),
138 coder=beam.coders.ProtoCoder(tf.train.Example),
139 num_shards=FLAGS.num_shards)
140
141
142 if __name__ == '__main__':
143 app.run(main)
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py b/magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py
--- a/magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py
+++ b/magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py
@@ -139,5 +139,10 @@
num_shards=FLAGS.num_shards)
-if __name__ == '__main__':
+def console_entry_point():
+ tf.disable_v2_behavior()
app.run(main)
+
+
+if __name__ == '__main__':
+ console_entry_point()
| {"golden_diff": "diff --git a/magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py b/magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py\n--- a/magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py\n+++ b/magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py\n@@ -139,5 +139,10 @@\n num_shards=FLAGS.num_shards)\n \n \n-if __name__ == '__main__':\n+def console_entry_point():\n+ tf.disable_v2_behavior()\n app.run(main)\n+\n+\n+if __name__ == '__main__':\n+ console_entry_point()\n", "issue": "Retraining Onsets and Frames Drums model with E-GMD dataset\nHello,\r\n\r\nI am trying to retrain OaF model with the E-GMD dataset for drums transcription. I first downloaded the E-GMD dataset which has its corresponding csv file and a directoy for each drummer and subdirectories with the sessions.\r\n\r\nI am trying to do the first step following the code in ```onsets_frames_transcription_create_tfrecords``` which I found that it is:\r\n\r\n```\r\nonsets_frames_transcription_create_tfrecords \\\r\n --csv=\".../e-gmd-v1.0.0/e-gmd-v1.0.0.csv\" \\\r\n --output_directory=\".../e-gmd-v1.0.0\" \\\r\n --num_shards=\"0\" \\\r\n --wav_dir=\".../e-gmd-v1.0.0\" \\\r\n --midi_dir=\".../e-gmd-v1.0.0\" \\\r\n --expected_splits=\"train,validation,test\"\r\n```\r\n\r\nBut i got the following error which I don't know where does it come from:\r\n\r\n```\r\n2020-08-05 17:23:45.289023: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found\r\n2020-08-05 17:23:45.289348: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\nWARNING:tensorflow:From c:\\users\\carlos\\anaconda3\\lib\\site-packages\\tensorflow\\python\\compat\\v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nnon-resource variables are not supported in the long term\r\nWARNING:tensorflow:From c:\\users\\carlos\\anaconda3\\lib\\site-packages\\tensorflow\\python\\compat\\v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nnon-resource variables are not supported in the long term\r\nTraceback (most recent call last):\r\n File \"c:\\users\\carlos\\anaconda3\\lib\\runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"c:\\users\\carlos\\anaconda3\\lib\\runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\Carlos\\Anaconda3\\Scripts\\onsets_frames_transcription_create_tfrecords.exe\\__main__.py\", line 4, in <module>\r\nImportError: cannot import name 'console_entry_point'\r\n```\r\nI don't know if I have to change the paths of the wav and MIDI files in order to have the wav files in a directory and the MIDI files in other directory or the error comes from installation issues, versions, etc.\r\n\r\nI am using Winows 10.\n", "before_files": [{"content": "# Copyright 2020 The Magenta Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by 
applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Lint as: python3\nr\"\"\"Beam job for creating tfrecord files from datasets.\n\nExpects a CSV with the following fields: audio_filename, midi_filename, split\n\nUsage:\nonsets_frames_transcription_create_tfrecords \\\n --csv=\"/path/to/dataset.csv\" \\\n --output_directory=\"/path/to/output\" \\\n --num_shards=\"0\" \\\n --wav_dir=\"/path/to/dataset/audio\" \\\n --midi_dir=\"/path/to/dataset/midi\" \\\n --expected_splits=\"train,validation,test\"\n\n\"\"\"\n\nimport collections\nimport copy\nimport csv\nimport os\n\nfrom absl import app\nfrom absl import flags\nfrom absl import logging\n\nimport apache_beam as beam\nfrom apache_beam.metrics import Metrics\nfrom magenta.models.onsets_frames_transcription import audio_label_data_utils\nfrom note_seq import midi_io\nfrom note_seq.protobuf import music_pb2\nimport tensorflow.compat.v1 as tf\n\ntf.disable_v2_behavior()\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string('csv', None, 'Path to dataset CSV')\nflags.DEFINE_string('output_directory', None, 'Path to output_directory')\nflags.DEFINE_string('wav_dir', None, 'Directory for wav files.')\nflags.DEFINE_string('midi_dir', None, 'Directory for midi files.')\nflags.DEFINE_integer('num_shards', 0, 'number of output shards')\nflags.DEFINE_string('expected_splits', 'train,validation,test',\n 'Comma separated list of expected splits.')\nflags.DEFINE_boolean(\n 'add_wav_glob', False,\n 'If true, will add * to end of wav paths and use all matching files.')\nflags.DEFINE_list(\n 'pipeline_options', '--runner=DirectRunner',\n 'A comma-separated list of command line arguments to be used as options '\n 'for the Beam Pipeline.')\n\n\nclass CreateExampleDoFn(beam.DoFn):\n \"\"\"Splits wav and midi files for the dataset.\"\"\"\n\n def __init__(self, wav_dir, midi_dir, add_wav_glob,\n *unused_args, **unused_kwargs):\n self._wav_dir = wav_dir\n self._midi_dir = midi_dir\n self._add_wav_glob = add_wav_glob\n super(CreateExampleDoFn, self).__init__(*unused_args, **unused_kwargs)\n\n def process(self, paths):\n midi_path, wav_path_base = paths\n\n if self._add_wav_glob:\n wav_paths = tf.io.gfile.glob(wav_path_base + '*')\n else:\n wav_paths = [wav_path_base]\n\n if midi_path:\n base_ns = midi_io.midi_file_to_note_sequence(midi_path)\n base_ns.filename = midi_path\n else:\n base_ns = music_pb2.NoteSequence()\n\n for wav_path in wav_paths:\n logging.info('Creating Example %s:%s', midi_path, wav_path)\n wav_data = tf.io.gfile.GFile(wav_path, 'rb').read()\n\n ns = copy.deepcopy(base_ns)\n\n # Use base names.\n ns.id = '%s:%s' % (wav_path.replace(self._wav_dir, ''),\n midi_path.replace(self._midi_dir, ''))\n\n Metrics.counter('create_example', 'read_midi_wav').inc()\n\n example = audio_label_data_utils.create_example(ns.id, ns, wav_data)\n\n Metrics.counter('create_example', 'created_example').inc()\n yield example\n\n\ndef main(argv):\n del argv\n\n\n flags.mark_flags_as_required(['csv', 'output_directory'])\n\n tf.io.gfile.makedirs(FLAGS.output_directory)\n\n with tf.io.gfile.GFile(FLAGS.csv) as f:\n reader = csv.DictReader(f)\n\n splits = collections.defaultdict(list)\n for row in reader:\n splits[row['split']].append(\n (os.path.join(FLAGS.midi_dir, row['midi_filename']),\n os.path.join(FLAGS.wav_dir, 
row['audio_filename'])))\n\n if sorted(splits.keys()) != sorted(FLAGS.expected_splits.split(',')):\n raise ValueError('Got unexpected set of splits: %s' % list(splits.keys()))\n\n pipeline_options = beam.options.pipeline_options.PipelineOptions(\n FLAGS.pipeline_options)\n with beam.Pipeline(options=pipeline_options) as p:\n for split in splits:\n split_p = p | 'prepare_split_%s' % split >> beam.Create(splits[split])\n split_p |= 'create_examples_%s' % split >> beam.ParDo(\n CreateExampleDoFn(FLAGS.wav_dir, FLAGS.midi_dir, FLAGS.add_wav_glob))\n split_p |= 'write_%s' % split >> beam.io.WriteToTFRecord(\n os.path.join(FLAGS.output_directory, '%s.tfrecord' % split),\n coder=beam.coders.ProtoCoder(tf.train.Example),\n num_shards=FLAGS.num_shards)\n\n\nif __name__ == '__main__':\n app.run(main)\n", "path": "magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py"}], "after_files": [{"content": "# Copyright 2020 The Magenta Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Lint as: python3\nr\"\"\"Beam job for creating tfrecord files from datasets.\n\nExpects a CSV with the following fields: audio_filename, midi_filename, split\n\nUsage:\nonsets_frames_transcription_create_tfrecords \\\n --csv=\"/path/to/dataset.csv\" \\\n --output_directory=\"/path/to/output\" \\\n --num_shards=\"0\" \\\n --wav_dir=\"/path/to/dataset/audio\" \\\n --midi_dir=\"/path/to/dataset/midi\" \\\n --expected_splits=\"train,validation,test\"\n\n\"\"\"\n\nimport collections\nimport copy\nimport csv\nimport os\n\nfrom absl import app\nfrom absl import flags\nfrom absl import logging\n\nimport apache_beam as beam\nfrom apache_beam.metrics import Metrics\nfrom magenta.models.onsets_frames_transcription import audio_label_data_utils\nfrom note_seq import midi_io\nfrom note_seq.protobuf import music_pb2\nimport tensorflow.compat.v1 as tf\n\ntf.disable_v2_behavior()\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string('csv', None, 'Path to dataset CSV')\nflags.DEFINE_string('output_directory', None, 'Path to output_directory')\nflags.DEFINE_string('wav_dir', None, 'Directory for wav files.')\nflags.DEFINE_string('midi_dir', None, 'Directory for midi files.')\nflags.DEFINE_integer('num_shards', 0, 'number of output shards')\nflags.DEFINE_string('expected_splits', 'train,validation,test',\n 'Comma separated list of expected splits.')\nflags.DEFINE_boolean(\n 'add_wav_glob', False,\n 'If true, will add * to end of wav paths and use all matching files.')\nflags.DEFINE_list(\n 'pipeline_options', '--runner=DirectRunner',\n 'A comma-separated list of command line arguments to be used as options '\n 'for the Beam Pipeline.')\n\n\nclass CreateExampleDoFn(beam.DoFn):\n \"\"\"Splits wav and midi files for the dataset.\"\"\"\n\n def __init__(self, wav_dir, midi_dir, add_wav_glob,\n *unused_args, **unused_kwargs):\n self._wav_dir = wav_dir\n self._midi_dir = midi_dir\n self._add_wav_glob = add_wav_glob\n super(CreateExampleDoFn, self).__init__(*unused_args, **unused_kwargs)\n\n def process(self, 
paths):\n midi_path, wav_path_base = paths\n\n if self._add_wav_glob:\n wav_paths = tf.io.gfile.glob(wav_path_base + '*')\n else:\n wav_paths = [wav_path_base]\n\n if midi_path:\n base_ns = midi_io.midi_file_to_note_sequence(midi_path)\n base_ns.filename = midi_path\n else:\n base_ns = music_pb2.NoteSequence()\n\n for wav_path in wav_paths:\n logging.info('Creating Example %s:%s', midi_path, wav_path)\n wav_data = tf.io.gfile.GFile(wav_path, 'rb').read()\n\n ns = copy.deepcopy(base_ns)\n\n # Use base names.\n ns.id = '%s:%s' % (wav_path.replace(self._wav_dir, ''),\n midi_path.replace(self._midi_dir, ''))\n\n Metrics.counter('create_example', 'read_midi_wav').inc()\n\n example = audio_label_data_utils.create_example(ns.id, ns, wav_data)\n\n Metrics.counter('create_example', 'created_example').inc()\n yield example\n\n\ndef main(argv):\n del argv\n\n\n flags.mark_flags_as_required(['csv', 'output_directory'])\n\n tf.io.gfile.makedirs(FLAGS.output_directory)\n\n with tf.io.gfile.GFile(FLAGS.csv) as f:\n reader = csv.DictReader(f)\n\n splits = collections.defaultdict(list)\n for row in reader:\n splits[row['split']].append(\n (os.path.join(FLAGS.midi_dir, row['midi_filename']),\n os.path.join(FLAGS.wav_dir, row['audio_filename'])))\n\n if sorted(splits.keys()) != sorted(FLAGS.expected_splits.split(',')):\n raise ValueError('Got unexpected set of splits: %s' % list(splits.keys()))\n\n pipeline_options = beam.options.pipeline_options.PipelineOptions(\n FLAGS.pipeline_options)\n with beam.Pipeline(options=pipeline_options) as p:\n for split in splits:\n split_p = p | 'prepare_split_%s' % split >> beam.Create(splits[split])\n split_p |= 'create_examples_%s' % split >> beam.ParDo(\n CreateExampleDoFn(FLAGS.wav_dir, FLAGS.midi_dir, FLAGS.add_wav_glob))\n split_p |= 'write_%s' % split >> beam.io.WriteToTFRecord(\n os.path.join(FLAGS.output_directory, '%s.tfrecord' % split),\n coder=beam.coders.ProtoCoder(tf.train.Example),\n num_shards=FLAGS.num_shards)\n\n\ndef console_entry_point():\n tf.disable_v2_behavior()\n app.run(main)\n\n\nif __name__ == '__main__':\n console_entry_point()\n", "path": "magenta/models/onsets_frames_transcription/onsets_frames_transcription_create_tfrecords.py"}]} | 2,469 | 150 |
gh_patches_debug_57650 | rasdani/github-patches | git_diff | facebookresearch__ParlAI-1956 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Quickstart AttributeError: 'HogwildWorld' object has no attribute 'acts'
**Bug description**
When going through the ParlAI [quickstart](https://parl.ai/docs/tutorial_quick.html#install), I got the following error:
``` python
Traceback (most recent call last):
File "examples/interactive.py", line 18, in <module>
interactive(opt, print_parser=parser)
File "/root/ParlAI/parlai/scripts/interactive.py", line 68, in interactive
agent = create_agent(opt, requireModelExists=True)
File "/root/ParlAI/parlai/core/agents.py", line 683, in create_agent
model = load_agent_module(opt)
File "/root/ParlAI/parlai/core/agents.py", line 548, in load_agent_module
return model_class(new_opt)
File "/root/ParlAI/parlai/agents/memnn/memnn.py", line 86, in __init__
super().__init__(opt, shared)
File "/root/ParlAI/parlai/core/torch_ranker_agent.py", line 135, in __init__
super().__init__(opt, shared)
File "/root/ParlAI/parlai/core/torch_agent.py", line 737, in __init__
self.set_interactive_mode(opt['interactive_mode'], shared)
File "/root/ParlAI/parlai/core/torch_ranker_agent.py", line 206, in set_interactive_mode
path = self.get_task_candidates_path()
File "/root/ParlAI/parlai/core/torch_ranker_agent.py", line 230, in get_task_candidates_path
build_cands(opt)
File "/root/ParlAI/parlai/scripts/build_candidates.py", line 47, in build_cands
acts = world.get_acts()[0]
File "/root/ParlAI/parlai/core/worlds.py", line 162, in get_acts
return self.acts
AttributeError: 'HogwildWorld' object has no attribute 'acts'
```
**While running**
```python
python examples/interactive.py -mf /tmp/babi_memnn -ecands vocab
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parlai/scripts/build_candidates.py`
Content:
```
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6 """Build the candidate responses for a retrieval model.
7
8 Examples
9 --------
10
11 .. code-block:: shell
12
13 python build_candidates.py -t convai2 --outfile /tmp/cands.txt
14 """
15
16 from parlai.core.params import ParlaiParser
17 from parlai.agents.repeat_label.repeat_label import RepeatLabelAgent
18 from parlai.core.worlds import create_task
19 from parlai.core.utils import TimeLogger
20 import random
21 import tempfile
22
23
24 def build_cands(opt):
25 # create repeat label agent and assign it to the specified task
26 agent = RepeatLabelAgent(opt)
27 world = create_task(opt, agent)
28 if opt['outfile'] is None:
29 outfile = tempfile.mkstemp(
30 prefix='{}_{}_'.format(opt['task'], opt['datatype']), suffix='.txt'
31 )[1]
32 else:
33 outfile = opt['outfile']
34
35 if opt.get('num_examples', -1) == -1:
36 num_examples = world.num_examples()
37 else:
38 num_examples = opt['num_examples']
39 log_timer = TimeLogger()
40
41 print('[ starting to build candidates from task.. (ex:' + str(num_examples) + ')]')
42 print('[ saving output to {} ]'.format(outfile))
43 cands = []
44 for _ in range(num_examples):
45 world.parley()
46 # We get the acts of the first agent, which is the teacher.
47 acts = world.get_acts()[0]
48 if isinstance(acts, dict):
49 # We turn into a batch of 1 example, in case batching is being used.
50 acts = [acts]
51 for a in acts:
52 candidate = a.get('labels', a.get('eval_labels', None))
53 if candidate is not None:
54 candidate = candidate[0]
55 cands.append(candidate)
56 if log_timer.time() > opt['log_every_n_secs']:
57 text, _log = log_timer.log(world.total_parleys, world.num_examples())
58 print(text)
59 if world.epoch_done():
60 print('EPOCH DONE')
61 break
62 fw = open(outfile, 'w')
63 fw.write('\n'.join(cands))
64 fw.close()
65
66
67 def main():
68 random.seed(42)
69 # Get command line arguments
70 parser = ParlaiParser()
71 parser.add_argument(
72 '-n',
73 '--num-examples',
74 default=-1,
75 type=int,
76 help='Total number of exs to convert, -1 to convert all examples',
77 )
78 parser.add_argument(
79 '-of',
80 '--outfile',
81 default=None,
82 type=str,
83 help='Output file where to save, by default will be created in /tmp',
84 )
85 parser.add_argument('-ltim', '--log-every-n-secs', type=float, default=2)
86 parser.set_defaults(datatype='train:evalmode')
87 opt = parser.parse_args()
88 build_cands(opt)
89
90
91 if __name__ == '__main__':
92 main()
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/parlai/scripts/build_candidates.py b/parlai/scripts/build_candidates.py
--- a/parlai/scripts/build_candidates.py
+++ b/parlai/scripts/build_candidates.py
@@ -23,6 +23,9 @@
def build_cands(opt):
# create repeat label agent and assign it to the specified task
+ if opt['numthreads'] > 1:
+ # Broken in hogwild mode. Just fall back to single processing mode
+ opt['numthreads'] = 1
agent = RepeatLabelAgent(opt)
world = create_task(opt, agent)
if opt['outfile'] is None:
| {"golden_diff": "diff --git a/parlai/scripts/build_candidates.py b/parlai/scripts/build_candidates.py\n--- a/parlai/scripts/build_candidates.py\n+++ b/parlai/scripts/build_candidates.py\n@@ -23,6 +23,9 @@\n \n def build_cands(opt):\n # create repeat label agent and assign it to the specified task\n+ if opt['numthreads'] > 1:\n+ # Broken in hogwild mode. Just fall back to single processing mode\n+ opt['numthreads'] = 1\n agent = RepeatLabelAgent(opt)\n world = create_task(opt, agent)\n if opt['outfile'] is None:\n", "issue": "Quickstart AttributeError: 'HogwildWorld' object has no attribute 'acts'\n**Bug description**\r\nWhen going through the ParlAI [quickstart](https://parl.ai/docs/tutorial_quick.html#install), I got the following error:\r\n\r\n``` python\r\nTraceback (most recent call last):\r\n File \"examples/interactive.py\", line 18, in <module>\r\n interactive(opt, print_parser=parser)\r\n File \"/root/ParlAI/parlai/scripts/interactive.py\", line 68, in interactive\r\n agent = create_agent(opt, requireModelExists=True)\r\n File \"/root/ParlAI/parlai/core/agents.py\", line 683, in create_agent\r\n model = load_agent_module(opt)\r\n File \"/root/ParlAI/parlai/core/agents.py\", line 548, in load_agent_module\r\n return model_class(new_opt)\r\n File \"/root/ParlAI/parlai/agents/memnn/memnn.py\", line 86, in __init__\r\n super().__init__(opt, shared)\r\n File \"/root/ParlAI/parlai/core/torch_ranker_agent.py\", line 135, in __init__\r\n super().__init__(opt, shared)\r\n File \"/root/ParlAI/parlai/core/torch_agent.py\", line 737, in __init__\r\n self.set_interactive_mode(opt['interactive_mode'], shared)\r\n File \"/root/ParlAI/parlai/core/torch_ranker_agent.py\", line 206, in set_interactive_mode\r\n path = self.get_task_candidates_path()\r\n File \"/root/ParlAI/parlai/core/torch_ranker_agent.py\", line 230, in get_task_candidates_path\r\n build_cands(opt)\r\n File \"/root/ParlAI/parlai/scripts/build_candidates.py\", line 47, in build_cands\r\n acts = world.get_acts()[0]\r\n File \"/root/ParlAI/parlai/core/worlds.py\", line 162, in get_acts\r\n return self.acts\r\nAttributeError: 'HogwildWorld' object has no attribute 'acts'\r\n```\r\n\r\n**While running**\r\n```python\r\npython examples/interactive.py -mf /tmp/babi_memnn -ecands vocab\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\"\"\"Build the candidate responses for a retrieval model.\n\nExamples\n--------\n\n.. code-block:: shell\n\n python build_candidates.py -t convai2 --outfile /tmp/cands.txt\n\"\"\"\n\nfrom parlai.core.params import ParlaiParser\nfrom parlai.agents.repeat_label.repeat_label import RepeatLabelAgent\nfrom parlai.core.worlds import create_task\nfrom parlai.core.utils import TimeLogger\nimport random\nimport tempfile\n\n\ndef build_cands(opt):\n # create repeat label agent and assign it to the specified task\n agent = RepeatLabelAgent(opt)\n world = create_task(opt, agent)\n if opt['outfile'] is None:\n outfile = tempfile.mkstemp(\n prefix='{}_{}_'.format(opt['task'], opt['datatype']), suffix='.txt'\n )[1]\n else:\n outfile = opt['outfile']\n\n if opt.get('num_examples', -1) == -1:\n num_examples = world.num_examples()\n else:\n num_examples = opt['num_examples']\n log_timer = TimeLogger()\n\n print('[ starting to build candidates from task.. 
(ex:' + str(num_examples) + ')]')\n print('[ saving output to {} ]'.format(outfile))\n cands = []\n for _ in range(num_examples):\n world.parley()\n # We get the acts of the first agent, which is the teacher.\n acts = world.get_acts()[0]\n if isinstance(acts, dict):\n # We turn into a batch of 1 example, in case batching is being used.\n acts = [acts]\n for a in acts:\n candidate = a.get('labels', a.get('eval_labels', None))\n if candidate is not None:\n candidate = candidate[0]\n cands.append(candidate)\n if log_timer.time() > opt['log_every_n_secs']:\n text, _log = log_timer.log(world.total_parleys, world.num_examples())\n print(text)\n if world.epoch_done():\n print('EPOCH DONE')\n break\n fw = open(outfile, 'w')\n fw.write('\\n'.join(cands))\n fw.close()\n\n\ndef main():\n random.seed(42)\n # Get command line arguments\n parser = ParlaiParser()\n parser.add_argument(\n '-n',\n '--num-examples',\n default=-1,\n type=int,\n help='Total number of exs to convert, -1 to convert all examples',\n )\n parser.add_argument(\n '-of',\n '--outfile',\n default=None,\n type=str,\n help='Output file where to save, by default will be created in /tmp',\n )\n parser.add_argument('-ltim', '--log-every-n-secs', type=float, default=2)\n parser.set_defaults(datatype='train:evalmode')\n opt = parser.parse_args()\n build_cands(opt)\n\n\nif __name__ == '__main__':\n main()\n", "path": "parlai/scripts/build_candidates.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\"\"\"Build the candidate responses for a retrieval model.\n\nExamples\n--------\n\n.. code-block:: shell\n\n python build_candidates.py -t convai2 --outfile /tmp/cands.txt\n\"\"\"\n\nfrom parlai.core.params import ParlaiParser\nfrom parlai.agents.repeat_label.repeat_label import RepeatLabelAgent\nfrom parlai.core.worlds import create_task\nfrom parlai.core.utils import TimeLogger\nimport random\nimport tempfile\n\n\ndef build_cands(opt):\n # create repeat label agent and assign it to the specified task\n if opt['numthreads'] > 1:\n # Broken in hogwild mode. Just fall back to single processing mode\n opt['numthreads'] = 1\n agent = RepeatLabelAgent(opt)\n world = create_task(opt, agent)\n if opt['outfile'] is None:\n outfile = tempfile.mkstemp(\n prefix='{}_{}_'.format(opt['task'], opt['datatype']), suffix='.txt'\n )[1]\n else:\n outfile = opt['outfile']\n\n if opt.get('num_examples', -1) == -1:\n num_examples = world.num_examples()\n else:\n num_examples = opt['num_examples']\n log_timer = TimeLogger()\n\n print('[ starting to build candidates from task.. 
(ex:' + str(num_examples) + ')]')\n print('[ saving output to {} ]'.format(outfile))\n cands = []\n for _ in range(num_examples):\n world.parley()\n # We get the acts of the first agent, which is the teacher.\n acts = world.get_acts()[0]\n if isinstance(acts, dict):\n # We turn into a batch of 1 example, in case batching is being used.\n acts = [acts]\n for a in acts:\n candidate = a.get('labels', a.get('eval_labels', None))\n if candidate is not None:\n candidate = candidate[0]\n cands.append(candidate)\n if log_timer.time() > opt['log_every_n_secs']:\n text, _log = log_timer.log(world.total_parleys, world.num_examples())\n print(text)\n if world.epoch_done():\n print('EPOCH DONE')\n break\n fw = open(outfile, 'w')\n fw.write('\\n'.join(cands))\n fw.close()\n\n\ndef main():\n random.seed(42)\n # Get command line arguments\n parser = ParlaiParser()\n parser.add_argument(\n '-n',\n '--num-examples',\n default=-1,\n type=int,\n help='Total number of exs to convert, -1 to convert all examples',\n )\n parser.add_argument(\n '-of',\n '--outfile',\n default=None,\n type=str,\n help='Output file where to save, by default will be created in /tmp',\n )\n parser.add_argument('-ltim', '--log-every-n-secs', type=float, default=2)\n parser.set_defaults(datatype='train:evalmode')\n opt = parser.parse_args()\n build_cands(opt)\n\n\nif __name__ == '__main__':\n main()\n", "path": "parlai/scripts/build_candidates.py"}]} | 1,619 | 143 |
gh_patches_debug_12054 | rasdani/github-patches | git_diff | quantumlib__Cirq-4816 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Substates are separated after measurements, when measurements are ignored
**Description of the issue**
The density matrix simulator separates qubit states after measurement when split_untangled_states=True. However, when ignore_measurement_results=True this should not happen, as that setting turns the measurement into a dephasing operation and does not make the state separable.
For instance, when measuring one qubit of a Bell state, the resulting DM should be 0.5 |00><00| + 0.5 |11><11|. However, separating those states (partial tracing each qubit) and re-kronning them gives 0.25 of each; i.e. it causes each qubit to be 0.5 |0><0| + 0.5 |1><1| independently. Therefore we need to avoid separating states after measurements if ignore_measurement_results=True.
**How to reproduce the issue**
```python
def test_ignore_measurements_remains_entangled():
q0, q1 = cirq.LineQubit.range(2)
simulator1 = cirq.DensityMatrixSimulator(
ignore_measurement_results=True, split_untangled_states=False
)
simulator2 = cirq.DensityMatrixSimulator(
ignore_measurement_results=True, split_untangled_states=True
)
circuit = cirq.Circuit(
cirq.H(q0),
cirq.CX(q0, q1),
cirq.measure(q0),
)
result1 = simulator1.simulate(circuit)
result2 = simulator2.simulate(circuit)
np.testing.assert_almost_equal(result2.final_density_matrix, result1.final_density_matrix)
```
--- END ISSUE ---
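
A minimal numpy sketch of the dephasing argument above (it assumes the standard computational-basis ordering |00>, |01>, |10>, |11>):

```python
import numpy as np

# Dephased Bell state: 0.5 |00><00| + 0.5 |11><11|
bell_dephased = np.diag([0.5, 0.0, 0.0, 0.5])

# Partial trace over either qubit gives the maximally mixed single-qubit state.
single_qubit = np.diag([0.5, 0.5])

# Re-kronning the separated parts yields diag(0.25, 0.25, 0.25, 0.25),
# which is NOT the dephased Bell state, so the split loses the correlation.
separated = np.kron(single_qubit, single_qubit)

print(np.allclose(bell_dephased, separated))  # False
```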
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cirq-core/cirq/sim/act_on_args_container.py`
Content:
```
1 # Copyright 2021 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from collections import abc
16 from typing import (
17 Dict,
18 TYPE_CHECKING,
19 Generic,
20 Sequence,
21 Optional,
22 Iterator,
23 Any,
24 Tuple,
25 List,
26 Union,
27 )
28
29 import numpy as np
30
31 from cirq import ops, protocols
32 from cirq.sim.operation_target import OperationTarget
33 from cirq.sim.simulator import (
34 TActOnArgs,
35 )
36
37 if TYPE_CHECKING:
38 import cirq
39
40
41 class ActOnArgsContainer(
42 Generic[TActOnArgs],
43 OperationTarget[TActOnArgs],
44 abc.Mapping,
45 ):
46 """A container for a `Qid`-to-`ActOnArgs` dictionary."""
47
48 def __init__(
49 self,
50 args: Dict[Optional['cirq.Qid'], TActOnArgs],
51 qubits: Sequence['cirq.Qid'],
52 split_untangled_states: bool,
53 log_of_measurement_results: Dict[str, Any],
54 ):
55 """Initializes the class.
56
57 Args:
58 args: The `ActOnArgs` dictionary. This will not be copied; the
59 original reference will be kept here.
60 qubits: The canonical ordering of qubits.
61 split_untangled_states: If True, optimizes operations by running
62 unentangled qubit sets independently and merging those states
63 at the end.
64 log_of_measurement_results: A mutable object that measurements are
65 being recorded into.
66 """
67 self.args = args
68 self._qubits = tuple(qubits)
69 self.split_untangled_states = split_untangled_states
70 self._log_of_measurement_results = log_of_measurement_results
71
72 def create_merged_state(self) -> TActOnArgs:
73 if not self.split_untangled_states:
74 return self.args[None]
75 final_args = self.args[None]
76 for args in set([self.args[k] for k in self.args.keys() if k is not None]):
77 final_args = final_args.kronecker_product(args)
78 return final_args.transpose_to_qubit_order(self.qubits)
79
80 def _act_on_fallback_(
81 self,
82 action: Union['cirq.Operation', 'cirq.Gate'],
83 qubits: Sequence['cirq.Qid'],
84 allow_decompose: bool = True,
85 ) -> bool:
86 gate = action.gate if isinstance(action, ops.Operation) else action
87
88 if isinstance(gate, ops.IdentityGate):
89 return True
90
91 if isinstance(gate, ops.SwapPowGate) and gate.exponent % 2 == 1 and gate.global_shift == 0:
92 q0, q1 = qubits
93 args0 = self.args[q0]
94 args1 = self.args[q1]
95 if args0 is args1:
96 args0.swap(q0, q1, inplace=True)
97 else:
98 self.args[q0] = args1.rename(q1, q0, inplace=True)
99 self.args[q1] = args0.rename(q0, q1, inplace=True)
100 return True
101
102 # Go through the op's qubits and join any disparate ActOnArgs states
103 # into a new combined state.
104 op_args_opt: Optional[TActOnArgs] = None
105 for q in qubits:
106 if op_args_opt is None:
107 op_args_opt = self.args[q]
108 elif q not in op_args_opt.qubits:
109 op_args_opt = op_args_opt.kronecker_product(self.args[q])
110 op_args = op_args_opt or self.args[None]
111
112 # (Backfill the args map with the new value)
113 for q in op_args.qubits:
114 self.args[q] = op_args
115
116 # Act on the args with the operation
117 act_on_qubits = qubits if isinstance(action, ops.Gate) else None
118 protocols.act_on(action, op_args, act_on_qubits, allow_decompose=allow_decompose)
119
120 # Decouple any measurements or resets
121 if self.split_untangled_states and isinstance(
122 gate, (ops.MeasurementGate, ops.ResetChannel)
123 ):
124 for q in qubits:
125 q_args, op_args = op_args.factor((q,), validate=False)
126 self.args[q] = q_args
127
128 # (Backfill the args map with the new value)
129 for q in op_args.qubits:
130 self.args[q] = op_args
131 return True
132
133 def copy(self) -> 'cirq.ActOnArgsContainer[TActOnArgs]':
134 logs = self.log_of_measurement_results.copy()
135 copies = {a: a.copy() for a in set(self.args.values())}
136 for copy in copies.values():
137 copy._log_of_measurement_results = logs
138 args = {q: copies[a] for q, a in self.args.items()}
139 return ActOnArgsContainer(args, self.qubits, self.split_untangled_states, logs)
140
141 @property
142 def qubits(self) -> Tuple['cirq.Qid', ...]:
143 return self._qubits
144
145 @property
146 def log_of_measurement_results(self) -> Dict[str, Any]:
147 return self._log_of_measurement_results
148
149 def sample(
150 self,
151 qubits: List['cirq.Qid'],
152 repetitions: int = 1,
153 seed: 'cirq.RANDOM_STATE_OR_SEED_LIKE' = None,
154 ) -> np.ndarray:
155 columns = []
156 selected_order: List[ops.Qid] = []
157 q_set = set(qubits)
158 for v in dict.fromkeys(self.args.values()):
159 qs = [q for q in v.qubits if q in q_set]
160 if any(qs):
161 column = v.sample(qs, repetitions, seed)
162 columns.append(column)
163 selected_order += qs
164 stacked = np.column_stack(columns)
165 qubit_map = {q: i for i, q in enumerate(selected_order)}
166 index_order = [qubit_map[q] for q in qubits]
167 return stacked[:, index_order]
168
169 def __getitem__(self, item: Optional['cirq.Qid']) -> TActOnArgs:
170 return self.args[item]
171
172 def __len__(self) -> int:
173 return len(self.args)
174
175 def __iter__(self) -> Iterator[Optional['cirq.Qid']]:
176 return iter(self.args)
177
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cirq-core/cirq/sim/act_on_args_container.py b/cirq-core/cirq/sim/act_on_args_container.py
--- a/cirq-core/cirq/sim/act_on_args_container.py
+++ b/cirq-core/cirq/sim/act_on_args_container.py
@@ -118,8 +118,9 @@
protocols.act_on(action, op_args, act_on_qubits, allow_decompose=allow_decompose)
# Decouple any measurements or resets
- if self.split_untangled_states and isinstance(
- gate, (ops.MeasurementGate, ops.ResetChannel)
+ if self.split_untangled_states and (
+ isinstance(gate, ops.ResetChannel)
+ or (isinstance(gate, ops.MeasurementGate) and not op_args.ignore_measurement_results)
):
for q in qubits:
q_args, op_args = op_args.factor((q,), validate=False)
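
The added clause makes the factoring step conditional on `op_args.ignore_measurement_results`: when measurement results are ignored, the measurement acts as a dephasing channel rather than a collapse, so the substates stay entangled and must not be split apart. Below is a minimal sketch of the behaviour the patch restores, adapted from the reproduction in the issue; it assumes a cirq build that exposes the `ignore_measurement_results` and `split_untangled_states` simulator flags described there.

```python
import numpy as np
import cirq

q0, q1 = cirq.LineQubit.range(2)
# Bell state followed by a measurement whose result is ignored (pure dephasing).
circuit = cirq.Circuit(cirq.H(q0), cirq.CX(q0, q1), cirq.measure(q0))

sim_joint = cirq.DensityMatrixSimulator(
    ignore_measurement_results=True, split_untangled_states=False
)
sim_split = cirq.DensityMatrixSimulator(
    ignore_measurement_results=True, split_untangled_states=True
)

# With the patch, both simulators return 0.5|00><00| + 0.5|11><11|; before it,
# the splitting simulator factored the measured qubit out and produced a
# product state instead.
np.testing.assert_almost_equal(
    sim_split.simulate(circuit).final_density_matrix,
    sim_joint.simulate(circuit).final_density_matrix,
)
```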
| {"golden_diff": "diff --git a/cirq-core/cirq/sim/act_on_args_container.py b/cirq-core/cirq/sim/act_on_args_container.py\n--- a/cirq-core/cirq/sim/act_on_args_container.py\n+++ b/cirq-core/cirq/sim/act_on_args_container.py\n@@ -118,8 +118,9 @@\n protocols.act_on(action, op_args, act_on_qubits, allow_decompose=allow_decompose)\n \n # Decouple any measurements or resets\n- if self.split_untangled_states and isinstance(\n- gate, (ops.MeasurementGate, ops.ResetChannel)\n+ if self.split_untangled_states and (\n+ isinstance(gate, ops.ResetChannel)\n+ or (isinstance(gate, ops.MeasurementGate) and not op_args.ignore_measurement_results)\n ):\n for q in qubits:\n q_args, op_args = op_args.factor((q,), validate=False)\n", "issue": "Substates are separated after measurements, when measurements are ignored\n**Description of the issue**\r\n\r\nDensity matrix separates qubit states after measurement when split_untanged_states=True. However when ignore_measurement_results=True, this should not happen, as the this changes the measurement into a dephase and does not make the state separable.\r\n\r\nFor instance, when measuring a Bell state, the resulting DM should be 0.5 |00> + 0.5 |11>. However, separating those states (partial tracing each qubit) and re-kronning them gives 0.25 of each; i.e. it causes each qubit to be 0.5 |0> and 0.5 |1> independently. Therefore we need to avoid separating states after measurements if ignore_measurement_results=True.\r\n\r\n\r\n**How to reproduce the issue**\r\n\r\n```python\r\ndef test_ignore_measurements_remains_entangled():\r\n q0, q1 = cirq.LineQubit.range(2)\r\n simulator1 = cirq.DensityMatrixSimulator(\r\n ignore_measurement_results=True, split_untangled_states=False\r\n )\r\n simulator2 = cirq.DensityMatrixSimulator(\r\n ignore_measurement_results=True, split_untangled_states=True\r\n )\r\n circuit = cirq.Circuit(\r\n cirq.H(q0),\r\n cirq.CX(q0, q1),\r\n cirq.measure(q0),\r\n )\r\n result1 = simulator1.simulate(circuit)\r\n result2 = simulator2.simulate(circuit)\r\n np.testing.assert_almost_equal(result2.final_density_matrix, result1.final_density_matrix)\r\n```\r\n\r\n\n", "before_files": [{"content": "# Copyright 2021 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom collections import abc\nfrom typing import (\n Dict,\n TYPE_CHECKING,\n Generic,\n Sequence,\n Optional,\n Iterator,\n Any,\n Tuple,\n List,\n Union,\n)\n\nimport numpy as np\n\nfrom cirq import ops, protocols\nfrom cirq.sim.operation_target import OperationTarget\nfrom cirq.sim.simulator import (\n TActOnArgs,\n)\n\nif TYPE_CHECKING:\n import cirq\n\n\nclass ActOnArgsContainer(\n Generic[TActOnArgs],\n OperationTarget[TActOnArgs],\n abc.Mapping,\n):\n \"\"\"A container for a `Qid`-to-`ActOnArgs` dictionary.\"\"\"\n\n def __init__(\n self,\n args: Dict[Optional['cirq.Qid'], TActOnArgs],\n qubits: Sequence['cirq.Qid'],\n split_untangled_states: bool,\n log_of_measurement_results: Dict[str, Any],\n ):\n \"\"\"Initializes the class.\n\n Args:\n args: The `ActOnArgs` 
dictionary. This will not be copied; the\n original reference will be kept here.\n qubits: The canonical ordering of qubits.\n split_untangled_states: If True, optimizes operations by running\n unentangled qubit sets independently and merging those states\n at the end.\n log_of_measurement_results: A mutable object that measurements are\n being recorded into.\n \"\"\"\n self.args = args\n self._qubits = tuple(qubits)\n self.split_untangled_states = split_untangled_states\n self._log_of_measurement_results = log_of_measurement_results\n\n def create_merged_state(self) -> TActOnArgs:\n if not self.split_untangled_states:\n return self.args[None]\n final_args = self.args[None]\n for args in set([self.args[k] for k in self.args.keys() if k is not None]):\n final_args = final_args.kronecker_product(args)\n return final_args.transpose_to_qubit_order(self.qubits)\n\n def _act_on_fallback_(\n self,\n action: Union['cirq.Operation', 'cirq.Gate'],\n qubits: Sequence['cirq.Qid'],\n allow_decompose: bool = True,\n ) -> bool:\n gate = action.gate if isinstance(action, ops.Operation) else action\n\n if isinstance(gate, ops.IdentityGate):\n return True\n\n if isinstance(gate, ops.SwapPowGate) and gate.exponent % 2 == 1 and gate.global_shift == 0:\n q0, q1 = qubits\n args0 = self.args[q0]\n args1 = self.args[q1]\n if args0 is args1:\n args0.swap(q0, q1, inplace=True)\n else:\n self.args[q0] = args1.rename(q1, q0, inplace=True)\n self.args[q1] = args0.rename(q0, q1, inplace=True)\n return True\n\n # Go through the op's qubits and join any disparate ActOnArgs states\n # into a new combined state.\n op_args_opt: Optional[TActOnArgs] = None\n for q in qubits:\n if op_args_opt is None:\n op_args_opt = self.args[q]\n elif q not in op_args_opt.qubits:\n op_args_opt = op_args_opt.kronecker_product(self.args[q])\n op_args = op_args_opt or self.args[None]\n\n # (Backfill the args map with the new value)\n for q in op_args.qubits:\n self.args[q] = op_args\n\n # Act on the args with the operation\n act_on_qubits = qubits if isinstance(action, ops.Gate) else None\n protocols.act_on(action, op_args, act_on_qubits, allow_decompose=allow_decompose)\n\n # Decouple any measurements or resets\n if self.split_untangled_states and isinstance(\n gate, (ops.MeasurementGate, ops.ResetChannel)\n ):\n for q in qubits:\n q_args, op_args = op_args.factor((q,), validate=False)\n self.args[q] = q_args\n\n # (Backfill the args map with the new value)\n for q in op_args.qubits:\n self.args[q] = op_args\n return True\n\n def copy(self) -> 'cirq.ActOnArgsContainer[TActOnArgs]':\n logs = self.log_of_measurement_results.copy()\n copies = {a: a.copy() for a in set(self.args.values())}\n for copy in copies.values():\n copy._log_of_measurement_results = logs\n args = {q: copies[a] for q, a in self.args.items()}\n return ActOnArgsContainer(args, self.qubits, self.split_untangled_states, logs)\n\n @property\n def qubits(self) -> Tuple['cirq.Qid', ...]:\n return self._qubits\n\n @property\n def log_of_measurement_results(self) -> Dict[str, Any]:\n return self._log_of_measurement_results\n\n def sample(\n self,\n qubits: List['cirq.Qid'],\n repetitions: int = 1,\n seed: 'cirq.RANDOM_STATE_OR_SEED_LIKE' = None,\n ) -> np.ndarray:\n columns = []\n selected_order: List[ops.Qid] = []\n q_set = set(qubits)\n for v in dict.fromkeys(self.args.values()):\n qs = [q for q in v.qubits if q in q_set]\n if any(qs):\n column = v.sample(qs, repetitions, seed)\n columns.append(column)\n selected_order += qs\n stacked = np.column_stack(columns)\n qubit_map = {q: i 
for i, q in enumerate(selected_order)}\n index_order = [qubit_map[q] for q in qubits]\n return stacked[:, index_order]\n\n def __getitem__(self, item: Optional['cirq.Qid']) -> TActOnArgs:\n return self.args[item]\n\n def __len__(self) -> int:\n return len(self.args)\n\n def __iter__(self) -> Iterator[Optional['cirq.Qid']]:\n return iter(self.args)\n", "path": "cirq-core/cirq/sim/act_on_args_container.py"}], "after_files": [{"content": "# Copyright 2021 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom collections import abc\nfrom typing import (\n Dict,\n TYPE_CHECKING,\n Generic,\n Sequence,\n Optional,\n Iterator,\n Any,\n Tuple,\n List,\n Union,\n)\n\nimport numpy as np\n\nfrom cirq import ops, protocols\nfrom cirq.sim.operation_target import OperationTarget\nfrom cirq.sim.simulator import (\n TActOnArgs,\n)\n\nif TYPE_CHECKING:\n import cirq\n\n\nclass ActOnArgsContainer(\n Generic[TActOnArgs],\n OperationTarget[TActOnArgs],\n abc.Mapping,\n):\n \"\"\"A container for a `Qid`-to-`ActOnArgs` dictionary.\"\"\"\n\n def __init__(\n self,\n args: Dict[Optional['cirq.Qid'], TActOnArgs],\n qubits: Sequence['cirq.Qid'],\n split_untangled_states: bool,\n log_of_measurement_results: Dict[str, Any],\n ):\n \"\"\"Initializes the class.\n\n Args:\n args: The `ActOnArgs` dictionary. 
This will not be copied; the\n original reference will be kept here.\n qubits: The canonical ordering of qubits.\n split_untangled_states: If True, optimizes operations by running\n unentangled qubit sets independently and merging those states\n at the end.\n log_of_measurement_results: A mutable object that measurements are\n being recorded into.\n \"\"\"\n self.args = args\n self._qubits = tuple(qubits)\n self.split_untangled_states = split_untangled_states\n self._log_of_measurement_results = log_of_measurement_results\n\n def create_merged_state(self) -> TActOnArgs:\n if not self.split_untangled_states:\n return self.args[None]\n final_args = self.args[None]\n for args in set([self.args[k] for k in self.args.keys() if k is not None]):\n final_args = final_args.kronecker_product(args)\n return final_args.transpose_to_qubit_order(self.qubits)\n\n def _act_on_fallback_(\n self,\n action: Union['cirq.Operation', 'cirq.Gate'],\n qubits: Sequence['cirq.Qid'],\n allow_decompose: bool = True,\n ) -> bool:\n gate = action.gate if isinstance(action, ops.Operation) else action\n\n if isinstance(gate, ops.IdentityGate):\n return True\n\n if isinstance(gate, ops.SwapPowGate) and gate.exponent % 2 == 1 and gate.global_shift == 0:\n q0, q1 = qubits\n args0 = self.args[q0]\n args1 = self.args[q1]\n if args0 is args1:\n args0.swap(q0, q1, inplace=True)\n else:\n self.args[q0] = args1.rename(q1, q0, inplace=True)\n self.args[q1] = args0.rename(q0, q1, inplace=True)\n return True\n\n # Go through the op's qubits and join any disparate ActOnArgs states\n # into a new combined state.\n op_args_opt: Optional[TActOnArgs] = None\n for q in qubits:\n if op_args_opt is None:\n op_args_opt = self.args[q]\n elif q not in op_args_opt.qubits:\n op_args_opt = op_args_opt.kronecker_product(self.args[q])\n op_args = op_args_opt or self.args[None]\n\n # (Backfill the args map with the new value)\n for q in op_args.qubits:\n self.args[q] = op_args\n\n # Act on the args with the operation\n act_on_qubits = qubits if isinstance(action, ops.Gate) else None\n protocols.act_on(action, op_args, act_on_qubits, allow_decompose=allow_decompose)\n\n # Decouple any measurements or resets\n if self.split_untangled_states and (\n isinstance(gate, ops.ResetChannel)\n or (isinstance(gate, ops.MeasurementGate) and not op_args.ignore_measurement_results)\n ):\n for q in qubits:\n q_args, op_args = op_args.factor((q,), validate=False)\n self.args[q] = q_args\n\n # (Backfill the args map with the new value)\n for q in op_args.qubits:\n self.args[q] = op_args\n return True\n\n def copy(self) -> 'cirq.ActOnArgsContainer[TActOnArgs]':\n logs = self.log_of_measurement_results.copy()\n copies = {a: a.copy() for a in set(self.args.values())}\n for copy in copies.values():\n copy._log_of_measurement_results = logs\n args = {q: copies[a] for q, a in self.args.items()}\n return ActOnArgsContainer(args, self.qubits, self.split_untangled_states, logs)\n\n @property\n def qubits(self) -> Tuple['cirq.Qid', ...]:\n return self._qubits\n\n @property\n def log_of_measurement_results(self) -> Dict[str, Any]:\n return self._log_of_measurement_results\n\n def sample(\n self,\n qubits: List['cirq.Qid'],\n repetitions: int = 1,\n seed: 'cirq.RANDOM_STATE_OR_SEED_LIKE' = None,\n ) -> np.ndarray:\n columns = []\n selected_order: List[ops.Qid] = []\n q_set = set(qubits)\n for v in dict.fromkeys(self.args.values()):\n qs = [q for q in v.qubits if q in q_set]\n if any(qs):\n column = v.sample(qs, repetitions, seed)\n columns.append(column)\n selected_order += qs\n 
stacked = np.column_stack(columns)\n qubit_map = {q: i for i, q in enumerate(selected_order)}\n index_order = [qubit_map[q] for q in qubits]\n return stacked[:, index_order]\n\n def __getitem__(self, item: Optional['cirq.Qid']) -> TActOnArgs:\n return self.args[item]\n\n def __len__(self) -> int:\n return len(self.args)\n\n def __iter__(self) -> Iterator[Optional['cirq.Qid']]:\n return iter(self.args)\n", "path": "cirq-core/cirq/sim/act_on_args_container.py"}]} | 2,508 | 204 |
gh_patches_debug_1156 | rasdani/github-patches | git_diff | facebookresearch__hydra-1531 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add `env` to Hydra's config group
This is a follow-up to #1441.
The `env` config group will allow users to manually change the env defaults value (such as providing default callbacks or updating run.dir).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hydra/conf/__init__.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 from dataclasses import dataclass, field
3 from typing import Any, Dict, List, Optional
4
5 from omegaconf import MISSING
6
7 from hydra.core.config_store import ConfigStore
8
9
10 @dataclass
11 class HelpConf:
12 app_name: str = MISSING
13 header: str = MISSING
14 footer: str = MISSING
15 template: str = MISSING
16
17
18 @dataclass
19 class HydraHelpConf:
20 hydra_help: str = MISSING
21 template: str = MISSING
22
23
24 @dataclass
25 class RunDir:
26 dir: str = MISSING
27
28
29 @dataclass
30 class SweepDir:
31 dir: str = MISSING
32 subdir: str = MISSING
33
34
35 @dataclass
36 class OverridesConf:
37 # Overrides for the hydra configuration
38 hydra: List[str] = field(default_factory=lambda: [])
39 # Overrides for the task configuration
40 task: List[str] = field(default_factory=lambda: [])
41
42
43 # job runtime information will be populated here
44 @dataclass
45 class JobConf:
46 # Job name, populated automatically unless specified by the user (in config or cli)
47 name: str = MISSING
48
49 # Populated automatically by Hydra.
50 # Concatenation of job overrides that can be used as a part
51 # of the directory name.
52 # This can be configured via hydra.job.config.override_dirname
53 override_dirname: str = MISSING
54
55 # Job ID in underlying scheduling system
56 id: str = MISSING
57
58 # Job number if job is a part of a sweep
59 num: int = MISSING
60
61 # The config name used by the job
62 config_name: Optional[str] = MISSING
63
64 # Environment variables to set remotely
65 env_set: Dict[str, str] = field(default_factory=dict)
66 # Environment variables to copy from the launching machine
67 env_copy: List[str] = field(default_factory=list)
68
69 # Job config
70 @dataclass
71 class JobConfig:
72 @dataclass
73 # configuration for the ${hydra.job.override_dirname} runtime variable
74 class OverrideDirname:
75 kv_sep: str = "="
76 item_sep: str = ","
77 exclude_keys: List[str] = field(default_factory=list)
78
79 override_dirname: OverrideDirname = OverrideDirname()
80
81 config: JobConfig = JobConfig()
82
83
84 @dataclass
85 class RuntimeConf:
86 version: str = MISSING
87 cwd: str = MISSING
88
89
90 @dataclass
91 class HydraConf:
92 defaults: List[Any] = field(
93 default_factory=lambda: [
94 {"output": "default"},
95 {"launcher": "basic"},
96 {"sweeper": "basic"},
97 {"help": "default"},
98 {"hydra_help": "default"},
99 {"hydra_logging": "default"},
100 {"job_logging": "default"},
101 {"callbacks": None},
102 ]
103 )
104
105 # Elements to append to the config search path.
106 # Note: This can only be configured in the primary config.
107 searchpath: List[str] = field(default_factory=list)
108
109 # Normal run output configuration
110 run: RunDir = RunDir()
111 # Multi-run output configuration
112 sweep: SweepDir = SweepDir()
113 # Logging configuration for Hydra
114 hydra_logging: Any = MISSING
115 # Logging configuration for the job
116 job_logging: Any = MISSING
117
118 # Sweeper configuration
119 sweeper: Any = MISSING
120 # Launcher configuration
121 launcher: Any = MISSING
122 # Callbacks configuration
123 callbacks: Dict[str, Any] = field(default_factory=dict)
124
125 # Program Help template
126 help: HelpConf = HelpConf()
127 # Hydra's Help template
128 hydra_help: HydraHelpConf = HydraHelpConf()
129
130 # Output directory for produced configuration files and overrides.
131 # E.g., hydra.yaml, overrides.yaml will go here. Useful for debugging
132 # and extra context when looking at past runs.
133 # Setting to None will prevent the creation of the output subdir.
134 output_subdir: Optional[str] = ".hydra"
135
136 # Those lists will contain runtime overrides
137 overrides: OverridesConf = OverridesConf()
138
139 job: JobConf = JobConf()
140
141 # populated at runtime
142 runtime: RuntimeConf = RuntimeConf()
143
144 # Can be a boolean, string or a list of strings
145 # If a boolean, setting to true will set the log level for the root logger to debug
146 # If a string, it's interpreted as a the list [string]
147 # If a list, each element is interpreted as a logger to have logging level set to debug.
148 # Typical command lines to manipulate hydra.verbose:
149 # hydra.verbose=true
150 # hydra.verbose=[hydra,__main__]
151 # TODO: good use case for Union support in OmegaConf
152 verbose: Any = False
153
154 # Composition choices dictionary
155 choices: Dict[str, str] = field(default_factory=lambda: {})
156
157
158 cs = ConfigStore.instance()
159
160 cs.store(
161 group="hydra",
162 name="config",
163 node=HydraConf(),
164 provider="hydra",
165 )
166
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hydra/conf/__init__.py b/hydra/conf/__init__.py
--- a/hydra/conf/__init__.py
+++ b/hydra/conf/__init__.py
@@ -99,6 +99,8 @@
{"hydra_logging": "default"},
{"job_logging": "default"},
{"callbacks": None},
+ # env specific overrides
+ {"env": "default"},
]
)
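
With this entry in place, `env` becomes a regular item in Hydra's primary defaults list, so a config supplied under the `hydra/env` group can override environment-level defaults (e.g. default callbacks or `run.dir`, as the issue suggests). Below is a small, runnable sketch of the patched field for reference; the class name is a stand-in that reproduces only the `defaults` field of `HydraConf`, everything else in the dataclass is unchanged.

```python
from dataclasses import dataclass, field
from typing import Any, List


@dataclass
class HydraConfDefaults:
    # Mirrors the defaults list of HydraConf after the patch.
    defaults: List[Any] = field(
        default_factory=lambda: [
            {"output": "default"},
            {"launcher": "basic"},
            {"sweeper": "basic"},
            {"help": "default"},
            {"hydra_help": "default"},
            {"hydra_logging": "default"},
            {"job_logging": "default"},
            {"callbacks": None},
            # env specific overrides
            {"env": "default"},
        ]
    )


print(HydraConfDefaults().defaults[-1])  # {'env': 'default'}
```

How a user-side `hydra/env` config is packaged and selected is not covered by the patch itself; the issue only states the intent of making those defaults overridable.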
| {"golden_diff": "diff --git a/hydra/conf/__init__.py b/hydra/conf/__init__.py\n--- a/hydra/conf/__init__.py\n+++ b/hydra/conf/__init__.py\n@@ -99,6 +99,8 @@\n {\"hydra_logging\": \"default\"},\n {\"job_logging\": \"default\"},\n {\"callbacks\": None},\n+ # env specific overrides\n+ {\"env\": \"default\"},\n ]\n )\n", "issue": "Add `env` to Hydra's config group\nThis is a follow up to #1441\r\n\r\nthe `env` config group will allows users to manually change the env defaults value. (such as provides default callbacks or update run.dir )\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, List, Optional\n\nfrom omegaconf import MISSING\n\nfrom hydra.core.config_store import ConfigStore\n\n\n@dataclass\nclass HelpConf:\n app_name: str = MISSING\n header: str = MISSING\n footer: str = MISSING\n template: str = MISSING\n\n\n@dataclass\nclass HydraHelpConf:\n hydra_help: str = MISSING\n template: str = MISSING\n\n\n@dataclass\nclass RunDir:\n dir: str = MISSING\n\n\n@dataclass\nclass SweepDir:\n dir: str = MISSING\n subdir: str = MISSING\n\n\n@dataclass\nclass OverridesConf:\n # Overrides for the hydra configuration\n hydra: List[str] = field(default_factory=lambda: [])\n # Overrides for the task configuration\n task: List[str] = field(default_factory=lambda: [])\n\n\n# job runtime information will be populated here\n@dataclass\nclass JobConf:\n # Job name, populated automatically unless specified by the user (in config or cli)\n name: str = MISSING\n\n # Populated automatically by Hydra.\n # Concatenation of job overrides that can be used as a part\n # of the directory name.\n # This can be configured via hydra.job.config.override_dirname\n override_dirname: str = MISSING\n\n # Job ID in underlying scheduling system\n id: str = MISSING\n\n # Job number if job is a part of a sweep\n num: int = MISSING\n\n # The config name used by the job\n config_name: Optional[str] = MISSING\n\n # Environment variables to set remotely\n env_set: Dict[str, str] = field(default_factory=dict)\n # Environment variables to copy from the launching machine\n env_copy: List[str] = field(default_factory=list)\n\n # Job config\n @dataclass\n class JobConfig:\n @dataclass\n # configuration for the ${hydra.job.override_dirname} runtime variable\n class OverrideDirname:\n kv_sep: str = \"=\"\n item_sep: str = \",\"\n exclude_keys: List[str] = field(default_factory=list)\n\n override_dirname: OverrideDirname = OverrideDirname()\n\n config: JobConfig = JobConfig()\n\n\n@dataclass\nclass RuntimeConf:\n version: str = MISSING\n cwd: str = MISSING\n\n\n@dataclass\nclass HydraConf:\n defaults: List[Any] = field(\n default_factory=lambda: [\n {\"output\": \"default\"},\n {\"launcher\": \"basic\"},\n {\"sweeper\": \"basic\"},\n {\"help\": \"default\"},\n {\"hydra_help\": \"default\"},\n {\"hydra_logging\": \"default\"},\n {\"job_logging\": \"default\"},\n {\"callbacks\": None},\n ]\n )\n\n # Elements to append to the config search path.\n # Note: This can only be configured in the primary config.\n searchpath: List[str] = field(default_factory=list)\n\n # Normal run output configuration\n run: RunDir = RunDir()\n # Multi-run output configuration\n sweep: SweepDir = SweepDir()\n # Logging configuration for Hydra\n hydra_logging: Any = MISSING\n # Logging configuration for the job\n job_logging: Any = MISSING\n\n # Sweeper configuration\n sweeper: Any = MISSING\n # Launcher configuration\n launcher: Any = 
MISSING\n # Callbacks configuration\n callbacks: Dict[str, Any] = field(default_factory=dict)\n\n # Program Help template\n help: HelpConf = HelpConf()\n # Hydra's Help template\n hydra_help: HydraHelpConf = HydraHelpConf()\n\n # Output directory for produced configuration files and overrides.\n # E.g., hydra.yaml, overrides.yaml will go here. Useful for debugging\n # and extra context when looking at past runs.\n # Setting to None will prevent the creation of the output subdir.\n output_subdir: Optional[str] = \".hydra\"\n\n # Those lists will contain runtime overrides\n overrides: OverridesConf = OverridesConf()\n\n job: JobConf = JobConf()\n\n # populated at runtime\n runtime: RuntimeConf = RuntimeConf()\n\n # Can be a boolean, string or a list of strings\n # If a boolean, setting to true will set the log level for the root logger to debug\n # If a string, it's interpreted as a the list [string]\n # If a list, each element is interpreted as a logger to have logging level set to debug.\n # Typical command lines to manipulate hydra.verbose:\n # hydra.verbose=true\n # hydra.verbose=[hydra,__main__]\n # TODO: good use case for Union support in OmegaConf\n verbose: Any = False\n\n # Composition choices dictionary\n choices: Dict[str, str] = field(default_factory=lambda: {})\n\n\ncs = ConfigStore.instance()\n\ncs.store(\n group=\"hydra\",\n name=\"config\",\n node=HydraConf(),\n provider=\"hydra\",\n)\n", "path": "hydra/conf/__init__.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, List, Optional\n\nfrom omegaconf import MISSING\n\nfrom hydra.core.config_store import ConfigStore\n\n\n@dataclass\nclass HelpConf:\n app_name: str = MISSING\n header: str = MISSING\n footer: str = MISSING\n template: str = MISSING\n\n\n@dataclass\nclass HydraHelpConf:\n hydra_help: str = MISSING\n template: str = MISSING\n\n\n@dataclass\nclass RunDir:\n dir: str = MISSING\n\n\n@dataclass\nclass SweepDir:\n dir: str = MISSING\n subdir: str = MISSING\n\n\n@dataclass\nclass OverridesConf:\n # Overrides for the hydra configuration\n hydra: List[str] = field(default_factory=lambda: [])\n # Overrides for the task configuration\n task: List[str] = field(default_factory=lambda: [])\n\n\n# job runtime information will be populated here\n@dataclass\nclass JobConf:\n # Job name, populated automatically unless specified by the user (in config or cli)\n name: str = MISSING\n\n # Populated automatically by Hydra.\n # Concatenation of job overrides that can be used as a part\n # of the directory name.\n # This can be configured via hydra.job.config.override_dirname\n override_dirname: str = MISSING\n\n # Job ID in underlying scheduling system\n id: str = MISSING\n\n # Job number if job is a part of a sweep\n num: int = MISSING\n\n # The config name used by the job\n config_name: Optional[str] = MISSING\n\n # Environment variables to set remotely\n env_set: Dict[str, str] = field(default_factory=dict)\n # Environment variables to copy from the launching machine\n env_copy: List[str] = field(default_factory=list)\n\n # Job config\n @dataclass\n class JobConfig:\n @dataclass\n # configuration for the ${hydra.job.override_dirname} runtime variable\n class OverrideDirname:\n kv_sep: str = \"=\"\n item_sep: str = \",\"\n exclude_keys: List[str] = field(default_factory=list)\n\n override_dirname: OverrideDirname = OverrideDirname()\n\n config: JobConfig = JobConfig()\n\n\n@dataclass\nclass RuntimeConf:\n 
version: str = MISSING\n cwd: str = MISSING\n\n\n@dataclass\nclass HydraConf:\n defaults: List[Any] = field(\n default_factory=lambda: [\n {\"output\": \"default\"},\n {\"launcher\": \"basic\"},\n {\"sweeper\": \"basic\"},\n {\"help\": \"default\"},\n {\"hydra_help\": \"default\"},\n {\"hydra_logging\": \"default\"},\n {\"job_logging\": \"default\"},\n {\"callbacks\": None},\n # env specific overrides\n {\"env\": \"default\"},\n ]\n )\n\n # Elements to append to the config search path.\n # Note: This can only be configured in the primary config.\n searchpath: List[str] = field(default_factory=list)\n\n # Normal run output configuration\n run: RunDir = RunDir()\n # Multi-run output configuration\n sweep: SweepDir = SweepDir()\n # Logging configuration for Hydra\n hydra_logging: Any = MISSING\n # Logging configuration for the job\n job_logging: Any = MISSING\n\n # Sweeper configuration\n sweeper: Any = MISSING\n # Launcher configuration\n launcher: Any = MISSING\n # Callbacks configuration\n callbacks: Dict[str, Any] = field(default_factory=dict)\n\n # Program Help template\n help: HelpConf = HelpConf()\n # Hydra's Help template\n hydra_help: HydraHelpConf = HydraHelpConf()\n\n # Output directory for produced configuration files and overrides.\n # E.g., hydra.yaml, overrides.yaml will go here. Useful for debugging\n # and extra context when looking at past runs.\n # Setting to None will prevent the creation of the output subdir.\n output_subdir: Optional[str] = \".hydra\"\n\n # Those lists will contain runtime overrides\n overrides: OverridesConf = OverridesConf()\n\n job: JobConf = JobConf()\n\n # populated at runtime\n runtime: RuntimeConf = RuntimeConf()\n\n # Can be a boolean, string or a list of strings\n # If a boolean, setting to true will set the log level for the root logger to debug\n # If a string, it's interpreted as a the list [string]\n # If a list, each element is interpreted as a logger to have logging level set to debug.\n # Typical command lines to manipulate hydra.verbose:\n # hydra.verbose=true\n # hydra.verbose=[hydra,__main__]\n # TODO: good use case for Union support in OmegaConf\n verbose: Any = False\n\n # Composition choices dictionary\n choices: Dict[str, str] = field(default_factory=lambda: {})\n\n\ncs = ConfigStore.instance()\n\ncs.store(\n group=\"hydra\",\n name=\"config\",\n node=HydraConf(),\n provider=\"hydra\",\n)\n", "path": "hydra/conf/__init__.py"}]} | 1,820 | 98 |
gh_patches_debug_3803 | rasdani/github-patches | git_diff | mkdocs__mkdocs-1656 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
url filter gives the wrong path to local anchor links
When passed "#internal-anchor" on a page other than the top-level index, the `url` filter returns a relative path to the top-level index.
For example, if I have a page `about.md` whose page.url is `about/` and I pass "#internal-anchor" to the `url` filter, I get `../#internal-anchor`.
The `url` filter should not modify those URLs that are internal to the current page.
(I suffer from this problem when using the material theme; in 3.0.4, in the "Skip to content" link, it passes a toc item's url to the `url` filter, which breaks it on every page except the top-level index. HTMLProofer complains about the broken link.)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mkdocs/utils/__init__.py`
Content:
```
1 # coding: utf-8
2
3 """
4 Standalone file utils.
5
6 Nothing in this module should have an knowledge of config or the layout
7 and structure of the site and pages in the site.
8 """
9
10 from __future__ import unicode_literals
11
12 import logging
13 import os
14 import pkg_resources
15 import shutil
16 import re
17 import sys
18 import yaml
19 import fnmatch
20 import posixpath
21
22 from mkdocs import exceptions
23
24 try: # pragma: no cover
25 from urllib.parse import urlparse, urlunparse, urljoin # noqa
26 from collections import UserDict # noqa
27 except ImportError: # pragma: no cover
28 from urlparse import urlparse, urlunparse, urljoin # noqa
29 from UserDict import UserDict # noqa
30
31
32 PY3 = sys.version_info[0] == 3
33
34 if PY3: # pragma: no cover
35 string_types = str, # noqa
36 text_type = str # noqa
37 else: # pragma: no cover
38 string_types = basestring, # noqa
39 text_type = unicode # noqa
40
41 log = logging.getLogger(__name__)
42
43 markdown_extensions = [
44 '.markdown',
45 '.mdown',
46 '.mkdn',
47 '.mkd',
48 '.md'
49 ]
50
51
52 def yaml_load(source, loader=yaml.Loader):
53 """
54 Wrap PyYaml's loader so we can extend it to suit our needs.
55
56 Load all strings as unicode.
57 https://stackoverflow.com/a/2967461/3609487
58 """
59
60 def construct_yaml_str(self, node):
61 """
62 Override the default string handling function to always return
63 unicode objects.
64 """
65 return self.construct_scalar(node)
66
67 class Loader(loader):
68 """
69 Define a custom loader derived from the global loader to leave the
70 global loader unaltered.
71 """
72
73 # Attach our unicode constructor to our custom loader ensuring all strings
74 # will be unicode on translation.
75 Loader.add_constructor('tag:yaml.org,2002:str', construct_yaml_str)
76
77 try:
78 return yaml.load(source, Loader)
79 finally:
80 # TODO: Remove this when external calls are properly cleaning up file
81 # objects. Some mkdocs internal calls, sometimes in test lib, will
82 # load configs with a file object but never close it. On some
83 # systems, if a delete action is performed on that file without Python
84 # closing that object, there will be an access error. This will
85 # process the file and close it as there should be no more use for the
86 # file once we process the yaml content.
87 if hasattr(source, 'close'):
88 source.close()
89
90
91 def modified_time(file_path):
92 """
93 Return the modified time of the supplied file. If the file does not exists zero is returned.
94 see build_pages for use.
95 """
96 if os.path.exists(file_path):
97 return os.path.getmtime(file_path)
98 else:
99 return 0.0
100
101
102 def reduce_list(data_set):
103 """ Reduce duplicate items in a list and preserve order """
104 seen = set()
105 return [item for item in data_set if
106 item not in seen and not seen.add(item)]
107
108
109 def copy_file(source_path, output_path):
110 """
111 Copy source_path to output_path, making sure any parent directories exist.
112
113 The output_path may be a directory.
114 """
115 output_dir = os.path.dirname(output_path)
116 if not os.path.exists(output_dir):
117 os.makedirs(output_dir)
118 if os.path.isdir(output_path):
119 output_path = os.path.join(output_path, os.path.basename(source_path))
120 shutil.copyfile(source_path, output_path)
121
122
123 def write_file(content, output_path):
124 """
125 Write content to output_path, making sure any parent directories exist.
126 """
127 output_dir = os.path.dirname(output_path)
128 if not os.path.exists(output_dir):
129 os.makedirs(output_dir)
130 with open(output_path, 'wb') as f:
131 f.write(content)
132
133
134 def clean_directory(directory):
135 """
136 Remove the content of a directory recursively but not the directory itself.
137 """
138 if not os.path.exists(directory):
139 return
140
141 for entry in os.listdir(directory):
142
143 # Don't remove hidden files from the directory. We never copy files
144 # that are hidden, so we shouldn't delete them either.
145 if entry.startswith('.'):
146 continue
147
148 path = os.path.join(directory, entry)
149 if os.path.isdir(path):
150 shutil.rmtree(path, True)
151 else:
152 os.unlink(path)
153
154
155 def get_html_path(path):
156 """
157 Map a source file path to an output html path.
158
159 Paths like 'index.md' will be converted to 'index.html'
160 Paths like 'about.md' will be converted to 'about/index.html'
161 Paths like 'api-guide/core.md' will be converted to 'api-guide/core/index.html'
162 """
163 path = os.path.splitext(path)[0]
164 if os.path.basename(path) == 'index':
165 return path + '.html'
166 return "/".join((path, 'index.html'))
167
168
169 def get_url_path(path, use_directory_urls=True):
170 """
171 Map a source file path to an output html path.
172
173 Paths like 'index.md' will be converted to '/'
174 Paths like 'about.md' will be converted to '/about/'
175 Paths like 'api-guide/core.md' will be converted to '/api-guide/core/'
176
177 If `use_directory_urls` is `False`, returned URLs will include the a trailing
178 `index.html` rather than just returning the directory path.
179 """
180 path = get_html_path(path)
181 url = '/' + path.replace(os.path.sep, '/')
182 if use_directory_urls:
183 return url[:-len('index.html')]
184 return url
185
186
187 def is_markdown_file(path):
188 """
189 Return True if the given file path is a Markdown file.
190
191 https://superuser.com/questions/249436/file-extension-for-markdown-files
192 """
193 return any(fnmatch.fnmatch(path.lower(), '*{0}'.format(x)) for x in markdown_extensions)
194
195
196 def is_html_file(path):
197 """
198 Return True if the given file path is an HTML file.
199 """
200 ext = os.path.splitext(path)[1].lower()
201 return ext in [
202 '.html',
203 '.htm',
204 ]
205
206
207 def is_template_file(path):
208 """
209 Return True if the given file path is an HTML file.
210 """
211 ext = os.path.splitext(path)[1].lower()
212 return ext in [
213 '.html',
214 '.htm',
215 '.xml',
216 ]
217
218
219 _ERROR_TEMPLATE_RE = re.compile(r'^\d{3}\.html?$')
220
221
222 def is_error_template(path):
223 """
224 Return True if the given file path is an HTTP error template.
225 """
226 return bool(_ERROR_TEMPLATE_RE.match(path))
227
228
229 def get_relative_url(url, other):
230 """
231 Return given url relative to other.
232 """
233 if other != '.':
234 # Remove filename from other url if it has one.
235 parts = posixpath.split(other)
236 other = parts[0] if '.' in parts[1] else other
237 relurl = posixpath.relpath(url, other)
238 return relurl + '/' if url.endswith('/') else relurl
239
240
241 def normalize_url(path, page=None, base=''):
242 """ Return a URL relative to the given page or using the base. """
243 path = path_to_url(path or '.')
244 # Allow links to be fully qualified URL's
245 parsed = urlparse(path)
246 if parsed.scheme or parsed.netloc or path.startswith('/'):
247 return path
248
249 # We must be looking at a local path.
250 if page is not None:
251 return get_relative_url(path, page.url)
252 else:
253 return posixpath.join(base, path)
254
255
256 def create_media_urls(path_list, page=None, base=''):
257 """
258 Return a list of URLs relative to the given page or using the base.
259 """
260 urls = []
261
262 for path in path_list:
263 urls.append(normalize_url(path, page, base))
264
265 return urls
266
267
268 def path_to_url(path):
269 """Convert a system path to a URL."""
270
271 return '/'.join(path.split('\\'))
272
273
274 def get_theme_dir(name):
275 """ Return the directory of an installed theme by name. """
276
277 theme = get_themes()[name]
278 return os.path.dirname(os.path.abspath(theme.load().__file__))
279
280
281 def get_themes():
282 """ Return a dict of all installed themes as (name, entry point) pairs. """
283
284 themes = {}
285 builtins = pkg_resources.get_entry_map(dist='mkdocs', group='mkdocs.themes')
286
287 for theme in pkg_resources.iter_entry_points(group='mkdocs.themes'):
288
289 if theme.name in builtins and theme.dist.key != 'mkdocs':
290 raise exceptions.ConfigurationError(
291 "The theme {0} is a builtin theme but {1} provides a theme "
292 "with the same name".format(theme.name, theme.dist.key))
293
294 elif theme.name in themes:
295 multiple_packages = [themes[theme.name].dist.key, theme.dist.key]
296 log.warning("The theme %s is provided by the Python packages "
297 "'%s'. The one in %s will be used.",
298 theme.name, ','.join(multiple_packages), theme.dist.key)
299
300 themes[theme.name] = theme
301
302 return themes
303
304
305 def get_theme_names():
306 """Return a list of all installed themes by name."""
307
308 return get_themes().keys()
309
310
311 def dirname_to_title(dirname):
312
313 title = dirname
314 title = title.replace('-', ' ').replace('_', ' ')
315 # Capitalize if the dirname was all lowercase, otherwise leave it as-is.
316 if title.lower() == title:
317 title = title.capitalize()
318
319 return title
320
321
322 def get_markdown_title(markdown_src):
323 """
324 Get the title of a Markdown document. The title in this case is considered
325 to be a H1 that occurs before any other content in the document.
326 The procedure is then to iterate through the lines, stopping at the first
327 non-whitespace content. If it is a title, return that, otherwise return
328 None.
329 """
330
331 lines = markdown_src.replace('\r\n', '\n').replace('\r', '\n').split('\n')
332 while lines:
333 line = lines.pop(0).strip()
334 if not line.strip():
335 continue
336 if not line.startswith('# '):
337 return
338 return line.lstrip('# ')
339
340
341 def find_or_create_node(branch, key):
342 """
343 Given a list, look for dictionary with a key matching key and return it's
344 value. If it doesn't exist, create it with the value of an empty list and
345 return that.
346 """
347
348 for node in branch:
349 if not isinstance(node, dict):
350 continue
351
352 if key in node:
353 return node[key]
354
355 new_branch = []
356 node = {key: new_branch}
357 branch.append(node)
358 return new_branch
359
360
361 def nest_paths(paths):
362 """
363 Given a list of paths, convert them into a nested structure that will match
364 the pages config.
365 """
366 nested = []
367
368 for path in paths:
369
370 if os.path.sep not in path:
371 nested.append(path)
372 continue
373
374 directory, _ = os.path.split(path)
375 parts = directory.split(os.path.sep)
376
377 branch = nested
378 for part in parts:
379 part = dirname_to_title(part)
380 branch = find_or_create_node(branch, part)
381
382 branch.append(path)
383
384 return nested
385
386
387 class WarningFilter(logging.Filter):
388 """ Counts all WARNING level log messages. """
389 count = 0
390
391 def filter(self, record):
392 if record.levelno == logging.WARNING:
393 self.count += 1
394 return True
395
396
397 # A global instance to use throughout package
398 warning_filter = WarningFilter()
399
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mkdocs/utils/__init__.py b/mkdocs/utils/__init__.py
--- a/mkdocs/utils/__init__.py
+++ b/mkdocs/utils/__init__.py
@@ -243,7 +243,7 @@
path = path_to_url(path or '.')
# Allow links to be fully qualified URL's
parsed = urlparse(path)
- if parsed.scheme or parsed.netloc or path.startswith('/'):
+ if parsed.scheme or parsed.netloc or path.startswith(('/', '#')):
return path
# We must be looking at a local path.
| {"golden_diff": "diff --git a/mkdocs/utils/__init__.py b/mkdocs/utils/__init__.py\n--- a/mkdocs/utils/__init__.py\n+++ b/mkdocs/utils/__init__.py\n@@ -243,7 +243,7 @@\n path = path_to_url(path or '.')\n # Allow links to be fully qualified URL's\n parsed = urlparse(path)\n- if parsed.scheme or parsed.netloc or path.startswith('/'):\n+ if parsed.scheme or parsed.netloc or path.startswith(('/', '#')):\n return path\n \n # We must be looking at a local path.\n", "issue": "url filter gives the wrong path to local anchor links\nWhen passed \"#internal-anchor\" on a page other than the top-level index, the `url` filter returns a relative path to the top-level index.\r\n\r\nLike, if I have a page `about.md` and the page.url is `about/`; if I pass \"#internal-anchor\" to the `url` filter, I get `../#internal-anchor`.\r\n\r\nThe `url` filter should not modify those URLs that are internal to the current page.\r\n\r\n(I suffer from this problem when using the material theme; in 3.0.4, in the \"Skip to content\" link, it passes a toc item's url to the `url` filter, which breaks it on every page except the top-level index. HTMLProofer complains about the broken link.)\n", "before_files": [{"content": "# coding: utf-8\n\n\"\"\"\nStandalone file utils.\n\nNothing in this module should have an knowledge of config or the layout\nand structure of the site and pages in the site.\n\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport logging\nimport os\nimport pkg_resources\nimport shutil\nimport re\nimport sys\nimport yaml\nimport fnmatch\nimport posixpath\n\nfrom mkdocs import exceptions\n\ntry: # pragma: no cover\n from urllib.parse import urlparse, urlunparse, urljoin # noqa\n from collections import UserDict # noqa\nexcept ImportError: # pragma: no cover\n from urlparse import urlparse, urlunparse, urljoin # noqa\n from UserDict import UserDict # noqa\n\n\nPY3 = sys.version_info[0] == 3\n\nif PY3: # pragma: no cover\n string_types = str, # noqa\n text_type = str # noqa\nelse: # pragma: no cover\n string_types = basestring, # noqa\n text_type = unicode # noqa\n\nlog = logging.getLogger(__name__)\n\nmarkdown_extensions = [\n '.markdown',\n '.mdown',\n '.mkdn',\n '.mkd',\n '.md'\n]\n\n\ndef yaml_load(source, loader=yaml.Loader):\n \"\"\"\n Wrap PyYaml's loader so we can extend it to suit our needs.\n\n Load all strings as unicode.\n https://stackoverflow.com/a/2967461/3609487\n \"\"\"\n\n def construct_yaml_str(self, node):\n \"\"\"\n Override the default string handling function to always return\n unicode objects.\n \"\"\"\n return self.construct_scalar(node)\n\n class Loader(loader):\n \"\"\"\n Define a custom loader derived from the global loader to leave the\n global loader unaltered.\n \"\"\"\n\n # Attach our unicode constructor to our custom loader ensuring all strings\n # will be unicode on translation.\n Loader.add_constructor('tag:yaml.org,2002:str', construct_yaml_str)\n\n try:\n return yaml.load(source, Loader)\n finally:\n # TODO: Remove this when external calls are properly cleaning up file\n # objects. Some mkdocs internal calls, sometimes in test lib, will\n # load configs with a file object but never close it. On some\n # systems, if a delete action is performed on that file without Python\n # closing that object, there will be an access error. 
This will\n # process the file and close it as there should be no more use for the\n # file once we process the yaml content.\n if hasattr(source, 'close'):\n source.close()\n\n\ndef modified_time(file_path):\n \"\"\"\n Return the modified time of the supplied file. If the file does not exists zero is returned.\n see build_pages for use.\n \"\"\"\n if os.path.exists(file_path):\n return os.path.getmtime(file_path)\n else:\n return 0.0\n\n\ndef reduce_list(data_set):\n \"\"\" Reduce duplicate items in a list and preserve order \"\"\"\n seen = set()\n return [item for item in data_set if\n item not in seen and not seen.add(item)]\n\n\ndef copy_file(source_path, output_path):\n \"\"\"\n Copy source_path to output_path, making sure any parent directories exist.\n\n The output_path may be a directory.\n \"\"\"\n output_dir = os.path.dirname(output_path)\n if not os.path.exists(output_dir):\n os.makedirs(output_dir)\n if os.path.isdir(output_path):\n output_path = os.path.join(output_path, os.path.basename(source_path))\n shutil.copyfile(source_path, output_path)\n\n\ndef write_file(content, output_path):\n \"\"\"\n Write content to output_path, making sure any parent directories exist.\n \"\"\"\n output_dir = os.path.dirname(output_path)\n if not os.path.exists(output_dir):\n os.makedirs(output_dir)\n with open(output_path, 'wb') as f:\n f.write(content)\n\n\ndef clean_directory(directory):\n \"\"\"\n Remove the content of a directory recursively but not the directory itself.\n \"\"\"\n if not os.path.exists(directory):\n return\n\n for entry in os.listdir(directory):\n\n # Don't remove hidden files from the directory. We never copy files\n # that are hidden, so we shouldn't delete them either.\n if entry.startswith('.'):\n continue\n\n path = os.path.join(directory, entry)\n if os.path.isdir(path):\n shutil.rmtree(path, True)\n else:\n os.unlink(path)\n\n\ndef get_html_path(path):\n \"\"\"\n Map a source file path to an output html path.\n\n Paths like 'index.md' will be converted to 'index.html'\n Paths like 'about.md' will be converted to 'about/index.html'\n Paths like 'api-guide/core.md' will be converted to 'api-guide/core/index.html'\n \"\"\"\n path = os.path.splitext(path)[0]\n if os.path.basename(path) == 'index':\n return path + '.html'\n return \"/\".join((path, 'index.html'))\n\n\ndef get_url_path(path, use_directory_urls=True):\n \"\"\"\n Map a source file path to an output html path.\n\n Paths like 'index.md' will be converted to '/'\n Paths like 'about.md' will be converted to '/about/'\n Paths like 'api-guide/core.md' will be converted to '/api-guide/core/'\n\n If `use_directory_urls` is `False`, returned URLs will include the a trailing\n `index.html` rather than just returning the directory path.\n \"\"\"\n path = get_html_path(path)\n url = '/' + path.replace(os.path.sep, '/')\n if use_directory_urls:\n return url[:-len('index.html')]\n return url\n\n\ndef is_markdown_file(path):\n \"\"\"\n Return True if the given file path is a Markdown file.\n\n https://superuser.com/questions/249436/file-extension-for-markdown-files\n \"\"\"\n return any(fnmatch.fnmatch(path.lower(), '*{0}'.format(x)) for x in markdown_extensions)\n\n\ndef is_html_file(path):\n \"\"\"\n Return True if the given file path is an HTML file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.html',\n '.htm',\n ]\n\n\ndef is_template_file(path):\n \"\"\"\n Return True if the given file path is an HTML file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.html',\n 
'.htm',\n '.xml',\n ]\n\n\n_ERROR_TEMPLATE_RE = re.compile(r'^\\d{3}\\.html?$')\n\n\ndef is_error_template(path):\n \"\"\"\n Return True if the given file path is an HTTP error template.\n \"\"\"\n return bool(_ERROR_TEMPLATE_RE.match(path))\n\n\ndef get_relative_url(url, other):\n \"\"\"\n Return given url relative to other.\n \"\"\"\n if other != '.':\n # Remove filename from other url if it has one.\n parts = posixpath.split(other)\n other = parts[0] if '.' in parts[1] else other\n relurl = posixpath.relpath(url, other)\n return relurl + '/' if url.endswith('/') else relurl\n\n\ndef normalize_url(path, page=None, base=''):\n \"\"\" Return a URL relative to the given page or using the base. \"\"\"\n path = path_to_url(path or '.')\n # Allow links to be fully qualified URL's\n parsed = urlparse(path)\n if parsed.scheme or parsed.netloc or path.startswith('/'):\n return path\n\n # We must be looking at a local path.\n if page is not None:\n return get_relative_url(path, page.url)\n else:\n return posixpath.join(base, path)\n\n\ndef create_media_urls(path_list, page=None, base=''):\n \"\"\"\n Return a list of URLs relative to the given page or using the base.\n \"\"\"\n urls = []\n\n for path in path_list:\n urls.append(normalize_url(path, page, base))\n\n return urls\n\n\ndef path_to_url(path):\n \"\"\"Convert a system path to a URL.\"\"\"\n\n return '/'.join(path.split('\\\\'))\n\n\ndef get_theme_dir(name):\n \"\"\" Return the directory of an installed theme by name. \"\"\"\n\n theme = get_themes()[name]\n return os.path.dirname(os.path.abspath(theme.load().__file__))\n\n\ndef get_themes():\n \"\"\" Return a dict of all installed themes as (name, entry point) pairs. \"\"\"\n\n themes = {}\n builtins = pkg_resources.get_entry_map(dist='mkdocs', group='mkdocs.themes')\n\n for theme in pkg_resources.iter_entry_points(group='mkdocs.themes'):\n\n if theme.name in builtins and theme.dist.key != 'mkdocs':\n raise exceptions.ConfigurationError(\n \"The theme {0} is a builtin theme but {1} provides a theme \"\n \"with the same name\".format(theme.name, theme.dist.key))\n\n elif theme.name in themes:\n multiple_packages = [themes[theme.name].dist.key, theme.dist.key]\n log.warning(\"The theme %s is provided by the Python packages \"\n \"'%s'. The one in %s will be used.\",\n theme.name, ','.join(multiple_packages), theme.dist.key)\n\n themes[theme.name] = theme\n\n return themes\n\n\ndef get_theme_names():\n \"\"\"Return a list of all installed themes by name.\"\"\"\n\n return get_themes().keys()\n\n\ndef dirname_to_title(dirname):\n\n title = dirname\n title = title.replace('-', ' ').replace('_', ' ')\n # Capitalize if the dirname was all lowercase, otherwise leave it as-is.\n if title.lower() == title:\n title = title.capitalize()\n\n return title\n\n\ndef get_markdown_title(markdown_src):\n \"\"\"\n Get the title of a Markdown document. The title in this case is considered\n to be a H1 that occurs before any other content in the document.\n The procedure is then to iterate through the lines, stopping at the first\n non-whitespace content. If it is a title, return that, otherwise return\n None.\n \"\"\"\n\n lines = markdown_src.replace('\\r\\n', '\\n').replace('\\r', '\\n').split('\\n')\n while lines:\n line = lines.pop(0).strip()\n if not line.strip():\n continue\n if not line.startswith('# '):\n return\n return line.lstrip('# ')\n\n\ndef find_or_create_node(branch, key):\n \"\"\"\n Given a list, look for dictionary with a key matching key and return it's\n value. 
If it doesn't exist, create it with the value of an empty list and\n return that.\n \"\"\"\n\n for node in branch:\n if not isinstance(node, dict):\n continue\n\n if key in node:\n return node[key]\n\n new_branch = []\n node = {key: new_branch}\n branch.append(node)\n return new_branch\n\n\ndef nest_paths(paths):\n \"\"\"\n Given a list of paths, convert them into a nested structure that will match\n the pages config.\n \"\"\"\n nested = []\n\n for path in paths:\n\n if os.path.sep not in path:\n nested.append(path)\n continue\n\n directory, _ = os.path.split(path)\n parts = directory.split(os.path.sep)\n\n branch = nested\n for part in parts:\n part = dirname_to_title(part)\n branch = find_or_create_node(branch, part)\n\n branch.append(path)\n\n return nested\n\n\nclass WarningFilter(logging.Filter):\n \"\"\" Counts all WARNING level log messages. \"\"\"\n count = 0\n\n def filter(self, record):\n if record.levelno == logging.WARNING:\n self.count += 1\n return True\n\n\n# A global instance to use throughout package\nwarning_filter = WarningFilter()\n", "path": "mkdocs/utils/__init__.py"}], "after_files": [{"content": "# coding: utf-8\n\n\"\"\"\nStandalone file utils.\n\nNothing in this module should have an knowledge of config or the layout\nand structure of the site and pages in the site.\n\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport logging\nimport os\nimport pkg_resources\nimport shutil\nimport re\nimport sys\nimport yaml\nimport fnmatch\nimport posixpath\n\nfrom mkdocs import exceptions\n\ntry: # pragma: no cover\n from urllib.parse import urlparse, urlunparse, urljoin # noqa\n from collections import UserDict # noqa\nexcept ImportError: # pragma: no cover\n from urlparse import urlparse, urlunparse, urljoin # noqa\n from UserDict import UserDict # noqa\n\n\nPY3 = sys.version_info[0] == 3\n\nif PY3: # pragma: no cover\n string_types = str, # noqa\n text_type = str # noqa\nelse: # pragma: no cover\n string_types = basestring, # noqa\n text_type = unicode # noqa\n\nlog = logging.getLogger(__name__)\n\nmarkdown_extensions = [\n '.markdown',\n '.mdown',\n '.mkdn',\n '.mkd',\n '.md'\n]\n\n\ndef yaml_load(source, loader=yaml.Loader):\n \"\"\"\n Wrap PyYaml's loader so we can extend it to suit our needs.\n\n Load all strings as unicode.\n https://stackoverflow.com/a/2967461/3609487\n \"\"\"\n\n def construct_yaml_str(self, node):\n \"\"\"\n Override the default string handling function to always return\n unicode objects.\n \"\"\"\n return self.construct_scalar(node)\n\n class Loader(loader):\n \"\"\"\n Define a custom loader derived from the global loader to leave the\n global loader unaltered.\n \"\"\"\n\n # Attach our unicode constructor to our custom loader ensuring all strings\n # will be unicode on translation.\n Loader.add_constructor('tag:yaml.org,2002:str', construct_yaml_str)\n\n try:\n return yaml.load(source, Loader)\n finally:\n # TODO: Remove this when external calls are properly cleaning up file\n # objects. Some mkdocs internal calls, sometimes in test lib, will\n # load configs with a file object but never close it. On some\n # systems, if a delete action is performed on that file without Python\n # closing that object, there will be an access error. This will\n # process the file and close it as there should be no more use for the\n # file once we process the yaml content.\n if hasattr(source, 'close'):\n source.close()\n\n\ndef modified_time(file_path):\n \"\"\"\n Return the modified time of the supplied file. 
If the file does not exists zero is returned.\n see build_pages for use.\n \"\"\"\n if os.path.exists(file_path):\n return os.path.getmtime(file_path)\n else:\n return 0.0\n\n\ndef reduce_list(data_set):\n \"\"\" Reduce duplicate items in a list and preserve order \"\"\"\n seen = set()\n return [item for item in data_set if\n item not in seen and not seen.add(item)]\n\n\ndef copy_file(source_path, output_path):\n \"\"\"\n Copy source_path to output_path, making sure any parent directories exist.\n\n The output_path may be a directory.\n \"\"\"\n output_dir = os.path.dirname(output_path)\n if not os.path.exists(output_dir):\n os.makedirs(output_dir)\n if os.path.isdir(output_path):\n output_path = os.path.join(output_path, os.path.basename(source_path))\n shutil.copyfile(source_path, output_path)\n\n\ndef write_file(content, output_path):\n \"\"\"\n Write content to output_path, making sure any parent directories exist.\n \"\"\"\n output_dir = os.path.dirname(output_path)\n if not os.path.exists(output_dir):\n os.makedirs(output_dir)\n with open(output_path, 'wb') as f:\n f.write(content)\n\n\ndef clean_directory(directory):\n \"\"\"\n Remove the content of a directory recursively but not the directory itself.\n \"\"\"\n if not os.path.exists(directory):\n return\n\n for entry in os.listdir(directory):\n\n # Don't remove hidden files from the directory. We never copy files\n # that are hidden, so we shouldn't delete them either.\n if entry.startswith('.'):\n continue\n\n path = os.path.join(directory, entry)\n if os.path.isdir(path):\n shutil.rmtree(path, True)\n else:\n os.unlink(path)\n\n\ndef get_html_path(path):\n \"\"\"\n Map a source file path to an output html path.\n\n Paths like 'index.md' will be converted to 'index.html'\n Paths like 'about.md' will be converted to 'about/index.html'\n Paths like 'api-guide/core.md' will be converted to 'api-guide/core/index.html'\n \"\"\"\n path = os.path.splitext(path)[0]\n if os.path.basename(path) == 'index':\n return path + '.html'\n return \"/\".join((path, 'index.html'))\n\n\ndef get_url_path(path, use_directory_urls=True):\n \"\"\"\n Map a source file path to an output html path.\n\n Paths like 'index.md' will be converted to '/'\n Paths like 'about.md' will be converted to '/about/'\n Paths like 'api-guide/core.md' will be converted to '/api-guide/core/'\n\n If `use_directory_urls` is `False`, returned URLs will include the a trailing\n `index.html` rather than just returning the directory path.\n \"\"\"\n path = get_html_path(path)\n url = '/' + path.replace(os.path.sep, '/')\n if use_directory_urls:\n return url[:-len('index.html')]\n return url\n\n\ndef is_markdown_file(path):\n \"\"\"\n Return True if the given file path is a Markdown file.\n\n https://superuser.com/questions/249436/file-extension-for-markdown-files\n \"\"\"\n return any(fnmatch.fnmatch(path.lower(), '*{0}'.format(x)) for x in markdown_extensions)\n\n\ndef is_html_file(path):\n \"\"\"\n Return True if the given file path is an HTML file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.html',\n '.htm',\n ]\n\n\ndef is_template_file(path):\n \"\"\"\n Return True if the given file path is an HTML file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.html',\n '.htm',\n '.xml',\n ]\n\n\n_ERROR_TEMPLATE_RE = re.compile(r'^\\d{3}\\.html?$')\n\n\ndef is_error_template(path):\n \"\"\"\n Return True if the given file path is an HTTP error template.\n \"\"\"\n return bool(_ERROR_TEMPLATE_RE.match(path))\n\n\ndef 
get_relative_url(url, other):\n \"\"\"\n Return given url relative to other.\n \"\"\"\n if other != '.':\n # Remove filename from other url if it has one.\n parts = posixpath.split(other)\n other = parts[0] if '.' in parts[1] else other\n relurl = posixpath.relpath(url, other)\n return relurl + '/' if url.endswith('/') else relurl\n\n\ndef normalize_url(path, page=None, base=''):\n \"\"\" Return a URL relative to the given page or using the base. \"\"\"\n path = path_to_url(path or '.')\n # Allow links to be fully qualified URL's\n parsed = urlparse(path)\n if parsed.scheme or parsed.netloc or path.startswith(('/', '#')):\n return path\n\n # We must be looking at a local path.\n if page is not None:\n return get_relative_url(path, page.url)\n else:\n return posixpath.join(base, path)\n\n\ndef create_media_urls(path_list, page=None, base=''):\n \"\"\"\n Return a list of URLs relative to the given page or using the base.\n \"\"\"\n urls = []\n\n for path in path_list:\n urls.append(normalize_url(path, page, base))\n\n return urls\n\n\ndef path_to_url(path):\n \"\"\"Convert a system path to a URL.\"\"\"\n\n return '/'.join(path.split('\\\\'))\n\n\ndef get_theme_dir(name):\n \"\"\" Return the directory of an installed theme by name. \"\"\"\n\n theme = get_themes()[name]\n return os.path.dirname(os.path.abspath(theme.load().__file__))\n\n\ndef get_themes():\n \"\"\" Return a dict of all installed themes as (name, entry point) pairs. \"\"\"\n\n themes = {}\n builtins = pkg_resources.get_entry_map(dist='mkdocs', group='mkdocs.themes')\n\n for theme in pkg_resources.iter_entry_points(group='mkdocs.themes'):\n\n if theme.name in builtins and theme.dist.key != 'mkdocs':\n raise exceptions.ConfigurationError(\n \"The theme {0} is a builtin theme but {1} provides a theme \"\n \"with the same name\".format(theme.name, theme.dist.key))\n\n elif theme.name in themes:\n multiple_packages = [themes[theme.name].dist.key, theme.dist.key]\n log.warning(\"The theme %s is provided by the Python packages \"\n \"'%s'. The one in %s will be used.\",\n theme.name, ','.join(multiple_packages), theme.dist.key)\n\n themes[theme.name] = theme\n\n return themes\n\n\ndef get_theme_names():\n \"\"\"Return a list of all installed themes by name.\"\"\"\n\n return get_themes().keys()\n\n\ndef dirname_to_title(dirname):\n\n title = dirname\n title = title.replace('-', ' ').replace('_', ' ')\n # Capitalize if the dirname was all lowercase, otherwise leave it as-is.\n if title.lower() == title:\n title = title.capitalize()\n\n return title\n\n\ndef get_markdown_title(markdown_src):\n \"\"\"\n Get the title of a Markdown document. The title in this case is considered\n to be a H1 that occurs before any other content in the document.\n The procedure is then to iterate through the lines, stopping at the first\n non-whitespace content. If it is a title, return that, otherwise return\n None.\n \"\"\"\n\n lines = markdown_src.replace('\\r\\n', '\\n').replace('\\r', '\\n').split('\\n')\n while lines:\n line = lines.pop(0).strip()\n if not line.strip():\n continue\n if not line.startswith('# '):\n return\n return line.lstrip('# ')\n\n\ndef find_or_create_node(branch, key):\n \"\"\"\n Given a list, look for dictionary with a key matching key and return it's\n value. 
If it doesn't exist, create it with the value of an empty list and\n return that.\n \"\"\"\n\n for node in branch:\n if not isinstance(node, dict):\n continue\n\n if key in node:\n return node[key]\n\n new_branch = []\n node = {key: new_branch}\n branch.append(node)\n return new_branch\n\n\ndef nest_paths(paths):\n \"\"\"\n Given a list of paths, convert them into a nested structure that will match\n the pages config.\n \"\"\"\n nested = []\n\n for path in paths:\n\n if os.path.sep not in path:\n nested.append(path)\n continue\n\n directory, _ = os.path.split(path)\n parts = directory.split(os.path.sep)\n\n branch = nested\n for part in parts:\n part = dirname_to_title(part)\n branch = find_or_create_node(branch, part)\n\n branch.append(path)\n\n return nested\n\n\nclass WarningFilter(logging.Filter):\n \"\"\" Counts all WARNING level log messages. \"\"\"\n count = 0\n\n def filter(self, record):\n if record.levelno == logging.WARNING:\n self.count += 1\n return True\n\n\n# A global instance to use throughout package\nwarning_filter = WarningFilter()\n", "path": "mkdocs/utils/__init__.py"}]} | 4,093 | 130 |
gh_patches_debug_9241 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1066 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
IoT Topic Rule fails with E1029
*cfn-lint version: 0.22.4*
*Description of issue.*
When using AWS IoT substitution templates (in my case, for IoT SQL functions) within CloudFormation, it is necessary to use the dollar sign and curly braces (for example, `${topic()}`). This gets misinterpreted as an Fn::Sub parameter, which throws an E1029 error.
Please provide as much information as possible:
* Template linting issues:
* Please provide a CloudFormation sample that generated the issue.
```yaml
IotTopicRule:
Type: AWS::IoT::TopicRule
Properties:
RuleName: IotTopicRule
TopicRulePayload:
RuleDisabled: false
Sql: !Sub "SELECT * FROM 'some-topic'"
Actions:
-
Kinesis:
RoleArn: !Sub '${topicRole.Arn}'
StreamName: !Ref MyKinesisStream
PartitionKey: "${topic()}" # error happens here
```
* If present, please add links to the (official) documentation for clarification.
AWS IoT substitution templates are explained here: https://docs.aws.amazon.com/iot/latest/developerguide/iot-substitution-templates.html
How !Sub uses variables (which `cfn-lint` looks for) is found here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html#w2ab1c21c24c59b7
* Validate if the issue still exists with the latest version of `cfn-lint` and/or the latest Spec files
Yes
--- END ISSUE ---
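To see why the rule trips on this template, here is a minimal standalone sketch (the two regexes are copied verbatim from `SubNeeded.py` in the FILES section below; everything else is illustrative): the generic `${...}` pattern matches the IoT substitution template `${topic()}` exactly as it matches a genuine `Fn::Sub` variable, so the rule can only tell them apart through an explicit exclusion.

```python
import re

# Regexes as they appear in SubNeeded.py (see the FILES section below).
parameter_search = re.compile(r'^.*(\$\{.*\}.*(\$\{.*\}.*)*)$')
variable_regex = re.compile(r'(\$\{.*?\.?.*?})')

for value in ("${topic()}", "${topicRole.Arn}"):
    if parameter_search.match(value):
        # Both strings match: by syntax alone, an IoT substitution template
        # is indistinguishable from an Fn::Sub variable.
        print(value, "->", re.findall(variable_regex, value))
```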
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/functions/SubNeeded.py`
Content:
```
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 import re
18 from cfnlint import CloudFormationLintRule
19 from cfnlint import RuleMatch
20
21 class SubNeeded(CloudFormationLintRule):
22 """Check if a substitution string exists without a substitution function"""
23 id = 'E1029'
24 shortdesc = 'Sub is required if a variable is used in a string'
25 description = 'If a substitution variable exists in a string but isn\'t wrapped with the Fn::Sub function the deployment will fail.'
26 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'
27 tags = ['functions', 'sub']
28
29 # Free-form text properties to exclude from this rule
30 # content is part of AWS::CloudFormation::Init
31 excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init', 'CloudWatchAlarmDefinition']
32 api_excludes = ['Uri', 'Body']
33
34 # IAM Policy has special variables that don't require !Sub, Check for these
35 # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html
36 # https://docs.aws.amazon.com/iot/latest/developerguide/basic-policy-variables.html
37 # https://docs.aws.amazon.com/iot/latest/developerguide/thing-policy-variables.html
38 # https://docs.aws.amazon.com/transfer/latest/userguide/users.html#users-policies-scope-down
39 resource_excludes = ['${aws:CurrentTime}', '${aws:EpochTime}', '${aws:TokenIssueTime}', '${aws:principaltype}',
40 '${aws:SecureTransport}', '${aws:SourceIp}', '${aws:UserAgent}', '${aws:userid}',
41 '${aws:username}', '${ec2:SourceInstanceARN}',
42 '${iot:Connection.Thing.ThingName}', '${iot:Connection.Thing.ThingTypeName}',
43 '${iot:Connection.Thing.IsAttached}', '${iot:ClientId}', '${transfer:HomeBucket}',
44 '${transfer:HomeDirectory}', '${transfer:HomeFolder}', '${transfer:UserName}']
45
46 def _match_values(self, searchRegex, cfnelem, path):
47 """Recursively search for values matching the searchRegex"""
48 values = []
49 if isinstance(cfnelem, dict):
50 for key in cfnelem:
51 pathprop = path[:]
52 pathprop.append(key)
53 values.extend(self._match_values(searchRegex, cfnelem[key], pathprop))
54 elif isinstance(cfnelem, list):
55 for index, item in enumerate(cfnelem):
56 pathprop = path[:]
57 pathprop.append(index)
58 values.extend(self._match_values(searchRegex, item, pathprop))
59 else:
60 # Leaf node
61 if isinstance(cfnelem, str) and re.match(searchRegex, cfnelem):
62 # Get all variables as seperate paths
63 regex = re.compile(r'(\$\{.*?\.?.*?})')
64 for variable in re.findall(regex, cfnelem):
65 values.append(path + [variable])
66
67 return values
68
69 def match_values(self, searchRegex, cfn):
70 """
71 Search for values in all parts of the templates that match the searchRegex
72 """
73 results = []
74 results.extend(self._match_values(searchRegex, cfn.template, []))
75 # Globals are removed during a transform. They need to be checked manually
76 results.extend(self._match_values(searchRegex, cfn.template.get('Globals', {}), []))
77 return results
78
79 def _api_exceptions(self, value):
80 """ Key value exceptions """
81 parameter_search = re.compile(r'^\$\{stageVariables\..*\}$')
82 return re.match(parameter_search, value)
83
84 def match(self, cfn):
85 """Basic Rule Matching"""
86
87 matches = []
88
89 # Generic regex to match a string containing at least one ${parameter}
90 parameter_search = re.compile(r'^.*(\$\{.*\}.*(\$\{.*\}.*)*)$')
91
92 # Get a list of paths to every leaf node string containing at least one ${parameter}
93 parameter_string_paths = self.match_values(parameter_search, cfn)
94
95 # We want to search all of the paths to check if each one contains an 'Fn::Sub'
96 for parameter_string_path in parameter_string_paths:
97
98 # Exxclude the special IAM variables
99 variable = parameter_string_path[-1]
100
101 if 'Resource' in parameter_string_path:
102 if variable in self.resource_excludes:
103 continue
104
105 # Exclude literals (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html)
106 if variable.startswith('${!'):
107 continue
108
109 found_sub = False
110 # Does the path contain an 'Fn::Sub'?
111 for step in parameter_string_path:
112 if step in self.api_excludes:
113 if self._api_exceptions(parameter_string_path[-1]):
114 found_sub = True
115 elif step == 'Fn::Sub' or step in self.excludes:
116 found_sub = True
117
118 # If we didn't find an 'Fn::Sub' it means a string containing a ${parameter} may not be evaluated correctly
119 if not found_sub:
120 # Remove the last item (the variable) to prevent multiple errors on 1 line errors
121 path = parameter_string_path[:-1]
122 message = 'Found an embedded parameter outside of an "Fn::Sub" at {}'.format('/'.join(map(str, path)))
123 matches.append(RuleMatch(path, message))
124
125 return matches
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/functions/SubNeeded.py b/src/cfnlint/rules/functions/SubNeeded.py
--- a/src/cfnlint/rules/functions/SubNeeded.py
+++ b/src/cfnlint/rules/functions/SubNeeded.py
@@ -28,7 +28,7 @@
# Free-form text properties to exclude from this rule
# content is part of AWS::CloudFormation::Init
- excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init', 'CloudWatchAlarmDefinition']
+ excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init', 'CloudWatchAlarmDefinition', 'TopicRulePayload']
api_excludes = ['Uri', 'Body']
# IAM Policy has special variables that don't require !Sub, Check for these
| {"golden_diff": "diff --git a/src/cfnlint/rules/functions/SubNeeded.py b/src/cfnlint/rules/functions/SubNeeded.py\n--- a/src/cfnlint/rules/functions/SubNeeded.py\n+++ b/src/cfnlint/rules/functions/SubNeeded.py\n@@ -28,7 +28,7 @@\n \n # Free-form text properties to exclude from this rule\n # content is part of AWS::CloudFormation::Init\n- excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init', 'CloudWatchAlarmDefinition']\n+ excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init', 'CloudWatchAlarmDefinition', 'TopicRulePayload']\n api_excludes = ['Uri', 'Body']\n \n # IAM Policy has special variables that don't require !Sub, Check for these\n", "issue": "IoT Topic Rule fails with E1029\n*cfn-lint version: 0.22.4*\r\n\r\n*Description of issue.*\r\n\r\nWhen using AWS IoT substitution templates (in my case, for IoT SQL functions) within Cloud Formation, it is necessary to use the dollar sign and curly braces (For example `${topic()}`). This gets misinterpreted as a Fn::Sub Parameter which throws an E1029 error.\r\n\r\nPlease provide as much information as possible:\r\n* Template linting issues:\r\n * Please provide a CloudFormation sample that generated the issue.\r\n```yaml\r\n IotTopicRule: \r\n Type: AWS::IoT::TopicRule\r\n Properties: \r\n RuleName: IotTopicRule\r\n TopicRulePayload:\r\n RuleDisabled: false\r\n Sql: !Sub \"SELECT * FROM 'some-topic'\"\r\n Actions: \r\n - \r\n Kinesis: \r\n RoleArn: !Sub '${topicRole.Arn}'\r\n StreamName: !Ref MyKinesisStream\r\n PartitionKey: \"${topic()}\" # error happens here\r\n```\r\n * If present, please add links to the (official) documentation for clarification.\r\n\r\nAWS IoT substitution templates are explained here: https://docs.aws.amazon.com/iot/latest/developerguide/iot-substitution-templates.html\r\n\r\nHow !Sub uses variables (which `cfn-lint` looks for) is found here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html#w2ab1c21c24c59b7\r\n\r\n * Validate if the issue still exists with the latest version of `cfn-lint` and/or the latest Spec files\r\n\r\nYes\n", "before_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport re\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\nclass SubNeeded(CloudFormationLintRule):\n \"\"\"Check if a substitution string exists without a substitution function\"\"\"\n id = 'E1029'\n shortdesc = 'Sub is required if a variable is used in a string'\n description = 'If a substitution variable exists in a string but isn\\'t wrapped with the Fn::Sub function the deployment will fail.'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'\n tags = ['functions', 'sub']\n\n # Free-form text properties to exclude from this rule\n # content is part of AWS::CloudFormation::Init\n excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init', 'CloudWatchAlarmDefinition']\n api_excludes = ['Uri', 'Body']\n\n # IAM Policy has special variables that don't require !Sub, Check for these\n # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html\n # https://docs.aws.amazon.com/iot/latest/developerguide/basic-policy-variables.html\n # https://docs.aws.amazon.com/iot/latest/developerguide/thing-policy-variables.html\n # https://docs.aws.amazon.com/transfer/latest/userguide/users.html#users-policies-scope-down\n resource_excludes = ['${aws:CurrentTime}', '${aws:EpochTime}', '${aws:TokenIssueTime}', '${aws:principaltype}',\n '${aws:SecureTransport}', '${aws:SourceIp}', '${aws:UserAgent}', '${aws:userid}',\n '${aws:username}', '${ec2:SourceInstanceARN}',\n '${iot:Connection.Thing.ThingName}', '${iot:Connection.Thing.ThingTypeName}',\n '${iot:Connection.Thing.IsAttached}', '${iot:ClientId}', '${transfer:HomeBucket}',\n '${transfer:HomeDirectory}', '${transfer:HomeFolder}', '${transfer:UserName}']\n\n def _match_values(self, searchRegex, cfnelem, path):\n \"\"\"Recursively search for values matching the searchRegex\"\"\"\n values = []\n if isinstance(cfnelem, dict):\n for key in cfnelem:\n pathprop = path[:]\n pathprop.append(key)\n values.extend(self._match_values(searchRegex, cfnelem[key], pathprop))\n elif isinstance(cfnelem, list):\n for index, item in enumerate(cfnelem):\n pathprop = path[:]\n pathprop.append(index)\n values.extend(self._match_values(searchRegex, item, pathprop))\n else:\n # Leaf node\n if isinstance(cfnelem, str) and re.match(searchRegex, cfnelem):\n # Get all variables as seperate paths\n regex = re.compile(r'(\\$\\{.*?\\.?.*?})')\n for variable in re.findall(regex, cfnelem):\n values.append(path + [variable])\n\n return values\n\n def match_values(self, searchRegex, cfn):\n \"\"\"\n Search for values in all parts of the templates that match the searchRegex\n \"\"\"\n results = []\n results.extend(self._match_values(searchRegex, cfn.template, []))\n # Globals are removed during a transform. 
They need to be checked manually\n results.extend(self._match_values(searchRegex, cfn.template.get('Globals', {}), []))\n return results\n\n def _api_exceptions(self, value):\n \"\"\" Key value exceptions \"\"\"\n parameter_search = re.compile(r'^\\$\\{stageVariables\\..*\\}$')\n return re.match(parameter_search, value)\n\n def match(self, cfn):\n \"\"\"Basic Rule Matching\"\"\"\n\n matches = []\n\n # Generic regex to match a string containing at least one ${parameter}\n parameter_search = re.compile(r'^.*(\\$\\{.*\\}.*(\\$\\{.*\\}.*)*)$')\n\n # Get a list of paths to every leaf node string containing at least one ${parameter}\n parameter_string_paths = self.match_values(parameter_search, cfn)\n\n # We want to search all of the paths to check if each one contains an 'Fn::Sub'\n for parameter_string_path in parameter_string_paths:\n\n # Exxclude the special IAM variables\n variable = parameter_string_path[-1]\n\n if 'Resource' in parameter_string_path:\n if variable in self.resource_excludes:\n continue\n\n # Exclude literals (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html)\n if variable.startswith('${!'):\n continue\n\n found_sub = False\n # Does the path contain an 'Fn::Sub'?\n for step in parameter_string_path:\n if step in self.api_excludes:\n if self._api_exceptions(parameter_string_path[-1]):\n found_sub = True\n elif step == 'Fn::Sub' or step in self.excludes:\n found_sub = True\n\n # If we didn't find an 'Fn::Sub' it means a string containing a ${parameter} may not be evaluated correctly\n if not found_sub:\n # Remove the last item (the variable) to prevent multiple errors on 1 line errors\n path = parameter_string_path[:-1]\n message = 'Found an embedded parameter outside of an \"Fn::Sub\" at {}'.format('/'.join(map(str, path)))\n matches.append(RuleMatch(path, message))\n\n return matches\n", "path": "src/cfnlint/rules/functions/SubNeeded.py"}], "after_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport re\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\nclass SubNeeded(CloudFormationLintRule):\n \"\"\"Check if a substitution string exists without a substitution function\"\"\"\n id = 'E1029'\n shortdesc = 'Sub is required if a variable is used in a string'\n description = 'If a substitution variable exists in a string but isn\\'t wrapped with the Fn::Sub function the deployment will fail.'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'\n tags = ['functions', 'sub']\n\n # Free-form text properties to exclude from this rule\n # content is part of AWS::CloudFormation::Init\n excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init', 'CloudWatchAlarmDefinition', 'TopicRulePayload']\n api_excludes = ['Uri', 'Body']\n\n # IAM Policy has special variables that don't require !Sub, Check for these\n # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html\n # https://docs.aws.amazon.com/iot/latest/developerguide/basic-policy-variables.html\n # https://docs.aws.amazon.com/iot/latest/developerguide/thing-policy-variables.html\n # https://docs.aws.amazon.com/transfer/latest/userguide/users.html#users-policies-scope-down\n resource_excludes = ['${aws:CurrentTime}', '${aws:EpochTime}', '${aws:TokenIssueTime}', '${aws:principaltype}',\n '${aws:SecureTransport}', '${aws:SourceIp}', '${aws:UserAgent}', '${aws:userid}',\n '${aws:username}', '${ec2:SourceInstanceARN}',\n '${iot:Connection.Thing.ThingName}', '${iot:Connection.Thing.ThingTypeName}',\n '${iot:Connection.Thing.IsAttached}', '${iot:ClientId}', '${transfer:HomeBucket}',\n '${transfer:HomeDirectory}', '${transfer:HomeFolder}', '${transfer:UserName}']\n\n def _match_values(self, searchRegex, cfnelem, path):\n \"\"\"Recursively search for values matching the searchRegex\"\"\"\n values = []\n if isinstance(cfnelem, dict):\n for key in cfnelem:\n pathprop = path[:]\n pathprop.append(key)\n values.extend(self._match_values(searchRegex, cfnelem[key], pathprop))\n elif isinstance(cfnelem, list):\n for index, item in enumerate(cfnelem):\n pathprop = path[:]\n pathprop.append(index)\n values.extend(self._match_values(searchRegex, item, pathprop))\n else:\n # Leaf node\n if isinstance(cfnelem, str) and re.match(searchRegex, cfnelem):\n # Get all variables as seperate paths\n regex = re.compile(r'(\\$\\{.*?\\.?.*?})')\n for variable in re.findall(regex, cfnelem):\n values.append(path + [variable])\n\n return values\n\n def match_values(self, searchRegex, cfn):\n \"\"\"\n Search for values in all parts of the templates that match the searchRegex\n \"\"\"\n results = []\n results.extend(self._match_values(searchRegex, cfn.template, []))\n # Globals are removed during a transform. 
They need to be checked manually\n results.extend(self._match_values(searchRegex, cfn.template.get('Globals', {}), []))\n return results\n\n def _api_exceptions(self, value):\n \"\"\" Key value exceptions \"\"\"\n parameter_search = re.compile(r'^\\$\\{stageVariables\\..*\\}$')\n return re.match(parameter_search, value)\n\n def match(self, cfn):\n \"\"\"Basic Rule Matching\"\"\"\n\n matches = []\n\n # Generic regex to match a string containing at least one ${parameter}\n parameter_search = re.compile(r'^.*(\\$\\{.*\\}.*(\\$\\{.*\\}.*)*)$')\n\n # Get a list of paths to every leaf node string containing at least one ${parameter}\n parameter_string_paths = self.match_values(parameter_search, cfn)\n\n # We want to search all of the paths to check if each one contains an 'Fn::Sub'\n for parameter_string_path in parameter_string_paths:\n\n # Exxclude the special IAM variables\n variable = parameter_string_path[-1]\n\n if 'Resource' in parameter_string_path:\n if variable in self.resource_excludes:\n continue\n\n # Exclude literals (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html)\n if variable.startswith('${!'):\n continue\n\n found_sub = False\n # Does the path contain an 'Fn::Sub'?\n for step in parameter_string_path:\n if step in self.api_excludes:\n if self._api_exceptions(parameter_string_path[-1]):\n found_sub = True\n elif step == 'Fn::Sub' or step in self.excludes:\n found_sub = True\n\n # If we didn't find an 'Fn::Sub' it means a string containing a ${parameter} may not be evaluated correctly\n if not found_sub:\n # Remove the last item (the variable) to prevent multiple errors on 1 line errors\n path = parameter_string_path[:-1]\n message = 'Found an embedded parameter outside of an \"Fn::Sub\" at {}'.format('/'.join(map(str, path)))\n matches.append(RuleMatch(path, message))\n\n return matches\n", "path": "src/cfnlint/rules/functions/SubNeeded.py"}]} | 2,263 | 177 |
gh_patches_debug_20846 | rasdani/github-patches | git_diff | wagtail__wagtail-1147 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wagtail doesn't gracefully support session invalidation on password change
According to [Django's documentation](https://docs.djangoproject.com/en/1.7/topics/auth/default/#session-invalidation-on-password-change), SessionAuthenticationMiddleware is new in Django 1.7, enabled by default, and will be mandatory in Django 2.0.
Currently, when the middleware is loaded and the user changes their password, they are immediately kicked out to the sign-in screen. The user's session is most likely invalidated. This is very obtrusive, and the user is not informed whether their password was successfully updated. I believe the offending code is in
[account.py](https://github.com/torchbox/wagtail/blob/master/wagtail/wagtailadmin/views/account.py#L26); I attempted to modify the code from the example to make it work, but the outcome was the same:
``` python
# ...
from django.contrib.auth import update_session_auth_hash # new code
# ...
def change_password(request):
can_change_password = request.user.has_usable_password()
if can_change_password:
if request.POST:
form = SetPasswordForm(request.user, request.POST)
if form.is_valid():
form.save()
update_session_auth_hash(request, form.user) # new code
messages.success(request, _("Your password has been changed successfully!"))
return redirect('wagtailadmin_account')
else:
form = SetPasswordForm(request.user)
else:
form = None
return render(request, 'wagtailadmin/account/change_password.html', {
'form': form,
'can_change_password': can_change_password,
})
```
I am currently a Django novice, so that's as far as I was able to get. I hope this is an easy fix!
--- END ISSUE ---
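For context, the snippet in the report follows the pattern Django documents for keeping a session alive after a password change: once the new password is saved, the session's stored password hash has to be rotated via `update_session_auth_hash`, otherwise `SessionAuthenticationMiddleware` treats the session as stale and logs the user out. A minimal generic sketch of that documented pattern (view name, URL name, and template path are placeholders, not Wagtail's actual code):

```python
from django.contrib import messages
from django.contrib.auth import update_session_auth_hash
from django.contrib.auth.forms import SetPasswordForm
from django.shortcuts import redirect, render


def change_password(request):
    if request.method == 'POST':
        form = SetPasswordForm(request.user, request.POST)
        if form.is_valid():
            user = form.save()
            # Rotate the session's password hash so SessionAuthenticationMiddleware
            # does not invalidate the current session and log the user out.
            update_session_auth_hash(request, user)
            messages.success(request, "Your password has been changed successfully!")
            return redirect('account')
    else:
        form = SetPasswordForm(request.user)
    return render(request, 'account/change_password.html', {'form': form})
```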
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/wagtailadmin/views/account.py`
Content:
```
1 from django.conf import settings
2 from django.shortcuts import render, redirect
3 from django.contrib import messages
4 from django.contrib.auth.forms import SetPasswordForm
5 from django.contrib.auth.views import logout as auth_logout, login as auth_login
6 from django.utils.translation import ugettext as _
7 from django.views.decorators.debug import sensitive_post_parameters
8 from django.views.decorators.cache import never_cache
9
10 from wagtail.wagtailadmin import forms
11 from wagtail.wagtailusers.forms import NotificationPreferencesForm
12 from wagtail.wagtailusers.models import UserProfile
13 from wagtail.wagtailcore.models import UserPagePermissionsProxy
14
15
16 def account(request):
17 user_perms = UserPagePermissionsProxy(request.user)
18 show_notification_preferences = user_perms.can_edit_pages() or user_perms.can_publish_pages()
19
20 return render(request, 'wagtailadmin/account/account.html', {
21 'show_change_password': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True) and request.user.has_usable_password(),
22 'show_notification_preferences': show_notification_preferences
23 })
24
25
26 def change_password(request):
27 can_change_password = request.user.has_usable_password()
28
29 if can_change_password:
30 if request.POST:
31 form = SetPasswordForm(request.user, request.POST)
32
33 if form.is_valid():
34 form.save()
35
36 messages.success(request, _("Your password has been changed successfully!"))
37 return redirect('wagtailadmin_account')
38 else:
39 form = SetPasswordForm(request.user)
40 else:
41 form = None
42
43 return render(request, 'wagtailadmin/account/change_password.html', {
44 'form': form,
45 'can_change_password': can_change_password,
46 })
47
48
49 def notification_preferences(request):
50
51 if request.POST:
52 form = NotificationPreferencesForm(request.POST, instance=UserProfile.get_for_user(request.user))
53
54 if form.is_valid():
55 form.save()
56 messages.success(request, _("Your preferences have been updated successfully!"))
57 return redirect('wagtailadmin_account')
58 else:
59 form = NotificationPreferencesForm(instance=UserProfile.get_for_user(request.user))
60
61 # quick-and-dirty catch-all in case the form has been rendered with no
62 # fields, as the user has no customisable permissions
63 if not form.fields:
64 return redirect('wagtailadmin_account')
65
66 return render(request, 'wagtailadmin/account/notification_preferences.html', {
67 'form': form,
68 })
69
70
71 @sensitive_post_parameters()
72 @never_cache
73 def login(request):
74 if request.user.is_authenticated() and request.user.has_perm('wagtailadmin.access_admin'):
75 return redirect('wagtailadmin_home')
76 else:
77 from django.contrib.auth import get_user_model
78 return auth_login(request,
79 template_name='wagtailadmin/login.html',
80 authentication_form=forms.LoginForm,
81 extra_context={
82 'show_password_reset': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True),
83 'username_field': get_user_model().USERNAME_FIELD,
84 },
85 )
86
87
88 def logout(request):
89 response = auth_logout(request, next_page='wagtailadmin_login')
90
91 # By default, logging out will generate a fresh sessionid cookie. We want to use the
92 # absence of sessionid as an indication that front-end pages are being viewed by a
93 # non-logged-in user and are therefore cacheable, so we forcibly delete the cookie here.
94 response.delete_cookie(settings.SESSION_COOKIE_NAME,
95 domain=settings.SESSION_COOKIE_DOMAIN,
96 path=settings.SESSION_COOKIE_PATH)
97
98 # HACK: pretend that the session hasn't been modified, so that SessionMiddleware
99 # won't override the above and write a new cookie.
100 request.session.modified = False
101
102 return response
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/wagtailadmin/views/account.py b/wagtail/wagtailadmin/views/account.py
--- a/wagtail/wagtailadmin/views/account.py
+++ b/wagtail/wagtailadmin/views/account.py
@@ -3,6 +3,7 @@
from django.contrib import messages
from django.contrib.auth.forms import SetPasswordForm
from django.contrib.auth.views import logout as auth_logout, login as auth_login
+from django.contrib.auth import update_session_auth_hash
from django.utils.translation import ugettext as _
from django.views.decorators.debug import sensitive_post_parameters
from django.views.decorators.cache import never_cache
@@ -32,6 +33,7 @@
if form.is_valid():
form.save()
+ update_session_auth_hash(request, form.user)
messages.success(request, _("Your password has been changed successfully!"))
return redirect('wagtailadmin_account')
| {"golden_diff": "diff --git a/wagtail/wagtailadmin/views/account.py b/wagtail/wagtailadmin/views/account.py\n--- a/wagtail/wagtailadmin/views/account.py\n+++ b/wagtail/wagtailadmin/views/account.py\n@@ -3,6 +3,7 @@\n from django.contrib import messages\n from django.contrib.auth.forms import SetPasswordForm\n from django.contrib.auth.views import logout as auth_logout, login as auth_login\n+from django.contrib.auth import update_session_auth_hash\n from django.utils.translation import ugettext as _ \n from django.views.decorators.debug import sensitive_post_parameters\n from django.views.decorators.cache import never_cache\n@@ -32,6 +33,7 @@\n \n if form.is_valid():\n form.save()\n+ update_session_auth_hash(request, form.user)\n \n messages.success(request, _(\"Your password has been changed successfully!\"))\n return redirect('wagtailadmin_account')\n", "issue": "Wagtail doesn't gracefully support session invalidation on password change\nAccording to [Django's documentation](https://docs.djangoproject.com/en/1.7/topics/auth/default/#session-invalidation-on-password-change), SessionAuthenticationMiddleware is new in Django 1.7, enabled by default, and will be mandatory in Django 2.0.\n\nCurrently, when the middleware is loaded and the user changes their password, they are immediately kicked out to the sign in screen. The user's session is most likely invalidated. This is very obtrusive and the user is not informed if their password was successfully updated. I believe the offending code is in\n[account.py](https://github.com/torchbox/wagtail/blob/master/wagtail/wagtailadmin/views/account.py#L26) and attempted to modify the code from the example to make it work, but the outcome was the same:\n\n``` python\n# ...\nfrom django.contrib.auth import update_session_auth_hash # new code\n# ...\ndef change_password(request):\n can_change_password = request.user.has_usable_password()\n\n if can_change_password:\n if request.POST:\n form = SetPasswordForm(request.user, request.POST)\n\n if form.is_valid():\n form.save()\n update_session_auth_hash(request, form.user) # new code\n\n messages.success(request, _(\"Your password has been changed successfully!\"))\n return redirect('wagtailadmin_account')\n else:\n form = SetPasswordForm(request.user)\n else:\n form = None\n\n return render(request, 'wagtailadmin/account/change_password.html', {\n 'form': form,\n 'can_change_password': can_change_password,\n })\n```\n\nI am, currently, a Django novice, so that's as far as I was able to get. 
Hope this is an easy fix!\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.shortcuts import render, redirect\nfrom django.contrib import messages\nfrom django.contrib.auth.forms import SetPasswordForm\nfrom django.contrib.auth.views import logout as auth_logout, login as auth_login\nfrom django.utils.translation import ugettext as _ \nfrom django.views.decorators.debug import sensitive_post_parameters\nfrom django.views.decorators.cache import never_cache\n\nfrom wagtail.wagtailadmin import forms\nfrom wagtail.wagtailusers.forms import NotificationPreferencesForm\nfrom wagtail.wagtailusers.models import UserProfile\nfrom wagtail.wagtailcore.models import UserPagePermissionsProxy\n\n\ndef account(request):\n user_perms = UserPagePermissionsProxy(request.user)\n show_notification_preferences = user_perms.can_edit_pages() or user_perms.can_publish_pages()\n\n return render(request, 'wagtailadmin/account/account.html', {\n 'show_change_password': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True) and request.user.has_usable_password(),\n 'show_notification_preferences': show_notification_preferences\n })\n\n\ndef change_password(request):\n can_change_password = request.user.has_usable_password()\n\n if can_change_password:\n if request.POST:\n form = SetPasswordForm(request.user, request.POST)\n\n if form.is_valid():\n form.save()\n\n messages.success(request, _(\"Your password has been changed successfully!\"))\n return redirect('wagtailadmin_account')\n else:\n form = SetPasswordForm(request.user)\n else:\n form = None\n\n return render(request, 'wagtailadmin/account/change_password.html', {\n 'form': form,\n 'can_change_password': can_change_password,\n })\n\n\ndef notification_preferences(request):\n\n if request.POST:\n form = NotificationPreferencesForm(request.POST, instance=UserProfile.get_for_user(request.user))\n\n if form.is_valid():\n form.save()\n messages.success(request, _(\"Your preferences have been updated successfully!\"))\n return redirect('wagtailadmin_account')\n else:\n form = NotificationPreferencesForm(instance=UserProfile.get_for_user(request.user))\n\n # quick-and-dirty catch-all in case the form has been rendered with no\n # fields, as the user has no customisable permissions\n if not form.fields:\n return redirect('wagtailadmin_account')\n\n return render(request, 'wagtailadmin/account/notification_preferences.html', {\n 'form': form,\n })\n\n\n@sensitive_post_parameters()\n@never_cache\ndef login(request):\n if request.user.is_authenticated() and request.user.has_perm('wagtailadmin.access_admin'):\n return redirect('wagtailadmin_home')\n else:\n from django.contrib.auth import get_user_model\n return auth_login(request,\n template_name='wagtailadmin/login.html',\n authentication_form=forms.LoginForm,\n extra_context={\n 'show_password_reset': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True),\n 'username_field': get_user_model().USERNAME_FIELD,\n },\n )\n\n\ndef logout(request):\n response = auth_logout(request, next_page='wagtailadmin_login')\n\n # By default, logging out will generate a fresh sessionid cookie. 
We want to use the\n # absence of sessionid as an indication that front-end pages are being viewed by a\n # non-logged-in user and are therefore cacheable, so we forcibly delete the cookie here.\n response.delete_cookie(settings.SESSION_COOKIE_NAME,\n domain=settings.SESSION_COOKIE_DOMAIN,\n path=settings.SESSION_COOKIE_PATH)\n\n # HACK: pretend that the session hasn't been modified, so that SessionMiddleware\n # won't override the above and write a new cookie.\n request.session.modified = False\n\n return response\n", "path": "wagtail/wagtailadmin/views/account.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.shortcuts import render, redirect\nfrom django.contrib import messages\nfrom django.contrib.auth.forms import SetPasswordForm\nfrom django.contrib.auth.views import logout as auth_logout, login as auth_login\nfrom django.contrib.auth import update_session_auth_hash\nfrom django.utils.translation import ugettext as _ \nfrom django.views.decorators.debug import sensitive_post_parameters\nfrom django.views.decorators.cache import never_cache\n\nfrom wagtail.wagtailadmin import forms\nfrom wagtail.wagtailusers.forms import NotificationPreferencesForm\nfrom wagtail.wagtailusers.models import UserProfile\nfrom wagtail.wagtailcore.models import UserPagePermissionsProxy\n\n\ndef account(request):\n user_perms = UserPagePermissionsProxy(request.user)\n show_notification_preferences = user_perms.can_edit_pages() or user_perms.can_publish_pages()\n\n return render(request, 'wagtailadmin/account/account.html', {\n 'show_change_password': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True) and request.user.has_usable_password(),\n 'show_notification_preferences': show_notification_preferences\n })\n\n\ndef change_password(request):\n can_change_password = request.user.has_usable_password()\n\n if can_change_password:\n if request.POST:\n form = SetPasswordForm(request.user, request.POST)\n\n if form.is_valid():\n form.save()\n update_session_auth_hash(request, form.user)\n\n messages.success(request, _(\"Your password has been changed successfully!\"))\n return redirect('wagtailadmin_account')\n else:\n form = SetPasswordForm(request.user)\n else:\n form = None\n\n return render(request, 'wagtailadmin/account/change_password.html', {\n 'form': form,\n 'can_change_password': can_change_password,\n })\n\n\ndef notification_preferences(request):\n\n if request.POST:\n form = NotificationPreferencesForm(request.POST, instance=UserProfile.get_for_user(request.user))\n\n if form.is_valid():\n form.save()\n messages.success(request, _(\"Your preferences have been updated successfully!\"))\n return redirect('wagtailadmin_account')\n else:\n form = NotificationPreferencesForm(instance=UserProfile.get_for_user(request.user))\n\n # quick-and-dirty catch-all in case the form has been rendered with no\n # fields, as the user has no customisable permissions\n if not form.fields:\n return redirect('wagtailadmin_account')\n\n return render(request, 'wagtailadmin/account/notification_preferences.html', {\n 'form': form,\n })\n\n\n@sensitive_post_parameters()\n@never_cache\ndef login(request):\n if request.user.is_authenticated() and request.user.has_perm('wagtailadmin.access_admin'):\n return redirect('wagtailadmin_home')\n else:\n from django.contrib.auth import get_user_model\n return auth_login(request,\n template_name='wagtailadmin/login.html',\n authentication_form=forms.LoginForm,\n extra_context={\n 'show_password_reset': getattr(settings, 
'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True),\n 'username_field': get_user_model().USERNAME_FIELD,\n },\n )\n\n\ndef logout(request):\n response = auth_logout(request, next_page='wagtailadmin_login')\n\n # By default, logging out will generate a fresh sessionid cookie. We want to use the\n # absence of sessionid as an indication that front-end pages are being viewed by a\n # non-logged-in user and are therefore cacheable, so we forcibly delete the cookie here.\n response.delete_cookie(settings.SESSION_COOKIE_NAME,\n domain=settings.SESSION_COOKIE_DOMAIN,\n path=settings.SESSION_COOKIE_PATH)\n\n # HACK: pretend that the session hasn't been modified, so that SessionMiddleware\n # won't override the above and write a new cookie.\n request.session.modified = False\n\n return response\n", "path": "wagtail/wagtailadmin/views/account.py"}]} | 1,623 | 193 |
gh_patches_debug_21663 | rasdani/github-patches | git_diff | boto__boto-2489 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
boto.glacier.utils.compute_hashes_from_fileobj No Longer Works with Binary Files
Commit a4c9a781f47a61ddde2b3a3802b93c1ed29cdf16 added a `.encode('utf-8')` to the result of the first read in `boto.glacier.utils.compute_hashes_from_fileobj` (although it doesn't attempt to encode the results of the reads in the loop below that).
This breaks for binary files for me with Python 2.6.6:
```
(glacier)[cperl@localhost ~]$ dd if=/dev/urandom of=/tmp/foo.bin bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.110299 s, 9.5 MB/s
(glacier)[cperl@localhost ~]$ python
Python 2.6.6 (r266:84292, Nov 22 2013, 12:16:22)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import boto.glacier.utils
>>> with open("/tmp/foo.bin", 'r') as f:
... boto.glacier.utils.compute_hashes_from_fileobj(f)
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/home/cperl/virtualenv/glacier/lib/python2.6/site-packages/boto/glacier/utils.py", line 127, in compute_hashes_from_fileobj
chunk = fileobj.read(chunk_size).encode('utf-8')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xf4 in position 0: ordinal not in range(128)
```
--- END ISSUE ---
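To make the failure mode concrete, here is a small boto-free sketch (it reuses the `/tmp/foo.bin` file created in the transcript above; adjust the path as needed): SHA-256 operates on bytes, so reading the file in binary mode and hashing the chunks directly sidesteps the decode step that `.encode('utf-8')` forces on text-mode data.

```python
import hashlib

# Binary mode: read() returns raw bytes, which hashlib accepts directly.
with open('/tmp/foo.bin', 'rb') as f:
    chunk = f.read(1024 * 1024)
    print(hashlib.sha256(chunk).hexdigest())

# Opening the same file in text mode and then calling .encode('utf-8') on the
# result, as the commit referenced above does, forces Python to decode arbitrary
# bytes as text and raises UnicodeDecodeError for most binary content.
```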
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `boto/glacier/utils.py`
Content:
```
1 # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a
4 # copy of this software and associated documentation files (the
5 # "Software"), to deal in the Software without restriction, including
6 # without limitation the rights to use, copy, modify, merge, publish, dis-
7 # tribute, sublicense, and/or sell copies of the Software, and to permit
8 # persons to whom the Software is furnished to do so, subject to the fol-
9 # lowing conditions:
10 #
11 # The above copyright notice and this permission notice shall be included
12 # in all copies or substantial portions of the Software.
13 #
14 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
15 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
16 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
17 # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
18 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
19 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
20 # IN THE SOFTWARE.
21 #
22 import hashlib
23 import math
24 import binascii
25
26
27 _MEGABYTE = 1024 * 1024
28 DEFAULT_PART_SIZE = 4 * _MEGABYTE
29 MAXIMUM_NUMBER_OF_PARTS = 10000
30
31
32 def minimum_part_size(size_in_bytes, default_part_size=DEFAULT_PART_SIZE):
33 """Calculate the minimum part size needed for a multipart upload.
34
35 Glacier allows a maximum of 10,000 parts per upload. It also
36 states that the maximum archive size is 10,000 * 4 GB, which means
37 the part size can range from 1MB to 4GB (provided it is one 1MB
38 multiplied by a power of 2).
39
40 This function will compute what the minimum part size must be in
41 order to upload a file of size ``size_in_bytes``.
42
43 It will first check if ``default_part_size`` is sufficient for
44 a part size given the ``size_in_bytes``. If this is not the case,
45 then the smallest part size than can accomodate a file of size
46 ``size_in_bytes`` will be returned.
47
48 If the file size is greater than the maximum allowed archive
49 size of 10,000 * 4GB, a ``ValueError`` will be raised.
50
51 """
52 # The default part size (4 MB) will be too small for a very large
53 # archive, as there is a limit of 10,000 parts in a multipart upload.
54 # This puts the maximum allowed archive size with the default part size
55 # at 40,000 MB. We need to do a sanity check on the part size, and find
56 # one that works if the default is too small.
57 part_size = _MEGABYTE
58 if (default_part_size * MAXIMUM_NUMBER_OF_PARTS) < size_in_bytes:
59 if size_in_bytes > (4096 * _MEGABYTE * 10000):
60 raise ValueError("File size too large: %s" % size_in_bytes)
61 min_part_size = size_in_bytes / 10000
62 power = 3
63 while part_size < min_part_size:
64 part_size = math.ldexp(_MEGABYTE, power)
65 power += 1
66 part_size = int(part_size)
67 else:
68 part_size = default_part_size
69 return part_size
70
71
72 def chunk_hashes(bytestring, chunk_size=_MEGABYTE):
73 chunk_count = int(math.ceil(len(bytestring) / float(chunk_size)))
74 hashes = []
75 for i in range(chunk_count):
76 start = i * chunk_size
77 end = (i + 1) * chunk_size
78 hashes.append(hashlib.sha256(bytestring[start:end]).digest())
79 if not hashes:
80 return [hashlib.sha256(b'').digest()]
81 return hashes
82
83
84 def tree_hash(fo):
85 """
86 Given a hash of each 1MB chunk (from chunk_hashes) this will hash
87 together adjacent hashes until it ends up with one big one. So a
88 tree of hashes.
89 """
90 hashes = []
91 hashes.extend(fo)
92 while len(hashes) > 1:
93 new_hashes = []
94 while True:
95 if len(hashes) > 1:
96 first = hashes.pop(0)
97 second = hashes.pop(0)
98 new_hashes.append(hashlib.sha256(first + second).digest())
99 elif len(hashes) == 1:
100 only = hashes.pop(0)
101 new_hashes.append(only)
102 else:
103 break
104 hashes.extend(new_hashes)
105 return hashes[0]
106
107
108 def compute_hashes_from_fileobj(fileobj, chunk_size=1024 * 1024):
109 """Compute the linear and tree hash from a fileobj.
110
111 This function will compute the linear/tree hash of a fileobj
112 in a single pass through the fileobj.
113
114 :param fileobj: A file like object.
115
116 :param chunk_size: The size of the chunks to use for the tree
117 hash. This is also the buffer size used to read from
118 `fileobj`.
119
120 :rtype: tuple
121 :return: A tuple of (linear_hash, tree_hash). Both hashes
122 are returned in hex.
123
124 """
125 linear_hash = hashlib.sha256()
126 chunks = []
127 chunk = fileobj.read(chunk_size).encode('utf-8')
128 while chunk:
129 linear_hash.update(chunk)
130 chunks.append(hashlib.sha256(chunk).digest())
131 chunk = fileobj.read(chunk_size)
132 if not chunks:
133 chunks = [hashlib.sha256(b'').digest()]
134 return linear_hash.hexdigest(), bytes_to_hex(tree_hash(chunks))
135
136
137 def bytes_to_hex(str_as_bytes):
138 return binascii.hexlify(str_as_bytes)
139
140
141 def tree_hash_from_str(str_as_bytes):
142 """
143
144 :type str_as_bytes: str
145 :param str_as_bytes: The string for which to compute the tree hash.
146
147 :rtype: str
148 :return: The computed tree hash, returned as hex.
149
150 """
151 return bytes_to_hex(tree_hash(chunk_hashes(str_as_bytes)))
152
153
154 class ResettingFileSender(object):
155 def __init__(self, archive):
156 self._archive = archive
157 self._starting_offset = archive.tell()
158
159 def __call__(self, connection, method, path, body, headers):
160 try:
161 connection.request(method, path, self._archive, headers)
162 return connection.getresponse()
163 finally:
164 self._archive.seek(self._starting_offset)
165
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/boto/glacier/utils.py b/boto/glacier/utils.py
--- a/boto/glacier/utils.py
+++ b/boto/glacier/utils.py
@@ -23,6 +23,8 @@
import math
import binascii
+from boto.compat import six
+
_MEGABYTE = 1024 * 1024
DEFAULT_PART_SIZE = 4 * _MEGABYTE
@@ -122,10 +124,19 @@
are returned in hex.
"""
+ # Python 3+, not binary
+ if six.PY3 and hasattr(fileobj, 'mode') and 'b' not in fileobj.mode:
+ raise ValueError('File-like object must be opened in binary mode!')
+
linear_hash = hashlib.sha256()
chunks = []
- chunk = fileobj.read(chunk_size).encode('utf-8')
+ chunk = fileobj.read(chunk_size)
while chunk:
+ # It's possible to get a file-like object that has no mode (checked
+ # above) and returns something other than bytes (e.g. str). So here
+ # we try to catch that and encode to bytes.
+ if not isinstance(chunk, bytes):
+ chunk = chunk.encode(getattr(fileobj, 'encoding', '') or 'utf-8')
linear_hash.update(chunk)
chunks.append(hashlib.sha256(chunk).digest())
chunk = fileobj.read(chunk_size)
| {"golden_diff": "diff --git a/boto/glacier/utils.py b/boto/glacier/utils.py\n--- a/boto/glacier/utils.py\n+++ b/boto/glacier/utils.py\n@@ -23,6 +23,8 @@\n import math\n import binascii\n \n+from boto.compat import six\n+\n \n _MEGABYTE = 1024 * 1024\n DEFAULT_PART_SIZE = 4 * _MEGABYTE\n@@ -122,10 +124,19 @@\n are returned in hex.\n \n \"\"\"\n+ # Python 3+, not binary\n+ if six.PY3 and hasattr(fileobj, 'mode') and 'b' not in fileobj.mode:\n+ raise ValueError('File-like object must be opened in binary mode!')\n+\n linear_hash = hashlib.sha256()\n chunks = []\n- chunk = fileobj.read(chunk_size).encode('utf-8')\n+ chunk = fileobj.read(chunk_size)\n while chunk:\n+ # It's possible to get a file-like object that has no mode (checked\n+ # above) and returns something other than bytes (e.g. str). So here\n+ # we try to catch that and encode to bytes.\n+ if not isinstance(chunk, bytes):\n+ chunk = chunk.encode(getattr(fileobj, 'encoding', '') or 'utf-8')\n linear_hash.update(chunk)\n chunks.append(hashlib.sha256(chunk).digest())\n chunk = fileobj.read(chunk_size)\n", "issue": "boto.glacier.utils.compute_hashes_from_fileobj No Longer Works with Binary Files\nCommit a4c9a781f47a61ddde2b3a3802b93c1ed29cdf16 added a `.encode('utf-8')` to the result of the first read in `boto.glacier.utils.compute_hashes_from_fileobj` (although it doesn't attempt to encode the results of the reads in the loop below that).\n\nThis breaks for binary files for me with python 2.6.6:\n\n```\n(glacier)[cperl@localhost ~]$ dd if=/dev/urandom of=/tmp/foo.bin bs=1M count=1\n1+0 records in\n1+0 records out\n1048576 bytes (1.0 MB) copied, 0.110299 s, 9.5 MB/s\n\n(glacier)[cperl@localhost ~]$ python\nPython 2.6.6 (r266:84292, Nov 22 2013, 12:16:22) \n[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import boto.glacier.utils\n>>> with open(\"/tmp/foo.bin\", 'r') as f:\n... boto.glacier.utils.compute_hashes_from_fileobj(f)\n... \nTraceback (most recent call last):\n File \"<stdin>\", line 2, in <module>\n File \"/home/cperl/virtualenv/glacier/lib/python2.6/site-packages/boto/glacier/utils.py\", line 127, in compute_hashes_from_fileobj\n chunk = fileobj.read(chunk_size).encode('utf-8')\nUnicodeDecodeError: 'ascii' codec can't decode byte 0xf4 in position 0: ordinal not in range(128)\n```\n\n", "before_files": [{"content": "# Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n#\nimport hashlib\nimport math\nimport binascii\n\n\n_MEGABYTE = 1024 * 1024\nDEFAULT_PART_SIZE = 4 * _MEGABYTE\nMAXIMUM_NUMBER_OF_PARTS = 10000\n\n\ndef minimum_part_size(size_in_bytes, default_part_size=DEFAULT_PART_SIZE):\n \"\"\"Calculate the minimum part size needed for a multipart upload.\n\n Glacier allows a maximum of 10,000 parts per upload. It also\n states that the maximum archive size is 10,000 * 4 GB, which means\n the part size can range from 1MB to 4GB (provided it is one 1MB\n multiplied by a power of 2).\n\n This function will compute what the minimum part size must be in\n order to upload a file of size ``size_in_bytes``.\n\n It will first check if ``default_part_size`` is sufficient for\n a part size given the ``size_in_bytes``. If this is not the case,\n then the smallest part size than can accomodate a file of size\n ``size_in_bytes`` will be returned.\n\n If the file size is greater than the maximum allowed archive\n size of 10,000 * 4GB, a ``ValueError`` will be raised.\n\n \"\"\"\n # The default part size (4 MB) will be too small for a very large\n # archive, as there is a limit of 10,000 parts in a multipart upload.\n # This puts the maximum allowed archive size with the default part size\n # at 40,000 MB. We need to do a sanity check on the part size, and find\n # one that works if the default is too small.\n part_size = _MEGABYTE\n if (default_part_size * MAXIMUM_NUMBER_OF_PARTS) < size_in_bytes:\n if size_in_bytes > (4096 * _MEGABYTE * 10000):\n raise ValueError(\"File size too large: %s\" % size_in_bytes)\n min_part_size = size_in_bytes / 10000\n power = 3\n while part_size < min_part_size:\n part_size = math.ldexp(_MEGABYTE, power)\n power += 1\n part_size = int(part_size)\n else:\n part_size = default_part_size\n return part_size\n\n\ndef chunk_hashes(bytestring, chunk_size=_MEGABYTE):\n chunk_count = int(math.ceil(len(bytestring) / float(chunk_size)))\n hashes = []\n for i in range(chunk_count):\n start = i * chunk_size\n end = (i + 1) * chunk_size\n hashes.append(hashlib.sha256(bytestring[start:end]).digest())\n if not hashes:\n return [hashlib.sha256(b'').digest()]\n return hashes\n\n\ndef tree_hash(fo):\n \"\"\"\n Given a hash of each 1MB chunk (from chunk_hashes) this will hash\n together adjacent hashes until it ends up with one big one. So a\n tree of hashes.\n \"\"\"\n hashes = []\n hashes.extend(fo)\n while len(hashes) > 1:\n new_hashes = []\n while True:\n if len(hashes) > 1:\n first = hashes.pop(0)\n second = hashes.pop(0)\n new_hashes.append(hashlib.sha256(first + second).digest())\n elif len(hashes) == 1:\n only = hashes.pop(0)\n new_hashes.append(only)\n else:\n break\n hashes.extend(new_hashes)\n return hashes[0]\n\n\ndef compute_hashes_from_fileobj(fileobj, chunk_size=1024 * 1024):\n \"\"\"Compute the linear and tree hash from a fileobj.\n\n This function will compute the linear/tree hash of a fileobj\n in a single pass through the fileobj.\n\n :param fileobj: A file like object.\n\n :param chunk_size: The size of the chunks to use for the tree\n hash. This is also the buffer size used to read from\n `fileobj`.\n\n :rtype: tuple\n :return: A tuple of (linear_hash, tree_hash). 
Both hashes\n are returned in hex.\n\n \"\"\"\n linear_hash = hashlib.sha256()\n chunks = []\n chunk = fileobj.read(chunk_size).encode('utf-8')\n while chunk:\n linear_hash.update(chunk)\n chunks.append(hashlib.sha256(chunk).digest())\n chunk = fileobj.read(chunk_size)\n if not chunks:\n chunks = [hashlib.sha256(b'').digest()]\n return linear_hash.hexdigest(), bytes_to_hex(tree_hash(chunks))\n\n\ndef bytes_to_hex(str_as_bytes):\n return binascii.hexlify(str_as_bytes)\n\n\ndef tree_hash_from_str(str_as_bytes):\n \"\"\"\n\n :type str_as_bytes: str\n :param str_as_bytes: The string for which to compute the tree hash.\n\n :rtype: str\n :return: The computed tree hash, returned as hex.\n\n \"\"\"\n return bytes_to_hex(tree_hash(chunk_hashes(str_as_bytes)))\n\n\nclass ResettingFileSender(object):\n def __init__(self, archive):\n self._archive = archive\n self._starting_offset = archive.tell()\n\n def __call__(self, connection, method, path, body, headers):\n try:\n connection.request(method, path, self._archive, headers)\n return connection.getresponse()\n finally:\n self._archive.seek(self._starting_offset)\n", "path": "boto/glacier/utils.py"}], "after_files": [{"content": "# Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n#\nimport hashlib\nimport math\nimport binascii\n\nfrom boto.compat import six\n\n\n_MEGABYTE = 1024 * 1024\nDEFAULT_PART_SIZE = 4 * _MEGABYTE\nMAXIMUM_NUMBER_OF_PARTS = 10000\n\n\ndef minimum_part_size(size_in_bytes, default_part_size=DEFAULT_PART_SIZE):\n \"\"\"Calculate the minimum part size needed for a multipart upload.\n\n Glacier allows a maximum of 10,000 parts per upload. It also\n states that the maximum archive size is 10,000 * 4 GB, which means\n the part size can range from 1MB to 4GB (provided it is one 1MB\n multiplied by a power of 2).\n\n This function will compute what the minimum part size must be in\n order to upload a file of size ``size_in_bytes``.\n\n It will first check if ``default_part_size`` is sufficient for\n a part size given the ``size_in_bytes``. 
If this is not the case,\n then the smallest part size than can accomodate a file of size\n ``size_in_bytes`` will be returned.\n\n If the file size is greater than the maximum allowed archive\n size of 10,000 * 4GB, a ``ValueError`` will be raised.\n\n \"\"\"\n # The default part size (4 MB) will be too small for a very large\n # archive, as there is a limit of 10,000 parts in a multipart upload.\n # This puts the maximum allowed archive size with the default part size\n # at 40,000 MB. We need to do a sanity check on the part size, and find\n # one that works if the default is too small.\n part_size = _MEGABYTE\n if (default_part_size * MAXIMUM_NUMBER_OF_PARTS) < size_in_bytes:\n if size_in_bytes > (4096 * _MEGABYTE * 10000):\n raise ValueError(\"File size too large: %s\" % size_in_bytes)\n min_part_size = size_in_bytes / 10000\n power = 3\n while part_size < min_part_size:\n part_size = math.ldexp(_MEGABYTE, power)\n power += 1\n part_size = int(part_size)\n else:\n part_size = default_part_size\n return part_size\n\n\ndef chunk_hashes(bytestring, chunk_size=_MEGABYTE):\n chunk_count = int(math.ceil(len(bytestring) / float(chunk_size)))\n hashes = []\n for i in range(chunk_count):\n start = i * chunk_size\n end = (i + 1) * chunk_size\n hashes.append(hashlib.sha256(bytestring[start:end]).digest())\n if not hashes:\n return [hashlib.sha256(b'').digest()]\n return hashes\n\n\ndef tree_hash(fo):\n \"\"\"\n Given a hash of each 1MB chunk (from chunk_hashes) this will hash\n together adjacent hashes until it ends up with one big one. So a\n tree of hashes.\n \"\"\"\n hashes = []\n hashes.extend(fo)\n while len(hashes) > 1:\n new_hashes = []\n while True:\n if len(hashes) > 1:\n first = hashes.pop(0)\n second = hashes.pop(0)\n new_hashes.append(hashlib.sha256(first + second).digest())\n elif len(hashes) == 1:\n only = hashes.pop(0)\n new_hashes.append(only)\n else:\n break\n hashes.extend(new_hashes)\n return hashes[0]\n\n\ndef compute_hashes_from_fileobj(fileobj, chunk_size=1024 * 1024):\n \"\"\"Compute the linear and tree hash from a fileobj.\n\n This function will compute the linear/tree hash of a fileobj\n in a single pass through the fileobj.\n\n :param fileobj: A file like object.\n\n :param chunk_size: The size of the chunks to use for the tree\n hash. This is also the buffer size used to read from\n `fileobj`.\n\n :rtype: tuple\n :return: A tuple of (linear_hash, tree_hash). Both hashes\n are returned in hex.\n\n \"\"\"\n # Python 3+, not binary\n if six.PY3 and hasattr(fileobj, 'mode') and 'b' not in fileobj.mode:\n raise ValueError('File-like object must be opened in binary mode!')\n\n linear_hash = hashlib.sha256()\n chunks = []\n chunk = fileobj.read(chunk_size)\n while chunk:\n # It's possible to get a file-like object that has no mode (checked\n # above) and returns something other than bytes (e.g. str). 
So here\n # we try to catch that and encode to bytes.\n if not isinstance(chunk, bytes):\n chunk = chunk.encode(getattr(fileobj, 'encoding', '') or 'utf-8')\n linear_hash.update(chunk)\n chunks.append(hashlib.sha256(chunk).digest())\n chunk = fileobj.read(chunk_size)\n if not chunks:\n chunks = [hashlib.sha256(b'').digest()]\n return linear_hash.hexdigest(), bytes_to_hex(tree_hash(chunks))\n\n\ndef bytes_to_hex(str_as_bytes):\n return binascii.hexlify(str_as_bytes)\n\n\ndef tree_hash_from_str(str_as_bytes):\n \"\"\"\n\n :type str_as_bytes: str\n :param str_as_bytes: The string for which to compute the tree hash.\n\n :rtype: str\n :return: The computed tree hash, returned as hex.\n\n \"\"\"\n return bytes_to_hex(tree_hash(chunk_hashes(str_as_bytes)))\n\n\nclass ResettingFileSender(object):\n def __init__(self, archive):\n self._archive = archive\n self._starting_offset = archive.tell()\n\n def __call__(self, connection, method, path, body, headers):\n try:\n connection.request(method, path, self._archive, headers)\n return connection.getresponse()\n finally:\n self._archive.seek(self._starting_offset)\n", "path": "boto/glacier/utils.py"}]} | 2,576 | 325 |
gh_patches_debug_25198 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-contrib-890 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
error with logging instrumentation - AttributeError: 'ProxyTracerProvider' object has no attribute 'resource'
**Describe your environment**
LoggingInstrumentor().instrument() is throwing an error
```
Traceback (most recent call last):
File "manage.py", line 30, in <module>
main()
File "manage.py", line 14, in main
LoggingInstrumentor().instrument(set_logging_format=True)
File "/home/vamsikrishnam/otel/lib/python3.8/site-packages/opentelemetry/instrumentation/instrumentor.py", line 109, in instrument
result = self._instrument( # pylint: disable=assignment-from-no-return
File "/home/vamsikrishnam/otel/lib/python3.8/site-packages/opentelemetry/instrumentation/logging/__init__.py", line 81, in _instrument
resource = provider.resource if provider else None
AttributeError: 'ProxyTracerProvider' object has no attribute 'resource'
```
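
The failure happens because, before an SDK tracer provider is configured, `get_tracer_provider()` returns a `ProxyTracerProvider`, which exposes no `resource` attribute. A minimal illustrative sketch (not the instrumentation's actual code) of a guarded lookup that tolerates the proxy:

```python
# Illustrative sketch only: guard the resource lookup so a ProxyTracerProvider
# without a .resource attribute does not raise AttributeError.
from opentelemetry.trace import get_tracer_provider

provider = get_tracer_provider()  # may be a ProxyTracerProvider before the SDK is set up
resource = getattr(provider, "resource", None)
service_name = resource.attributes.get("service.name") if resource else ""
```

Deferring this lookup until a log record is actually created would also let the service name resolve correctly once a real provider replaces the proxy.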
**Steps to reproduce**
The packages below are installed, and I'm trying to instrument with the following two lines:
> LoggingInstrumentor().instrument(set_logging_format=True)
> DjangoInstrumentor().instrument()
```
(otel) vamsikrishnam@NHHYDL-00217:~/django$ pip list | grep opentele
opentelemetry-api 1.7.1
opentelemetry-exporter-otlp 1.7.1
opentelemetry-exporter-otlp-proto-grpc 1.7.1
opentelemetry-exporter-otlp-proto-http 1.7.1
opentelemetry-instrumentation 0.26b1
opentelemetry-instrumentation-django 0.26b1
opentelemetry-instrumentation-logging 0.26b1
opentelemetry-instrumentation-wsgi 0.26b1
opentelemetry-propagator-b3 1.7.1
opentelemetry-proto 1.7.1
opentelemetry-sdk 1.7.1
opentelemetry-semantic-conventions 0.26b1
opentelemetry-util-http 0.26b1
```
**What is the expected behavior?**
What did you expect to see?
logging should be instrumented properly.
**What is the actual behavior?**
What did you see instead?
Instead, `LoggingInstrumentor().instrument()` raises the `AttributeError` shown in the traceback above, so otelTraceID and otelSpanID are never populated in the logs.
**Additional context**
Add any other context about the problem here.
$ python3 --version
Python 3.8.10
manage.py:
```
#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys
import logging
from opentelemetry.instrumentation.django import DjangoInstrumentor
from opentelemetry.instrumentation.logging import LoggingInstrumentor
def main():
"""Run administrative tasks."""
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'django_project.settings')
logging.basicConfig(level = logging.DEBUG)
LoggingInstrumentor().instrument(set_logging_format=True)
DjangoInstrumentor().instrument()
# LoggingInstrumentor().instrument(set_logging_format=True,log_level=logging.DEBUG)
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
) from exc
execute_from_command_line(sys.argv)
if __name__ == '__main__':
main()
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # pylint: disable=empty-docstring,no-value-for-parameter,no-member,no-name-in-module
16
17 import logging # pylint: disable=import-self
18 from os import environ
19 from typing import Collection
20
21 from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
22 from opentelemetry.instrumentation.logging.constants import (
23 _MODULE_DOC,
24 DEFAULT_LOGGING_FORMAT,
25 )
26 from opentelemetry.instrumentation.logging.environment_variables import (
27 OTEL_PYTHON_LOG_CORRELATION,
28 OTEL_PYTHON_LOG_FORMAT,
29 OTEL_PYTHON_LOG_LEVEL,
30 )
31 from opentelemetry.instrumentation.logging.package import _instruments
32 from opentelemetry.trace import (
33 INVALID_SPAN,
34 INVALID_SPAN_CONTEXT,
35 get_current_span,
36 get_tracer_provider,
37 )
38
39 __doc__ = _MODULE_DOC
40
41 LEVELS = {
42 "debug": logging.DEBUG,
43 "info": logging.INFO,
44 "warning": logging.WARNING,
45 "error": logging.ERROR,
46 }
47
48
49 class LoggingInstrumentor(BaseInstrumentor): # pylint: disable=empty-docstring
50 __doc__ = f"""An instrumentor for stdlib logging module.
51
52 This instrumentor injects tracing context into logging records and optionally sets the global logging format to the following:
53
54 .. code-block::
55
56 {DEFAULT_LOGGING_FORMAT}
57
58 Args:
59 tracer_provider: Tracer provider instance that can be used to fetch a tracer.
60 set_logging_format: When set to True, it calls logging.basicConfig() and sets a logging format.
61 logging_format: Accepts a string and sets it as the logging format when set_logging_format
62 is set to True.
63 log_level: Accepts one of the following values and sets the logging level to it.
64 logging.INFO
65 logging.DEBUG
66 logging.WARN
67 logging.ERROR
68 logging.FATAL
69
70 See `BaseInstrumentor`
71 """
72
73 _old_factory = None
74
75 def instrumentation_dependencies(self) -> Collection[str]:
76 return _instruments
77
78 def _instrument(self, **kwargs):
79 service_name = ""
80 provider = kwargs.get("tracer_provider", None) or get_tracer_provider()
81 resource = provider.resource if provider else None
82 if resource:
83 service_name = resource.attributes.get("service.name")
84
85 old_factory = logging.getLogRecordFactory()
86 LoggingInstrumentor._old_factory = old_factory
87
88 def record_factory(*args, **kwargs):
89 record = old_factory(*args, **kwargs)
90
91 record.otelSpanID = "0"
92 record.otelTraceID = "0"
93 record.otelServiceName = service_name
94
95 span = get_current_span()
96 if span != INVALID_SPAN:
97 ctx = span.get_span_context()
98 if ctx != INVALID_SPAN_CONTEXT:
99 record.otelSpanID = format(ctx.span_id, "016x")
100 record.otelTraceID = format(ctx.trace_id, "032x")
101 return record
102
103 logging.setLogRecordFactory(record_factory)
104
105 set_logging_format = kwargs.get(
106 "set_logging_format",
107 environ.get(OTEL_PYTHON_LOG_CORRELATION, "false").lower()
108 == "true",
109 )
110
111 if set_logging_format:
112 log_format = kwargs.get(
113 "logging_format", environ.get(OTEL_PYTHON_LOG_FORMAT, None)
114 )
115 log_format = log_format or DEFAULT_LOGGING_FORMAT
116
117 log_level = kwargs.get(
118 "log_level", LEVELS.get(environ.get(OTEL_PYTHON_LOG_LEVEL))
119 )
120 log_level = log_level or logging.INFO
121
122 logging.basicConfig(format=log_format, level=log_level)
123
124 def _uninstrument(self, **kwargs):
125 if LoggingInstrumentor._old_factory:
126 logging.setLogRecordFactory(LoggingInstrumentor._old_factory)
127 LoggingInstrumentor._old_factory = None
128
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py b/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py
--- a/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py
+++ b/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py
@@ -76,20 +76,29 @@
return _instruments
def _instrument(self, **kwargs):
- service_name = ""
- provider = kwargs.get("tracer_provider", None) or get_tracer_provider()
- resource = provider.resource if provider else None
- if resource:
- service_name = resource.attributes.get("service.name")
+ provider = kwargs.get("tracer_provider", None) or get_tracer_provider()
old_factory = logging.getLogRecordFactory()
LoggingInstrumentor._old_factory = old_factory
+ service_name = None
+
def record_factory(*args, **kwargs):
record = old_factory(*args, **kwargs)
record.otelSpanID = "0"
record.otelTraceID = "0"
+
+ nonlocal service_name
+ if service_name is None:
+ resource = getattr(provider, "resource", None)
+ if resource:
+ service_name = (
+ resource.attributes.get("service.name") or ""
+ )
+ else:
+ service_name = ""
+
record.otelServiceName = service_name
span = get_current_span()
| {"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py b/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py\n--- a/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py\n+++ b/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py\n@@ -76,20 +76,29 @@\n return _instruments\n \n def _instrument(self, **kwargs):\n- service_name = \"\"\n- provider = kwargs.get(\"tracer_provider\", None) or get_tracer_provider()\n- resource = provider.resource if provider else None\n- if resource:\n- service_name = resource.attributes.get(\"service.name\")\n \n+ provider = kwargs.get(\"tracer_provider\", None) or get_tracer_provider()\n old_factory = logging.getLogRecordFactory()\n LoggingInstrumentor._old_factory = old_factory\n \n+ service_name = None\n+\n def record_factory(*args, **kwargs):\n record = old_factory(*args, **kwargs)\n \n record.otelSpanID = \"0\"\n record.otelTraceID = \"0\"\n+\n+ nonlocal service_name\n+ if service_name is None:\n+ resource = getattr(provider, \"resource\", None)\n+ if resource:\n+ service_name = (\n+ resource.attributes.get(\"service.name\") or \"\"\n+ )\n+ else:\n+ service_name = \"\"\n+\n record.otelServiceName = service_name\n \n span = get_current_span()\n", "issue": "error with logging instrumentation - AttributeError: 'ProxyTracerProvider' object has no attribute 'resource'\n**Describe your environment** \r\n\r\nLoggingInstrumentor().instrument() is throwing an error\r\n```\r\nTraceback (most recent call last):\r\n File \"manage.py\", line 30, in <module>\r\n main()\r\n File \"manage.py\", line 14, in main\r\n LoggingInstrumentor().instrument(set_logging_format=True)\r\n File \"/home/vamsikrishnam/otel/lib/python3.8/site-packages/opentelemetry/instrumentation/instrumentor.py\", line 109, in instrument\r\n result = self._instrument( # pylint: disable=assignment-from-no-return\r\n File \"/home/vamsikrishnam/otel/lib/python3.8/site-packages/opentelemetry/instrumentation/logging/__init__.py\", line 81, in _instrument\r\n resource = provider.resource if provider else None\r\nAttributeError: 'ProxyTracerProvider' object has no attribute 'resource'\r\n```\r\n\r\n**Steps to reproduce**\r\nBelow packages installed and trying to instrument with below two lines:\r\n\r\n> LoggingInstrumentor().instrument(set_logging_format=True)\r\n> DjangoInstrumentor().instrument()\r\n\r\n```\r\n(otel) vamsikrishnam@NHHYDL-00217:~/django$ pip list | grep opentele\r\nopentelemetry-api 1.7.1\r\nopentelemetry-exporter-otlp 1.7.1\r\nopentelemetry-exporter-otlp-proto-grpc 1.7.1\r\nopentelemetry-exporter-otlp-proto-http 1.7.1\r\nopentelemetry-instrumentation 0.26b1\r\nopentelemetry-instrumentation-django 0.26b1\r\nopentelemetry-instrumentation-logging 0.26b1\r\nopentelemetry-instrumentation-wsgi 0.26b1\r\nopentelemetry-propagator-b3 1.7.1\r\nopentelemetry-proto 1.7.1\r\nopentelemetry-sdk 1.7.1\r\nopentelemetry-semantic-conventions 0.26b1\r\nopentelemetry-util-http 0.26b1\r\n```\r\n\r\n**What is the expected behavior?**\r\nWhat did you expect to see?\r\nlogging should be instrumented properly.\r\n\r\n**What is the actual behavior?**\r\nWhat did you see instead?\r\nlogging should be instrumented properly and populate the otelTraceID and otelSpanID in the logs.\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\r\n$ python3 
--version\r\nPython 3.8.10\r\n\r\nmanage.py:\r\n\r\n```\r\n#!/usr/bin/env python\r\n\"\"\"Django's command-line utility for administrative tasks.\"\"\"\r\nimport os\r\nimport sys\r\nimport logging\r\nfrom opentelemetry.instrumentation.django import DjangoInstrumentor\r\nfrom opentelemetry.instrumentation.logging import LoggingInstrumentor\r\n\r\n\r\ndef main():\r\n \"\"\"Run administrative tasks.\"\"\"\r\n os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'django_project.settings')\r\n logging.basicConfig(level = logging.DEBUG)\r\n LoggingInstrumentor().instrument(set_logging_format=True)\r\n DjangoInstrumentor().instrument()\r\n # LoggingInstrumentor().instrument(set_logging_format=True,log_level=logging.DEBUG)\r\n\r\n try:\r\n from django.core.management import execute_from_command_line\r\n except ImportError as exc:\r\n raise ImportError(\r\n \"Couldn't import Django. Are you sure it's installed and \"\r\n \"available on your PYTHONPATH environment variable? Did you \"\r\n \"forget to activate a virtual environment?\"\r\n ) from exc\r\n execute_from_command_line(sys.argv)\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\n\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# pylint: disable=empty-docstring,no-value-for-parameter,no-member,no-name-in-module\n\nimport logging # pylint: disable=import-self\nfrom os import environ\nfrom typing import Collection\n\nfrom opentelemetry.instrumentation.instrumentor import BaseInstrumentor\nfrom opentelemetry.instrumentation.logging.constants import (\n _MODULE_DOC,\n DEFAULT_LOGGING_FORMAT,\n)\nfrom opentelemetry.instrumentation.logging.environment_variables import (\n OTEL_PYTHON_LOG_CORRELATION,\n OTEL_PYTHON_LOG_FORMAT,\n OTEL_PYTHON_LOG_LEVEL,\n)\nfrom opentelemetry.instrumentation.logging.package import _instruments\nfrom opentelemetry.trace import (\n INVALID_SPAN,\n INVALID_SPAN_CONTEXT,\n get_current_span,\n get_tracer_provider,\n)\n\n__doc__ = _MODULE_DOC\n\nLEVELS = {\n \"debug\": logging.DEBUG,\n \"info\": logging.INFO,\n \"warning\": logging.WARNING,\n \"error\": logging.ERROR,\n}\n\n\nclass LoggingInstrumentor(BaseInstrumentor): # pylint: disable=empty-docstring\n __doc__ = f\"\"\"An instrumentor for stdlib logging module.\n\n This instrumentor injects tracing context into logging records and optionally sets the global logging format to the following:\n\n .. 
code-block::\n\n {DEFAULT_LOGGING_FORMAT}\n\n Args:\n tracer_provider: Tracer provider instance that can be used to fetch a tracer.\n set_logging_format: When set to True, it calls logging.basicConfig() and sets a logging format.\n logging_format: Accepts a string and sets it as the logging format when set_logging_format\n is set to True.\n log_level: Accepts one of the following values and sets the logging level to it.\n logging.INFO\n logging.DEBUG\n logging.WARN\n logging.ERROR\n logging.FATAL\n\n See `BaseInstrumentor`\n \"\"\"\n\n _old_factory = None\n\n def instrumentation_dependencies(self) -> Collection[str]:\n return _instruments\n\n def _instrument(self, **kwargs):\n service_name = \"\"\n provider = kwargs.get(\"tracer_provider\", None) or get_tracer_provider()\n resource = provider.resource if provider else None\n if resource:\n service_name = resource.attributes.get(\"service.name\")\n\n old_factory = logging.getLogRecordFactory()\n LoggingInstrumentor._old_factory = old_factory\n\n def record_factory(*args, **kwargs):\n record = old_factory(*args, **kwargs)\n\n record.otelSpanID = \"0\"\n record.otelTraceID = \"0\"\n record.otelServiceName = service_name\n\n span = get_current_span()\n if span != INVALID_SPAN:\n ctx = span.get_span_context()\n if ctx != INVALID_SPAN_CONTEXT:\n record.otelSpanID = format(ctx.span_id, \"016x\")\n record.otelTraceID = format(ctx.trace_id, \"032x\")\n return record\n\n logging.setLogRecordFactory(record_factory)\n\n set_logging_format = kwargs.get(\n \"set_logging_format\",\n environ.get(OTEL_PYTHON_LOG_CORRELATION, \"false\").lower()\n == \"true\",\n )\n\n if set_logging_format:\n log_format = kwargs.get(\n \"logging_format\", environ.get(OTEL_PYTHON_LOG_FORMAT, None)\n )\n log_format = log_format or DEFAULT_LOGGING_FORMAT\n\n log_level = kwargs.get(\n \"log_level\", LEVELS.get(environ.get(OTEL_PYTHON_LOG_LEVEL))\n )\n log_level = log_level or logging.INFO\n\n logging.basicConfig(format=log_format, level=log_level)\n\n def _uninstrument(self, **kwargs):\n if LoggingInstrumentor._old_factory:\n logging.setLogRecordFactory(LoggingInstrumentor._old_factory)\n LoggingInstrumentor._old_factory = None\n", "path": "instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# pylint: disable=empty-docstring,no-value-for-parameter,no-member,no-name-in-module\n\nimport logging # pylint: disable=import-self\nfrom os import environ\nfrom typing import Collection\n\nfrom opentelemetry.instrumentation.instrumentor import BaseInstrumentor\nfrom opentelemetry.instrumentation.logging.constants import (\n _MODULE_DOC,\n DEFAULT_LOGGING_FORMAT,\n)\nfrom opentelemetry.instrumentation.logging.environment_variables import (\n OTEL_PYTHON_LOG_CORRELATION,\n OTEL_PYTHON_LOG_FORMAT,\n OTEL_PYTHON_LOG_LEVEL,\n)\nfrom opentelemetry.instrumentation.logging.package import _instruments\nfrom 
opentelemetry.trace import (\n INVALID_SPAN,\n INVALID_SPAN_CONTEXT,\n get_current_span,\n get_tracer_provider,\n)\n\n__doc__ = _MODULE_DOC\n\nLEVELS = {\n \"debug\": logging.DEBUG,\n \"info\": logging.INFO,\n \"warning\": logging.WARNING,\n \"error\": logging.ERROR,\n}\n\n\nclass LoggingInstrumentor(BaseInstrumentor): # pylint: disable=empty-docstring\n __doc__ = f\"\"\"An instrumentor for stdlib logging module.\n\n This instrumentor injects tracing context into logging records and optionally sets the global logging format to the following:\n\n .. code-block::\n\n {DEFAULT_LOGGING_FORMAT}\n\n Args:\n tracer_provider: Tracer provider instance that can be used to fetch a tracer.\n set_logging_format: When set to True, it calls logging.basicConfig() and sets a logging format.\n logging_format: Accepts a string and sets it as the logging format when set_logging_format\n is set to True.\n log_level: Accepts one of the following values and sets the logging level to it.\n logging.INFO\n logging.DEBUG\n logging.WARN\n logging.ERROR\n logging.FATAL\n\n See `BaseInstrumentor`\n \"\"\"\n\n _old_factory = None\n\n def instrumentation_dependencies(self) -> Collection[str]:\n return _instruments\n\n def _instrument(self, **kwargs):\n\n provider = kwargs.get(\"tracer_provider\", None) or get_tracer_provider()\n old_factory = logging.getLogRecordFactory()\n LoggingInstrumentor._old_factory = old_factory\n\n service_name = None\n\n def record_factory(*args, **kwargs):\n record = old_factory(*args, **kwargs)\n\n record.otelSpanID = \"0\"\n record.otelTraceID = \"0\"\n\n nonlocal service_name\n if service_name is None:\n resource = getattr(provider, \"resource\", None)\n if resource:\n service_name = (\n resource.attributes.get(\"service.name\") or \"\"\n )\n else:\n service_name = \"\"\n\n record.otelServiceName = service_name\n\n span = get_current_span()\n if span != INVALID_SPAN:\n ctx = span.get_span_context()\n if ctx != INVALID_SPAN_CONTEXT:\n record.otelSpanID = format(ctx.span_id, \"016x\")\n record.otelTraceID = format(ctx.trace_id, \"032x\")\n return record\n\n logging.setLogRecordFactory(record_factory)\n\n set_logging_format = kwargs.get(\n \"set_logging_format\",\n environ.get(OTEL_PYTHON_LOG_CORRELATION, \"false\").lower()\n == \"true\",\n )\n\n if set_logging_format:\n log_format = kwargs.get(\n \"logging_format\", environ.get(OTEL_PYTHON_LOG_FORMAT, None)\n )\n log_format = log_format or DEFAULT_LOGGING_FORMAT\n\n log_level = kwargs.get(\n \"log_level\", LEVELS.get(environ.get(OTEL_PYTHON_LOG_LEVEL))\n )\n log_level = log_level or logging.INFO\n\n logging.basicConfig(format=log_format, level=log_level)\n\n def _uninstrument(self, **kwargs):\n if LoggingInstrumentor._old_factory:\n logging.setLogRecordFactory(LoggingInstrumentor._old_factory)\n LoggingInstrumentor._old_factory = None\n", "path": "instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py"}]} | 2,281 | 355 |
gh_patches_debug_28162 | rasdani/github-patches | git_diff | Qiskit__qiskit-12069 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Documentation of RVGate is incorrect
### Environment
N/A
### What is happening?
Received this in an email:
>Hi, I think I found some errors in the Qiskit documentation at
<https://docs.quantum.ibm.com/api/qiskit/qiskit.circuit.library.RVGate>
and I'm contacting you because you look like the two people who most recently edited the source file at
<https://github.com/Qiskit/qiskit/blob/stable/0.46/qiskit/circuit/library/generalized_gates/rv.py>
The matrix representation given in the documentation seems to be wrong. I compared it to the definition given in
<https://arxiv.org/pdf/2104.14875.pdf>
on page 4, equation 1, we see the definition of the rotation matrix. It almost matches the definition given in the documentation at
<https://docs.quantum.ibm.com/api/qiskit/qiskit.circuit.library.RVGate>
except for two mistakes: the "sinc" function should be "sin", and the angle should be divided by two. This can be compared to the source code at
<https://github.com/Qiskit/qiskit/blob/stable/0.46/qiskit/circuit/library/generalized_gates/rv.py>
at lines 86 and 87, where we see the angle divided by two, and we see the use of the sin and cos functions.
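
A quick numerical check of this claim, assuming NumPy and SciPy are available — it compares the formula used in `RVGate.to_matrix` (angle halved, plain `sin`) against the matrix exponential definition cited in the paper:

```python
import numpy as np
from scipy.linalg import expm

v = np.array([0.1, 0.2, 0.3])
pauli = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]
# R(v) = exp(-i * (v . sigma) / 2), as in the cited definition
exact = expm(-0.5j * sum(vi * sig for vi, sig in zip(v, pauli)))

# The formula implemented in to_matrix: angle divided by two, sin and cos
angle = np.linalg.norm(v)
nx, ny, nz = v / angle
s, c = np.sin(angle / 2), np.cos(angle / 2)
claimed = np.array(
    [[c - 1j * nz * s, (-ny - 1j * nx) * s],
     [(ny - 1j * nx) * s, c + 1j * nz * s]]
)
assert np.allclose(exact, claimed)  # sin (not sinc), with the angle halved
```

In other words, the implementation in `to_matrix` agrees with the paper's definition; it is only the docstring's matrix (using `sinc` and the full angle) that is off.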
### How can we reproduce the issue?
N/A
### What should happen?
N/A
### Any suggestions?
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qiskit/circuit/library/generalized_gates/rv.py`
Content:
```
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2017, 2020
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """Rotation around an arbitrary axis on the Bloch sphere."""
14
15 import numpy
16 from qiskit.circuit.gate import Gate
17 from qiskit.circuit.exceptions import CircuitError
18
19
20 class RVGate(Gate):
21 r"""Rotation around arbitrary rotation axis :math:`v` where :math:`|v|` is
22 angle of rotation in radians.
23
24 Can be applied to a :class:`~qiskit.circuit.QuantumCircuit`
25 with the :meth:`~qiskit.circuit.QuantumCircuit.rv` method.
26
27 **Circuit symbol:**
28
29 .. parsed-literal::
30
31 ┌─────────────────┐
32 q_0: ┤ RV(v_x,v_y,v_z) ├
33 └─────────────────┘
34
35 **Matrix Representation:**
36
37 .. math::
38
39 \newcommand{\rotationangle}{|\vec{v}|}
40 \newcommand{\sinc}{\text{sinc}}
41 R(\vec{v}) = e^{-i \vec{v}\cdot\vec{\sigma}} =
42 \begin{pmatrix}
43 \cos\left(\rotationangle\right) -i v_z \sinc\left(\rotationangle\right)
44 & -(i v_x + v_y) \sinc\left(\rotationangle\right) \\
45 -(i v_x - v_y) \sinc\left(\rotationangle\right)
46 & \cos\left(\rotationangle\right) + i v_z \sinc\left(\rotationangle\right)
47 \end{pmatrix}
48 """
49
50 def __init__(self, v_x, v_y, v_z, basis="U"):
51 """Create new rv single-qubit gate.
52
53 Args:
54 v_x (float): x-component
55 v_y (float): y-component
56 v_z (float): z-component
57 basis (str, optional): basis (see
58 :class:`~qiskit.synthesis.one_qubit.one_qubit_decompose.OneQubitEulerDecomposer`)
59 """
60 # pylint: disable=cyclic-import
61 from qiskit.synthesis.one_qubit.one_qubit_decompose import OneQubitEulerDecomposer
62
63 super().__init__("rv", 1, [v_x, v_y, v_z])
64 self._decomposer = OneQubitEulerDecomposer(basis=basis)
65
66 def _define(self):
67 try:
68 self.definition = self._decomposer(self.to_matrix())
69 except TypeError as ex:
70 raise CircuitError(
71 f"The {self.name} gate cannot be decomposed with unbound parameters"
72 ) from ex
73
74 def inverse(self):
75 """Invert this gate."""
76 vx, vy, vz = self.params
77 return RVGate(-vx, -vy, -vz)
78
79 def to_matrix(self):
80 """Return a numpy.array for the R(v) gate."""
81 v = numpy.asarray(self.params, dtype=float)
82 angle = numpy.sqrt(v.dot(v))
83 if angle == 0:
84 return numpy.array([[1, 0], [0, 1]])
85 nx, ny, nz = v / angle
86 sin = numpy.sin(angle / 2)
87 cos = numpy.cos(angle / 2)
88 return numpy.array(
89 [
90 [cos - 1j * nz * sin, (-ny - 1j * nx) * sin],
91 [(ny - 1j * nx) * sin, cos + 1j * nz * sin],
92 ]
93 )
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/qiskit/circuit/library/generalized_gates/rv.py b/qiskit/circuit/library/generalized_gates/rv.py
--- a/qiskit/circuit/library/generalized_gates/rv.py
+++ b/qiskit/circuit/library/generalized_gates/rv.py
@@ -18,7 +18,7 @@
class RVGate(Gate):
- r"""Rotation around arbitrary rotation axis :math:`v` where :math:`|v|` is
+ r"""Rotation around arbitrary rotation axis :math:`\vec{v}` where :math:`\|\vec{v}\|_2` is
angle of rotation in radians.
Can be applied to a :class:`~qiskit.circuit.QuantumCircuit`
@@ -36,14 +36,17 @@
.. math::
- \newcommand{\rotationangle}{|\vec{v}|}
- \newcommand{\sinc}{\text{sinc}}
- R(\vec{v}) = e^{-i \vec{v}\cdot\vec{\sigma}} =
+ \newcommand{\rotationangle}{\frac{\|\vec{v}\|_2}{2}}
+ R(\vec{v}) = e^{-i \vec{v}\cdot\vec{\sigma} / 2} =
\begin{pmatrix}
- \cos\left(\rotationangle\right) -i v_z \sinc\left(\rotationangle\right)
- & -(i v_x + v_y) \sinc\left(\rotationangle\right) \\
- -(i v_x - v_y) \sinc\left(\rotationangle\right)
- & \cos\left(\rotationangle\right) + i v_z \sinc\left(\rotationangle\right)
+ \cos\left(\rotationangle\right)
+ -i \frac{v_z}{\|\vec{v}\|_2} \sin\left(\rotationangle\right)
+ & -(i \frac{v_x}{\|\vec{v}\|_2}
+ + \frac{v_y}{\|\vec{v}\|_2}) \sin\left(\rotationangle\right) \\
+ -(i \frac{v_x}{\|\vec{v}\|_2}
+ - \frac{v_y}{\|\vec{v}\|_2}) \sin\left(\rotationangle\right)
+ & \cos\left(\rotationangle\right)
+ + i \frac{v_z}{\|\vec{v}\|_2} \sin\left(\rotationangle\right)
\end{pmatrix}
"""
| {"golden_diff": "diff --git a/qiskit/circuit/library/generalized_gates/rv.py b/qiskit/circuit/library/generalized_gates/rv.py\n--- a/qiskit/circuit/library/generalized_gates/rv.py\n+++ b/qiskit/circuit/library/generalized_gates/rv.py\n@@ -18,7 +18,7 @@\n \n \n class RVGate(Gate):\n- r\"\"\"Rotation around arbitrary rotation axis :math:`v` where :math:`|v|` is\n+ r\"\"\"Rotation around arbitrary rotation axis :math:`\\vec{v}` where :math:`\\|\\vec{v}\\|_2` is\n angle of rotation in radians.\n \n Can be applied to a :class:`~qiskit.circuit.QuantumCircuit`\n@@ -36,14 +36,17 @@\n \n .. math::\n \n- \\newcommand{\\rotationangle}{|\\vec{v}|}\n- \\newcommand{\\sinc}{\\text{sinc}}\n- R(\\vec{v}) = e^{-i \\vec{v}\\cdot\\vec{\\sigma}} =\n+ \\newcommand{\\rotationangle}{\\frac{\\|\\vec{v}\\|_2}{2}}\n+ R(\\vec{v}) = e^{-i \\vec{v}\\cdot\\vec{\\sigma} / 2} =\n \\begin{pmatrix}\n- \\cos\\left(\\rotationangle\\right) -i v_z \\sinc\\left(\\rotationangle\\right)\n- & -(i v_x + v_y) \\sinc\\left(\\rotationangle\\right) \\\\\n- -(i v_x - v_y) \\sinc\\left(\\rotationangle\\right)\n- & \\cos\\left(\\rotationangle\\right) + i v_z \\sinc\\left(\\rotationangle\\right)\n+ \\cos\\left(\\rotationangle\\right)\n+ -i \\frac{v_z}{\\|\\vec{v}\\|_2} \\sin\\left(\\rotationangle\\right)\n+ & -(i \\frac{v_x}{\\|\\vec{v}\\|_2}\n+ + \\frac{v_y}{\\|\\vec{v}\\|_2}) \\sin\\left(\\rotationangle\\right) \\\\\n+ -(i \\frac{v_x}{\\|\\vec{v}\\|_2}\n+ - \\frac{v_y}{\\|\\vec{v}\\|_2}) \\sin\\left(\\rotationangle\\right)\n+ & \\cos\\left(\\rotationangle\\right)\n+ + i \\frac{v_z}{\\|\\vec{v}\\|_2} \\sin\\left(\\rotationangle\\right)\n \\end{pmatrix}\n \"\"\"\n", "issue": "Documentation of RVGate is incorrect\n### Environment\n\nN/A\n\n### What is happening?\n\nReceived this in an email:\r\n>Hi, I think I found some errors in the Qiskit documentation at\r\n<https://docs.quantum.ibm.com/api/qiskit/qiskit.circuit.library.RVGate>\r\nand I'm contacting you because you look like the two people who most recently edited the source file at\r\n<https://github.com/Qiskit/qiskit/blob/stable/0.46/qiskit/circuit/library/generalized_gates/rv.py>\r\nThe matrix representation given in the documentation seems to be wrong. I compared it to the definition given in\r\n<https://arxiv.org/pdf/2104.14875.pdf>\r\non page 4, equation 1, we see the definition of the rotation matrix. It almost matches the definition given in the documentation at\r\n<https://docs.quantum.ibm.com/api/qiskit/qiskit.circuit.library.RVGate>\r\nexcept for two mistakes: the \"sinc\" function should be \"sin\", and the angle should be divided by two. This can be compared to the source code at\r\n<https://github.com/Qiskit/qiskit/blob/stable/0.46/qiskit/circuit/library/generalized_gates/rv.py>\r\nat lines 86 and 87, where we see the angle divided by two, and we see the use of the sin and cos functions.\n\n### How can we reproduce the issue?\n\nN/A\n\n### What should happen?\n\nN/A\n\n### Any suggestions?\n\n_No response_\n", "before_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017, 2020\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"Rotation around an arbitrary axis on the Bloch sphere.\"\"\"\n\nimport numpy\nfrom qiskit.circuit.gate import Gate\nfrom qiskit.circuit.exceptions import CircuitError\n\n\nclass RVGate(Gate):\n r\"\"\"Rotation around arbitrary rotation axis :math:`v` where :math:`|v|` is\n angle of rotation in radians.\n\n Can be applied to a :class:`~qiskit.circuit.QuantumCircuit`\n with the :meth:`~qiskit.circuit.QuantumCircuit.rv` method.\n\n **Circuit symbol:**\n\n .. parsed-literal::\n\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n q_0: \u2524 RV(v_x,v_y,v_z) \u251c\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\n **Matrix Representation:**\n\n .. math::\n\n \\newcommand{\\rotationangle}{|\\vec{v}|}\n \\newcommand{\\sinc}{\\text{sinc}}\n R(\\vec{v}) = e^{-i \\vec{v}\\cdot\\vec{\\sigma}} =\n \\begin{pmatrix}\n \\cos\\left(\\rotationangle\\right) -i v_z \\sinc\\left(\\rotationangle\\right)\n & -(i v_x + v_y) \\sinc\\left(\\rotationangle\\right) \\\\\n -(i v_x - v_y) \\sinc\\left(\\rotationangle\\right)\n & \\cos\\left(\\rotationangle\\right) + i v_z \\sinc\\left(\\rotationangle\\right)\n \\end{pmatrix}\n \"\"\"\n\n def __init__(self, v_x, v_y, v_z, basis=\"U\"):\n \"\"\"Create new rv single-qubit gate.\n\n Args:\n v_x (float): x-component\n v_y (float): y-component\n v_z (float): z-component\n basis (str, optional): basis (see\n :class:`~qiskit.synthesis.one_qubit.one_qubit_decompose.OneQubitEulerDecomposer`)\n \"\"\"\n # pylint: disable=cyclic-import\n from qiskit.synthesis.one_qubit.one_qubit_decompose import OneQubitEulerDecomposer\n\n super().__init__(\"rv\", 1, [v_x, v_y, v_z])\n self._decomposer = OneQubitEulerDecomposer(basis=basis)\n\n def _define(self):\n try:\n self.definition = self._decomposer(self.to_matrix())\n except TypeError as ex:\n raise CircuitError(\n f\"The {self.name} gate cannot be decomposed with unbound parameters\"\n ) from ex\n\n def inverse(self):\n \"\"\"Invert this gate.\"\"\"\n vx, vy, vz = self.params\n return RVGate(-vx, -vy, -vz)\n\n def to_matrix(self):\n \"\"\"Return a numpy.array for the R(v) gate.\"\"\"\n v = numpy.asarray(self.params, dtype=float)\n angle = numpy.sqrt(v.dot(v))\n if angle == 0:\n return numpy.array([[1, 0], [0, 1]])\n nx, ny, nz = v / angle\n sin = numpy.sin(angle / 2)\n cos = numpy.cos(angle / 2)\n return numpy.array(\n [\n [cos - 1j * nz * sin, (-ny - 1j * nx) * sin],\n [(ny - 1j * nx) * sin, cos + 1j * nz * sin],\n ]\n )\n", "path": "qiskit/circuit/library/generalized_gates/rv.py"}], "after_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017, 2020\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"Rotation around an arbitrary axis on the Bloch sphere.\"\"\"\n\nimport numpy\nfrom qiskit.circuit.gate import Gate\nfrom qiskit.circuit.exceptions import CircuitError\n\n\nclass RVGate(Gate):\n r\"\"\"Rotation around arbitrary rotation axis :math:`\\vec{v}` where :math:`\\|\\vec{v}\\|_2` is\n angle of rotation in radians.\n\n Can be applied to a :class:`~qiskit.circuit.QuantumCircuit`\n with the :meth:`~qiskit.circuit.QuantumCircuit.rv` method.\n\n **Circuit symbol:**\n\n .. parsed-literal::\n\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n q_0: \u2524 RV(v_x,v_y,v_z) \u251c\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\n **Matrix Representation:**\n\n .. math::\n\n \\newcommand{\\rotationangle}{\\frac{\\|\\vec{v}\\|_2}{2}}\n R(\\vec{v}) = e^{-i \\vec{v}\\cdot\\vec{\\sigma} / 2} =\n \\begin{pmatrix}\n \\cos\\left(\\rotationangle\\right)\n -i \\frac{v_z}{\\|\\vec{v}\\|_2} \\sin\\left(\\rotationangle\\right)\n & -(i \\frac{v_x}{\\|\\vec{v}\\|_2}\n + \\frac{v_y}{\\|\\vec{v}\\|_2}) \\sin\\left(\\rotationangle\\right) \\\\\n -(i \\frac{v_x}{\\|\\vec{v}\\|_2}\n - \\frac{v_y}{\\|\\vec{v}\\|_2}) \\sin\\left(\\rotationangle\\right)\n & \\cos\\left(\\rotationangle\\right)\n + i \\frac{v_z}{\\|\\vec{v}\\|_2} \\sin\\left(\\rotationangle\\right)\n \\end{pmatrix}\n \"\"\"\n\n def __init__(self, v_x, v_y, v_z, basis=\"U\"):\n \"\"\"Create new rv single-qubit gate.\n\n Args:\n v_x (float): x-component\n v_y (float): y-component\n v_z (float): z-component\n basis (str, optional): basis (see\n :class:`~qiskit.synthesis.one_qubit.one_qubit_decompose.OneQubitEulerDecomposer`)\n \"\"\"\n # pylint: disable=cyclic-import\n from qiskit.synthesis.one_qubit.one_qubit_decompose import OneQubitEulerDecomposer\n\n super().__init__(\"rv\", 1, [v_x, v_y, v_z])\n self._decomposer = OneQubitEulerDecomposer(basis=basis)\n\n def _define(self):\n try:\n self.definition = self._decomposer(self.to_matrix())\n except TypeError as ex:\n raise CircuitError(\n f\"The {self.name} gate cannot be decomposed with unbound parameters\"\n ) from ex\n\n def inverse(self):\n \"\"\"Invert this gate.\"\"\"\n vx, vy, vz = self.params\n return RVGate(-vx, -vy, -vz)\n\n def to_matrix(self):\n \"\"\"Return a numpy.array for the R(v) gate.\"\"\"\n v = numpy.asarray(self.params, dtype=float)\n angle = numpy.sqrt(v.dot(v))\n if angle == 0:\n return numpy.array([[1, 0], [0, 1]])\n nx, ny, nz = v / angle\n sin = numpy.sin(angle / 2)\n cos = numpy.cos(angle / 2)\n return numpy.array(\n [\n [cos - 1j * nz * sin, (-ny - 1j * nx) * sin],\n [(ny - 1j * nx) * sin, cos + 1j * nz * sin],\n ]\n )\n", "path": "qiskit/circuit/library/generalized_gates/rv.py"}]} | 1,667 | 592 |
gh_patches_debug_31070 | rasdani/github-patches | git_diff | praw-dev__praw-1877 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
modmail conversation messages list only includes the most recent message
### Describe the Bug
If you retrieve modmail through the `subreddit.modmail.conversations()` API, the `.messages` list only includes the most recent message rather than all of the messages for each conversation. The length is always 1 even if `thread.num_messages` is greater than 1.
Accessing a modmail thread's messages via `subreddit.modmail(thread_id).messages` works as expected.
The example code below will output the unexpected results using PRAW 7.6 (assuming the account's modmail has some threads with more than one message). On PRAW 7.3, the example code produces no output.
### Desired Result
The `.messages` attribute should include all of the messages for each thread.
### Relevant Logs
_No response_
### Code to reproduce the bug
```python
for thread in r.subreddit("all").modmail.conversations(limit=25):
if len(thread.messages) != thread.num_messages:
print(f"unexpected result: {thread.id} num_messages={thread.num_messages} length={len(thread.messages)}")
```
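
As a stopgap, building on the observation above that fetching a single conversation works, a sketch like the following (one extra API call per affected thread, same identifiers as in the snippet above) can recover the full message list:

```python
# Workaround sketch: re-fetch any conversation whose message list looks truncated.
for thread in r.subreddit("all").modmail.conversations(limit=25):
    if len(thread.messages) != thread.num_messages:
        full_thread = r.subreddit("all").modmail(thread.id)  # direct fetch gets all messages
        print(
            f"{thread.id}: listing returned {len(thread.messages)} message(s), "
            f"direct fetch returned {len(full_thread.messages)}"
        )
```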
### My code example does not include the `Reddit()` initialization to prevent credential leakage.
Yes
### This code has previously worked as intended.
Yes
### Operating System/Environment
Linux
### Python Version
3.7.3
### PRAW Version
7.6.0
### Prawcore Version
2.2.0
### Anything else?
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `praw/objector.py`
Content:
```
1 """Provides the Objector class."""
2 from datetime import datetime
3 from json import loads
4 from typing import TYPE_CHECKING, Any, Dict, List, Optional, Union
5
6 from .exceptions import ClientException, RedditAPIException
7 from .models.reddit.base import RedditBase
8 from .util import snake_case_keys
9
10 if TYPE_CHECKING: # pragma: no cover
11 import praw
12
13
14 class Objector:
15 """The objector builds :class:`.RedditBase` objects."""
16
17 @classmethod
18 def parse_error(
19 cls, data: Union[List[Any], Dict[str, Dict[str, str]]]
20 ) -> Optional[RedditAPIException]:
21 """Convert JSON response into an error object.
22
23 :param data: The dict to be converted.
24
25 :returns: An instance of :class:`.RedditAPIException`, or ``None`` if ``data``
26 doesn't fit this model.
27
28 """
29 if isinstance(data, list):
30 # Fetching a Submission returns a list (of two items). Although it's handled
31 # manually in `Submission._fetch()`, assume it's a possibility here.
32 return None
33
34 errors = data.get("json", {}).get("errors")
35 if errors is None:
36 return None
37 if len(errors) < 1:
38 # See `Collection._fetch()`.
39 raise ClientException("successful error response", data)
40 return RedditAPIException(errors)
41
42 @classmethod
43 def check_error(cls, data: Union[List[Any], Dict[str, Dict[str, str]]]):
44 """Raise an error if the argument resolves to an error object."""
45 error = cls.parse_error(data)
46 if error:
47 raise error
48
49 def __init__(self, reddit: "praw.Reddit", parsers: Optional[Dict[str, Any]] = None):
50 """Initialize an :class:`.Objector` instance.
51
52 :param reddit: An instance of :class:`.Reddit`.
53
54 """
55 self.parsers = {} if parsers is None else parsers
56 self._reddit = reddit
57
58 def _objectify_dict(self, data):
59 """Create :class:`.RedditBase` objects from dicts.
60
61 :param data: The structured data, assumed to be a dict.
62
63 :returns: An instance of :class:`.RedditBase`.
64
65 """
66 if {"messages", "modActions"}.issubset(data) and {
67 "conversations",
68 "conversation",
69 }.intersection(data):
70 data.update(
71 data.pop("conversation")
72 if "conversation" in data
73 else data.pop("conversations")
74 )
75 parser = self.parsers["ModmailConversation"]
76 parser._convert_conversation_objects(data, self._reddit)
77 elif {"messages", "modActions"}.issubset(data) or {
78 "legacyFirstMessageId",
79 "state",
80 }.issubset(data):
81 parser = self.parsers["ModmailConversation"]
82 elif {"conversationIds", "conversations", "messages"}.issubset(data):
83 data["conversations"] = [
84 data["conversations"][conversation_id]
85 for conversation_id in data["conversationIds"]
86 ]
87 data = snake_case_keys(data)
88 parser = self.parsers["ModmailConversations-list"]
89 elif {"actionTypeId", "author", "date"}.issubset(data):
90 # Modmail mod action
91 data = snake_case_keys(data)
92 parser = self.parsers["ModmailAction"]
93 elif {"bodyMarkdown", "isInternal"}.issubset(data):
94 # Modmail message
95 data = snake_case_keys(data)
96 parser = self.parsers["ModmailMessage"]
97 elif {"kind", "short_name", "violation_reason"}.issubset(data):
98 # This is a Rule
99 parser = self.parsers["rule"]
100 elif {"isAdmin", "isDeleted"}.issubset(data):
101 # Modmail author
102 data = snake_case_keys(data)
103 # Prevent clobbering base-36 id
104 del data["id"]
105 data["is_subreddit_mod"] = data.pop("is_mod")
106 parser = self.parsers[self._reddit.config.kinds["redditor"]]
107 elif {"banStatus", "muteStatus", "recentComments"}.issubset(data):
108 # Modmail user
109 data = snake_case_keys(data)
110 data["created_string"] = data.pop("created")
111 parser = self.parsers[self._reddit.config.kinds["redditor"]]
112 elif {"displayName", "id", "type"}.issubset(data):
113 # Modmail subreddit
114 data = snake_case_keys(data)
115 parser = self.parsers[self._reddit.config.kinds[data["type"]]]
116 elif {"date", "id", "name"}.issubset(data) or {
117 "id",
118 "name",
119 "permissions",
120 }.issubset(data):
121 parser = self.parsers[self._reddit.config.kinds["redditor"]]
122 elif {"text", "url"}.issubset(data):
123 if "color" in data or "linkUrl" in data:
124 parser = self.parsers["Button"]
125 else:
126 parser = self.parsers["MenuLink"]
127 elif {"children", "text"}.issubset(data):
128 parser = self.parsers["Submenu"]
129 elif {"height", "url", "width"}.issubset(data):
130 parser = self.parsers["Image"]
131 elif {"isSubscribed", "name", "subscribers"}.issubset(data):
132 # discards icon and subscribed information
133 return self._reddit.subreddit(data["name"])
134 elif {"authorFlairType", "name"}.issubset(data):
135 # discards flair information
136 return self._reddit.redditor(data["name"])
137 elif {"parent_id"}.issubset(data):
138 parser = self.parsers[self._reddit.config.kinds["comment"]]
139 elif "collection_id" in data.keys():
140 parser = self.parsers["Collection"]
141 elif {"moderators", "moderatorIds", "allUsersLoaded", "subredditId"}.issubset(
142 data
143 ):
144 data = snake_case_keys(data)
145 moderators = []
146 for mod_id in data["moderator_ids"]:
147 mod = snake_case_keys(data["moderators"][mod_id])
148 mod["mod_permissions"] = list(mod["mod_permissions"].keys())
149 moderators.append(mod)
150 data["moderators"] = moderators
151 parser = self.parsers["moderator-list"]
152 elif "username" in data.keys():
153 data["name"] = data.pop("username")
154 parser = self.parsers[self._reddit.config.kinds["redditor"]]
155 elif {"mod_permissions", "name", "sr", "subscribers"}.issubset(data):
156 data["display_name"] = data["sr"]
157 parser = self.parsers[self._reddit.config.kinds["subreddit"]]
158 elif {"drafts", "subreddits"}.issubset(data): # Draft list
159 subreddit_parser = self.parsers[self._reddit.config.kinds["subreddit"]]
160 user_subreddit_parser = self.parsers["UserSubreddit"]
161 subreddits = {
162 subreddit["name"]: user_subreddit_parser.parse(subreddit, self._reddit)
163 if subreddit["display_name_prefixed"].startswith("u/")
164 else subreddit_parser.parse(subreddit, self._reddit)
165 for subreddit in data.pop("subreddits")
166 }
167 for draft in data["drafts"]:
168 if draft["subreddit"]:
169 draft["subreddit"] = subreddits[draft["subreddit"]]
170 draft["modified"] = datetime.fromtimestamp(
171 draft["modified"] / 1000
172 ).astimezone()
173 parser = self.parsers["DraftList"]
174 elif {"mod_action_data", "user_note_data"}.issubset(data):
175 data["moderator"] = self._reddit.redditor(data["operator"])
176 data["subreddit"] = self._reddit.subreddit(data["subreddit"])
177 data["user"] = self._reddit.redditor(data["user"])
178 # move these sub dict values into the main dict for simplicity
179 data.update(data["mod_action_data"])
180 del data["mod_action_data"]
181 data.update(data["user_note_data"])
182 del data["user_note_data"]
183 parser = self.parsers["mod_note"]
184 elif (
185 "created" in data
186 and isinstance(data["created"], dict)
187 and {"mod_action_data", "user_note_data"}.issubset(data["created"])
188 ):
189 data = data["created"]
190 return self._objectify_dict(data)
191 else:
192 if "user" in data:
193 parser = self.parsers[self._reddit.config.kinds["redditor"]]
194 data["user"] = parser.parse({"name": data["user"]}, self._reddit)
195 return data
196 return parser.parse(data, self._reddit)
197
198 def objectify(
199 self, data: Optional[Union[Dict[str, Any], List[Any], bool]]
200 ) -> Optional[Union[RedditBase, Dict[str, Any], List[Any], bool]]:
201 """Create :class:`.RedditBase` objects from data.
202
203 :param data: The structured data.
204
205 :returns: An instance of :class:`.RedditBase`, or ``None`` if given ``data`` is
206 ``None``.
207
208 """
209 # pylint: disable=too-many-return-statements
210 if data is None: # 204 no content
211 return None
212 if isinstance(data, list):
213 return [self.objectify(item) for item in data]
214 if isinstance(data, bool): # Reddit.username_available
215 return data
216 if "json" in data and "errors" in data["json"]:
217 errors = data["json"]["errors"]
218 if len(errors) > 0:
219 raise RedditAPIException(errors)
220 if "kind" in data and (
221 "shortName" in data or data["kind"] in ("menu", "moderators")
222 ):
223 # This is a widget
224 parser = self.parsers.get(data["kind"], self.parsers["widget"])
225 return parser.parse(data, self._reddit)
226 if {"kind", "data"}.issubset(data) and data["kind"] in self.parsers:
227 parser = self.parsers[data["kind"]]
228 if data["kind"] == "ModeratedList":
229 return parser.parse(data, self._reddit)
230 else:
231 return parser.parse(data["data"], self._reddit)
232 if "json" in data and "data" in data["json"]:
233 if "websocket_url" in data["json"]["data"]:
234 return data
235 if "things" in data["json"]["data"]: # Submission.reply
236 return self.objectify(data["json"]["data"]["things"])
237 if "rules" in data["json"]["data"]:
238 return self.objectify(loads(data["json"]["data"]["rules"]))
239 if "drafts_count" in data["json"]["data"] and all(
240 [key not in data["json"]["data"] for key in ["name", "url"]]
241 ): # Draft
242 data["json"]["data"].pop("drafts_count")
243 return self.parsers["Draft"].parse(data["json"]["data"], self._reddit)
244 if "url" in data["json"]["data"]: # Subreddit.submit
245 # The URL is the URL to the submission, so it's removed.
246 del data["json"]["data"]["url"]
247 parser = self.parsers[self._reddit.config.kinds["submission"]]
248 if data["json"]["data"]["id"].startswith(
249 f"{self._reddit.config.kinds['submission']}_"
250 ):
251 # With polls, Reddit returns a fullname but calls it an "id". This
252 # fixes this by coercing the fullname into an id.
253 data["json"]["data"]["id"] = data["json"]["data"]["id"].split(
254 "_", 1
255 )[1]
256 else:
257 parser = self.parsers["LiveUpdateEvent"]
258 return parser.parse(data["json"]["data"], self._reddit)
259 if {"is_public_link", "title", "body"}.issubset(data):
260 parser = self.parsers["Draft"]
261 return parser.parse(data, self._reddit)
262 if "rules" in data:
263 return self.objectify(data["rules"])
264 elif isinstance(data, dict):
265 return self._objectify_dict(data)
266
267 return data
268
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/praw/objector.py b/praw/objector.py
--- a/praw/objector.py
+++ b/praw/objector.py
@@ -67,6 +67,7 @@
"conversations",
"conversation",
}.intersection(data):
+ # fetched conversation
data.update(
data.pop("conversation")
if "conversation" in data
@@ -78,12 +79,24 @@
"legacyFirstMessageId",
"state",
}.issubset(data):
+ # not fetched conversation i.e., from conversations()
+ del data["objIds"] # delete objIds since it could be missing data
parser = self.parsers["ModmailConversation"]
elif {"conversationIds", "conversations", "messages"}.issubset(data):
- data["conversations"] = [
- data["conversations"][conversation_id]
- for conversation_id in data["conversationIds"]
- ]
+ # modmail conversations
+ conversations = []
+ for conversation_id in data["conversationIds"]:
+ conversation = data["conversations"][conversation_id]
+ # set if the numMessages is same as number of messages in objIds
+ if conversation["numMessages"] == len(
+ [obj for obj in conversation["objIds"] if obj["key"] == "messages"]
+ ):
+ conversation["messages"] = [
+ self.objectify(data["messages"][obj_id["id"]])
+ for obj_id in conversation["objIds"]
+ ]
+ conversations.append(conversation)
+ data["conversations"] = conversations
data = snake_case_keys(data)
parser = self.parsers["ModmailConversations-list"]
elif {"actionTypeId", "author", "date"}.issubset(data):
| {"golden_diff": "diff --git a/praw/objector.py b/praw/objector.py\n--- a/praw/objector.py\n+++ b/praw/objector.py\n@@ -67,6 +67,7 @@\n \"conversations\",\n \"conversation\",\n }.intersection(data):\n+ # fetched conversation\n data.update(\n data.pop(\"conversation\")\n if \"conversation\" in data\n@@ -78,12 +79,24 @@\n \"legacyFirstMessageId\",\n \"state\",\n }.issubset(data):\n+ # not fetched conversation i.e., from conversations()\n+ del data[\"objIds\"] # delete objIds since it could be missing data\n parser = self.parsers[\"ModmailConversation\"]\n elif {\"conversationIds\", \"conversations\", \"messages\"}.issubset(data):\n- data[\"conversations\"] = [\n- data[\"conversations\"][conversation_id]\n- for conversation_id in data[\"conversationIds\"]\n- ]\n+ # modmail conversations\n+ conversations = []\n+ for conversation_id in data[\"conversationIds\"]:\n+ conversation = data[\"conversations\"][conversation_id]\n+ # set if the numMessages is same as number of messages in objIds\n+ if conversation[\"numMessages\"] == len(\n+ [obj for obj in conversation[\"objIds\"] if obj[\"key\"] == \"messages\"]\n+ ):\n+ conversation[\"messages\"] = [\n+ self.objectify(data[\"messages\"][obj_id[\"id\"]])\n+ for obj_id in conversation[\"objIds\"]\n+ ]\n+ conversations.append(conversation)\n+ data[\"conversations\"] = conversations\n data = snake_case_keys(data)\n parser = self.parsers[\"ModmailConversations-list\"]\n elif {\"actionTypeId\", \"author\", \"date\"}.issubset(data):\n", "issue": "modmail conversation messages list only includes the most recent message\n### Describe the Bug\n\nIf you retrieve modmail through the `subreddit.modmail.conversations()` API, the `.messages` list only includes the most recent message rather than all of the messages for each conversation. The length is always 1 even if thread.num_messages is greater than 1.\r\n\r\nAccessing a modmail thread's messages via `subreddit.modmail(thread_id).messages` works as expected.\r\n\r\nThe example code below will output the unexpected results using PRAW 7.6 (assuming the account's modmail has some threads with more than one message). 
On PRAW 7.3, the example code produces no output.\n\n### Desired Result\n\nThe `.messages` attribute should include all of the messages for each thread.\n\n### Relevant Logs\n\n_No response_\n\n### Code to reproduce the bug\n\n```python\nfor thread in r.subreddit(\"all\").modmail.conversations(limit=25):\r\n if len(thread.messages) != thread.num_messages:\r\n print(f\"unexpected result: {thread.id} num_messages={thread.num_messages} length={len(thread.messages)}\")\n```\n\n\n### My code example does not include the `Reddit()` initialization to prevent credential leakage.\n\nYes\n\n### This code has previously worked as intended.\n\nYes\n\n### Operating System/Environment\n\nLinux\n\n### Python Version\n\n3.7.3\n\n### PRAW Version\n\n7.6.0\n\n### Prawcore Version\n\n2.2.0\n\n### Anything else?\n\n_No response_\n", "before_files": [{"content": "\"\"\"Provides the Objector class.\"\"\"\nfrom datetime import datetime\nfrom json import loads\nfrom typing import TYPE_CHECKING, Any, Dict, List, Optional, Union\n\nfrom .exceptions import ClientException, RedditAPIException\nfrom .models.reddit.base import RedditBase\nfrom .util import snake_case_keys\n\nif TYPE_CHECKING: # pragma: no cover\n import praw\n\n\nclass Objector:\n \"\"\"The objector builds :class:`.RedditBase` objects.\"\"\"\n\n @classmethod\n def parse_error(\n cls, data: Union[List[Any], Dict[str, Dict[str, str]]]\n ) -> Optional[RedditAPIException]:\n \"\"\"Convert JSON response into an error object.\n\n :param data: The dict to be converted.\n\n :returns: An instance of :class:`.RedditAPIException`, or ``None`` if ``data``\n doesn't fit this model.\n\n \"\"\"\n if isinstance(data, list):\n # Fetching a Submission returns a list (of two items). Although it's handled\n # manually in `Submission._fetch()`, assume it's a possibility here.\n return None\n\n errors = data.get(\"json\", {}).get(\"errors\")\n if errors is None:\n return None\n if len(errors) < 1:\n # See `Collection._fetch()`.\n raise ClientException(\"successful error response\", data)\n return RedditAPIException(errors)\n\n @classmethod\n def check_error(cls, data: Union[List[Any], Dict[str, Dict[str, str]]]):\n \"\"\"Raise an error if the argument resolves to an error object.\"\"\"\n error = cls.parse_error(data)\n if error:\n raise error\n\n def __init__(self, reddit: \"praw.Reddit\", parsers: Optional[Dict[str, Any]] = None):\n \"\"\"Initialize an :class:`.Objector` instance.\n\n :param reddit: An instance of :class:`.Reddit`.\n\n \"\"\"\n self.parsers = {} if parsers is None else parsers\n self._reddit = reddit\n\n def _objectify_dict(self, data):\n \"\"\"Create :class:`.RedditBase` objects from dicts.\n\n :param data: The structured data, assumed to be a dict.\n\n :returns: An instance of :class:`.RedditBase`.\n\n \"\"\"\n if {\"messages\", \"modActions\"}.issubset(data) and {\n \"conversations\",\n \"conversation\",\n }.intersection(data):\n data.update(\n data.pop(\"conversation\")\n if \"conversation\" in data\n else data.pop(\"conversations\")\n )\n parser = self.parsers[\"ModmailConversation\"]\n parser._convert_conversation_objects(data, self._reddit)\n elif {\"messages\", \"modActions\"}.issubset(data) or {\n \"legacyFirstMessageId\",\n \"state\",\n }.issubset(data):\n parser = self.parsers[\"ModmailConversation\"]\n elif {\"conversationIds\", \"conversations\", \"messages\"}.issubset(data):\n data[\"conversations\"] = [\n data[\"conversations\"][conversation_id]\n for conversation_id in data[\"conversationIds\"]\n ]\n data = snake_case_keys(data)\n parser = 
self.parsers[\"ModmailConversations-list\"]\n elif {\"actionTypeId\", \"author\", \"date\"}.issubset(data):\n # Modmail mod action\n data = snake_case_keys(data)\n parser = self.parsers[\"ModmailAction\"]\n elif {\"bodyMarkdown\", \"isInternal\"}.issubset(data):\n # Modmail message\n data = snake_case_keys(data)\n parser = self.parsers[\"ModmailMessage\"]\n elif {\"kind\", \"short_name\", \"violation_reason\"}.issubset(data):\n # This is a Rule\n parser = self.parsers[\"rule\"]\n elif {\"isAdmin\", \"isDeleted\"}.issubset(data):\n # Modmail author\n data = snake_case_keys(data)\n # Prevent clobbering base-36 id\n del data[\"id\"]\n data[\"is_subreddit_mod\"] = data.pop(\"is_mod\")\n parser = self.parsers[self._reddit.config.kinds[\"redditor\"]]\n elif {\"banStatus\", \"muteStatus\", \"recentComments\"}.issubset(data):\n # Modmail user\n data = snake_case_keys(data)\n data[\"created_string\"] = data.pop(\"created\")\n parser = self.parsers[self._reddit.config.kinds[\"redditor\"]]\n elif {\"displayName\", \"id\", \"type\"}.issubset(data):\n # Modmail subreddit\n data = snake_case_keys(data)\n parser = self.parsers[self._reddit.config.kinds[data[\"type\"]]]\n elif {\"date\", \"id\", \"name\"}.issubset(data) or {\n \"id\",\n \"name\",\n \"permissions\",\n }.issubset(data):\n parser = self.parsers[self._reddit.config.kinds[\"redditor\"]]\n elif {\"text\", \"url\"}.issubset(data):\n if \"color\" in data or \"linkUrl\" in data:\n parser = self.parsers[\"Button\"]\n else:\n parser = self.parsers[\"MenuLink\"]\n elif {\"children\", \"text\"}.issubset(data):\n parser = self.parsers[\"Submenu\"]\n elif {\"height\", \"url\", \"width\"}.issubset(data):\n parser = self.parsers[\"Image\"]\n elif {\"isSubscribed\", \"name\", \"subscribers\"}.issubset(data):\n # discards icon and subscribed information\n return self._reddit.subreddit(data[\"name\"])\n elif {\"authorFlairType\", \"name\"}.issubset(data):\n # discards flair information\n return self._reddit.redditor(data[\"name\"])\n elif {\"parent_id\"}.issubset(data):\n parser = self.parsers[self._reddit.config.kinds[\"comment\"]]\n elif \"collection_id\" in data.keys():\n parser = self.parsers[\"Collection\"]\n elif {\"moderators\", \"moderatorIds\", \"allUsersLoaded\", \"subredditId\"}.issubset(\n data\n ):\n data = snake_case_keys(data)\n moderators = []\n for mod_id in data[\"moderator_ids\"]:\n mod = snake_case_keys(data[\"moderators\"][mod_id])\n mod[\"mod_permissions\"] = list(mod[\"mod_permissions\"].keys())\n moderators.append(mod)\n data[\"moderators\"] = moderators\n parser = self.parsers[\"moderator-list\"]\n elif \"username\" in data.keys():\n data[\"name\"] = data.pop(\"username\")\n parser = self.parsers[self._reddit.config.kinds[\"redditor\"]]\n elif {\"mod_permissions\", \"name\", \"sr\", \"subscribers\"}.issubset(data):\n data[\"display_name\"] = data[\"sr\"]\n parser = self.parsers[self._reddit.config.kinds[\"subreddit\"]]\n elif {\"drafts\", \"subreddits\"}.issubset(data): # Draft list\n subreddit_parser = self.parsers[self._reddit.config.kinds[\"subreddit\"]]\n user_subreddit_parser = self.parsers[\"UserSubreddit\"]\n subreddits = {\n subreddit[\"name\"]: user_subreddit_parser.parse(subreddit, self._reddit)\n if subreddit[\"display_name_prefixed\"].startswith(\"u/\")\n else subreddit_parser.parse(subreddit, self._reddit)\n for subreddit in data.pop(\"subreddits\")\n }\n for draft in data[\"drafts\"]:\n if draft[\"subreddit\"]:\n draft[\"subreddit\"] = subreddits[draft[\"subreddit\"]]\n draft[\"modified\"] = datetime.fromtimestamp(\n 
draft[\"modified\"] / 1000\n ).astimezone()\n parser = self.parsers[\"DraftList\"]\n elif {\"mod_action_data\", \"user_note_data\"}.issubset(data):\n data[\"moderator\"] = self._reddit.redditor(data[\"operator\"])\n data[\"subreddit\"] = self._reddit.subreddit(data[\"subreddit\"])\n data[\"user\"] = self._reddit.redditor(data[\"user\"])\n # move these sub dict values into the main dict for simplicity\n data.update(data[\"mod_action_data\"])\n del data[\"mod_action_data\"]\n data.update(data[\"user_note_data\"])\n del data[\"user_note_data\"]\n parser = self.parsers[\"mod_note\"]\n elif (\n \"created\" in data\n and isinstance(data[\"created\"], dict)\n and {\"mod_action_data\", \"user_note_data\"}.issubset(data[\"created\"])\n ):\n data = data[\"created\"]\n return self._objectify_dict(data)\n else:\n if \"user\" in data:\n parser = self.parsers[self._reddit.config.kinds[\"redditor\"]]\n data[\"user\"] = parser.parse({\"name\": data[\"user\"]}, self._reddit)\n return data\n return parser.parse(data, self._reddit)\n\n def objectify(\n self, data: Optional[Union[Dict[str, Any], List[Any], bool]]\n ) -> Optional[Union[RedditBase, Dict[str, Any], List[Any], bool]]:\n \"\"\"Create :class:`.RedditBase` objects from data.\n\n :param data: The structured data.\n\n :returns: An instance of :class:`.RedditBase`, or ``None`` if given ``data`` is\n ``None``.\n\n \"\"\"\n # pylint: disable=too-many-return-statements\n if data is None: # 204 no content\n return None\n if isinstance(data, list):\n return [self.objectify(item) for item in data]\n if isinstance(data, bool): # Reddit.username_available\n return data\n if \"json\" in data and \"errors\" in data[\"json\"]:\n errors = data[\"json\"][\"errors\"]\n if len(errors) > 0:\n raise RedditAPIException(errors)\n if \"kind\" in data and (\n \"shortName\" in data or data[\"kind\"] in (\"menu\", \"moderators\")\n ):\n # This is a widget\n parser = self.parsers.get(data[\"kind\"], self.parsers[\"widget\"])\n return parser.parse(data, self._reddit)\n if {\"kind\", \"data\"}.issubset(data) and data[\"kind\"] in self.parsers:\n parser = self.parsers[data[\"kind\"]]\n if data[\"kind\"] == \"ModeratedList\":\n return parser.parse(data, self._reddit)\n else:\n return parser.parse(data[\"data\"], self._reddit)\n if \"json\" in data and \"data\" in data[\"json\"]:\n if \"websocket_url\" in data[\"json\"][\"data\"]:\n return data\n if \"things\" in data[\"json\"][\"data\"]: # Submission.reply\n return self.objectify(data[\"json\"][\"data\"][\"things\"])\n if \"rules\" in data[\"json\"][\"data\"]:\n return self.objectify(loads(data[\"json\"][\"data\"][\"rules\"]))\n if \"drafts_count\" in data[\"json\"][\"data\"] and all(\n [key not in data[\"json\"][\"data\"] for key in [\"name\", \"url\"]]\n ): # Draft\n data[\"json\"][\"data\"].pop(\"drafts_count\")\n return self.parsers[\"Draft\"].parse(data[\"json\"][\"data\"], self._reddit)\n if \"url\" in data[\"json\"][\"data\"]: # Subreddit.submit\n # The URL is the URL to the submission, so it's removed.\n del data[\"json\"][\"data\"][\"url\"]\n parser = self.parsers[self._reddit.config.kinds[\"submission\"]]\n if data[\"json\"][\"data\"][\"id\"].startswith(\n f\"{self._reddit.config.kinds['submission']}_\"\n ):\n # With polls, Reddit returns a fullname but calls it an \"id\". 
This\n # fixes this by coercing the fullname into an id.\n data[\"json\"][\"data\"][\"id\"] = data[\"json\"][\"data\"][\"id\"].split(\n \"_\", 1\n )[1]\n else:\n parser = self.parsers[\"LiveUpdateEvent\"]\n return parser.parse(data[\"json\"][\"data\"], self._reddit)\n if {\"is_public_link\", \"title\", \"body\"}.issubset(data):\n parser = self.parsers[\"Draft\"]\n return parser.parse(data, self._reddit)\n if \"rules\" in data:\n return self.objectify(data[\"rules\"])\n elif isinstance(data, dict):\n return self._objectify_dict(data)\n\n return data\n", "path": "praw/objector.py"}], "after_files": [{"content": "\"\"\"Provides the Objector class.\"\"\"\nfrom datetime import datetime\nfrom json import loads\nfrom typing import TYPE_CHECKING, Any, Dict, List, Optional, Union\n\nfrom .exceptions import ClientException, RedditAPIException\nfrom .models.reddit.base import RedditBase\nfrom .util import snake_case_keys\n\nif TYPE_CHECKING: # pragma: no cover\n import praw\n\n\nclass Objector:\n \"\"\"The objector builds :class:`.RedditBase` objects.\"\"\"\n\n @classmethod\n def parse_error(\n cls, data: Union[List[Any], Dict[str, Dict[str, str]]]\n ) -> Optional[RedditAPIException]:\n \"\"\"Convert JSON response into an error object.\n\n :param data: The dict to be converted.\n\n :returns: An instance of :class:`.RedditAPIException`, or ``None`` if ``data``\n doesn't fit this model.\n\n \"\"\"\n if isinstance(data, list):\n # Fetching a Submission returns a list (of two items). Although it's handled\n # manually in `Submission._fetch()`, assume it's a possibility here.\n return None\n\n errors = data.get(\"json\", {}).get(\"errors\")\n if errors is None:\n return None\n if len(errors) < 1:\n # See `Collection._fetch()`.\n raise ClientException(\"successful error response\", data)\n return RedditAPIException(errors)\n\n @classmethod\n def check_error(cls, data: Union[List[Any], Dict[str, Dict[str, str]]]):\n \"\"\"Raise an error if the argument resolves to an error object.\"\"\"\n error = cls.parse_error(data)\n if error:\n raise error\n\n def __init__(self, reddit: \"praw.Reddit\", parsers: Optional[Dict[str, Any]] = None):\n \"\"\"Initialize an :class:`.Objector` instance.\n\n :param reddit: An instance of :class:`.Reddit`.\n\n \"\"\"\n self.parsers = {} if parsers is None else parsers\n self._reddit = reddit\n\n def _objectify_dict(self, data):\n \"\"\"Create :class:`.RedditBase` objects from dicts.\n\n :param data: The structured data, assumed to be a dict.\n\n :returns: An instance of :class:`.RedditBase`.\n\n \"\"\"\n if {\"messages\", \"modActions\"}.issubset(data) and {\n \"conversations\",\n \"conversation\",\n }.intersection(data):\n # fetched conversation\n data.update(\n data.pop(\"conversation\")\n if \"conversation\" in data\n else data.pop(\"conversations\")\n )\n parser = self.parsers[\"ModmailConversation\"]\n parser._convert_conversation_objects(data, self._reddit)\n elif {\"messages\", \"modActions\"}.issubset(data) or {\n \"legacyFirstMessageId\",\n \"state\",\n }.issubset(data):\n # not fetched conversation i.e., from conversations()\n del data[\"objIds\"] # delete objIds since it could be missing data\n parser = self.parsers[\"ModmailConversation\"]\n elif {\"conversationIds\", \"conversations\", \"messages\"}.issubset(data):\n # modmail conversations\n conversations = []\n for conversation_id in data[\"conversationIds\"]:\n conversation = data[\"conversations\"][conversation_id]\n # set if the numMessages is same as number of messages in objIds\n if 
conversation[\"numMessages\"] == len(\n [obj for obj in conversation[\"objIds\"] if obj[\"key\"] == \"messages\"]\n ):\n conversation[\"messages\"] = [\n self.objectify(data[\"messages\"][obj_id[\"id\"]])\n for obj_id in conversation[\"objIds\"]\n ]\n conversations.append(conversation)\n data[\"conversations\"] = conversations\n data = snake_case_keys(data)\n parser = self.parsers[\"ModmailConversations-list\"]\n elif {\"actionTypeId\", \"author\", \"date\"}.issubset(data):\n # Modmail mod action\n data = snake_case_keys(data)\n parser = self.parsers[\"ModmailAction\"]\n elif {\"bodyMarkdown\", \"isInternal\"}.issubset(data):\n # Modmail message\n data = snake_case_keys(data)\n parser = self.parsers[\"ModmailMessage\"]\n elif {\"kind\", \"short_name\", \"violation_reason\"}.issubset(data):\n # This is a Rule\n parser = self.parsers[\"rule\"]\n elif {\"isAdmin\", \"isDeleted\"}.issubset(data):\n # Modmail author\n data = snake_case_keys(data)\n # Prevent clobbering base-36 id\n del data[\"id\"]\n data[\"is_subreddit_mod\"] = data.pop(\"is_mod\")\n parser = self.parsers[self._reddit.config.kinds[\"redditor\"]]\n elif {\"banStatus\", \"muteStatus\", \"recentComments\"}.issubset(data):\n # Modmail user\n data = snake_case_keys(data)\n data[\"created_string\"] = data.pop(\"created\")\n parser = self.parsers[self._reddit.config.kinds[\"redditor\"]]\n elif {\"displayName\", \"id\", \"type\"}.issubset(data):\n # Modmail subreddit\n data = snake_case_keys(data)\n parser = self.parsers[self._reddit.config.kinds[data[\"type\"]]]\n elif {\"date\", \"id\", \"name\"}.issubset(data) or {\n \"id\",\n \"name\",\n \"permissions\",\n }.issubset(data):\n parser = self.parsers[self._reddit.config.kinds[\"redditor\"]]\n elif {\"text\", \"url\"}.issubset(data):\n if \"color\" in data or \"linkUrl\" in data:\n parser = self.parsers[\"Button\"]\n else:\n parser = self.parsers[\"MenuLink\"]\n elif {\"children\", \"text\"}.issubset(data):\n parser = self.parsers[\"Submenu\"]\n elif {\"height\", \"url\", \"width\"}.issubset(data):\n parser = self.parsers[\"Image\"]\n elif {\"isSubscribed\", \"name\", \"subscribers\"}.issubset(data):\n # discards icon and subscribed information\n return self._reddit.subreddit(data[\"name\"])\n elif {\"authorFlairType\", \"name\"}.issubset(data):\n # discards flair information\n return self._reddit.redditor(data[\"name\"])\n elif {\"parent_id\"}.issubset(data):\n parser = self.parsers[self._reddit.config.kinds[\"comment\"]]\n elif \"collection_id\" in data.keys():\n parser = self.parsers[\"Collection\"]\n elif {\"moderators\", \"moderatorIds\", \"allUsersLoaded\", \"subredditId\"}.issubset(\n data\n ):\n data = snake_case_keys(data)\n moderators = []\n for mod_id in data[\"moderator_ids\"]:\n mod = snake_case_keys(data[\"moderators\"][mod_id])\n mod[\"mod_permissions\"] = list(mod[\"mod_permissions\"].keys())\n moderators.append(mod)\n data[\"moderators\"] = moderators\n parser = self.parsers[\"moderator-list\"]\n elif \"username\" in data.keys():\n data[\"name\"] = data.pop(\"username\")\n parser = self.parsers[self._reddit.config.kinds[\"redditor\"]]\n elif {\"mod_permissions\", \"name\", \"sr\", \"subscribers\"}.issubset(data):\n data[\"display_name\"] = data[\"sr\"]\n parser = self.parsers[self._reddit.config.kinds[\"subreddit\"]]\n elif {\"drafts\", \"subreddits\"}.issubset(data): # Draft list\n subreddit_parser = self.parsers[self._reddit.config.kinds[\"subreddit\"]]\n user_subreddit_parser = self.parsers[\"UserSubreddit\"]\n subreddits = {\n subreddit[\"name\"]: 
user_subreddit_parser.parse(subreddit, self._reddit)\n if subreddit[\"display_name_prefixed\"].startswith(\"u/\")\n else subreddit_parser.parse(subreddit, self._reddit)\n for subreddit in data.pop(\"subreddits\")\n }\n for draft in data[\"drafts\"]:\n if draft[\"subreddit\"]:\n draft[\"subreddit\"] = subreddits[draft[\"subreddit\"]]\n draft[\"modified\"] = datetime.fromtimestamp(\n draft[\"modified\"] / 1000\n ).astimezone()\n parser = self.parsers[\"DraftList\"]\n elif {\"mod_action_data\", \"user_note_data\"}.issubset(data):\n data[\"moderator\"] = self._reddit.redditor(data[\"operator\"])\n data[\"subreddit\"] = self._reddit.subreddit(data[\"subreddit\"])\n data[\"user\"] = self._reddit.redditor(data[\"user\"])\n # move these sub dict values into the main dict for simplicity\n data.update(data[\"mod_action_data\"])\n del data[\"mod_action_data\"]\n data.update(data[\"user_note_data\"])\n del data[\"user_note_data\"]\n parser = self.parsers[\"mod_note\"]\n elif (\n \"created\" in data\n and isinstance(data[\"created\"], dict)\n and {\"mod_action_data\", \"user_note_data\"}.issubset(data[\"created\"])\n ):\n data = data[\"created\"]\n return self._objectify_dict(data)\n else:\n if \"user\" in data:\n parser = self.parsers[self._reddit.config.kinds[\"redditor\"]]\n data[\"user\"] = parser.parse({\"name\": data[\"user\"]}, self._reddit)\n return data\n return parser.parse(data, self._reddit)\n\n def objectify(\n self, data: Optional[Union[Dict[str, Any], List[Any], bool]]\n ) -> Optional[Union[RedditBase, Dict[str, Any], List[Any], bool]]:\n \"\"\"Create :class:`.RedditBase` objects from data.\n\n :param data: The structured data.\n\n :returns: An instance of :class:`.RedditBase`, or ``None`` if given ``data`` is\n ``None``.\n\n \"\"\"\n # pylint: disable=too-many-return-statements\n if data is None: # 204 no content\n return None\n if isinstance(data, list):\n return [self.objectify(item) for item in data]\n if isinstance(data, bool): # Reddit.username_available\n return data\n if \"json\" in data and \"errors\" in data[\"json\"]:\n errors = data[\"json\"][\"errors\"]\n if len(errors) > 0:\n raise RedditAPIException(errors)\n if \"kind\" in data and (\n \"shortName\" in data or data[\"kind\"] in (\"menu\", \"moderators\")\n ):\n # This is a widget\n parser = self.parsers.get(data[\"kind\"], self.parsers[\"widget\"])\n return parser.parse(data, self._reddit)\n if {\"kind\", \"data\"}.issubset(data) and data[\"kind\"] in self.parsers:\n parser = self.parsers[data[\"kind\"]]\n if data[\"kind\"] == \"ModeratedList\":\n return parser.parse(data, self._reddit)\n else:\n return parser.parse(data[\"data\"], self._reddit)\n if \"json\" in data and \"data\" in data[\"json\"]:\n if \"websocket_url\" in data[\"json\"][\"data\"]:\n return data\n if \"things\" in data[\"json\"][\"data\"]: # Submission.reply\n return self.objectify(data[\"json\"][\"data\"][\"things\"])\n if \"rules\" in data[\"json\"][\"data\"]:\n return self.objectify(loads(data[\"json\"][\"data\"][\"rules\"]))\n if \"drafts_count\" in data[\"json\"][\"data\"] and all(\n [key not in data[\"json\"][\"data\"] for key in [\"name\", \"url\"]]\n ): # Draft\n data[\"json\"][\"data\"].pop(\"drafts_count\")\n return self.parsers[\"Draft\"].parse(data[\"json\"][\"data\"], self._reddit)\n if \"url\" in data[\"json\"][\"data\"]: # Subreddit.submit\n # The URL is the URL to the submission, so it's removed.\n del data[\"json\"][\"data\"][\"url\"]\n parser = self.parsers[self._reddit.config.kinds[\"submission\"]]\n if 
data[\"json\"][\"data\"][\"id\"].startswith(\n f\"{self._reddit.config.kinds['submission']}_\"\n ):\n # With polls, Reddit returns a fullname but calls it an \"id\". This\n # fixes this by coercing the fullname into an id.\n data[\"json\"][\"data\"][\"id\"] = data[\"json\"][\"data\"][\"id\"].split(\n \"_\", 1\n )[1]\n else:\n parser = self.parsers[\"LiveUpdateEvent\"]\n return parser.parse(data[\"json\"][\"data\"], self._reddit)\n if {\"is_public_link\", \"title\", \"body\"}.issubset(data):\n parser = self.parsers[\"Draft\"]\n return parser.parse(data, self._reddit)\n if \"rules\" in data:\n return self.objectify(data[\"rules\"])\n elif isinstance(data, dict):\n return self._objectify_dict(data)\n\n return data\n", "path": "praw/objector.py"}]} | 3,903 | 388 |
gh_patches_debug_8362 | rasdani/github-patches | git_diff | getnikola__nikola-3036 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
RSS_PATH doesn't work as advertised (is path and filename, excluding .xml)
* Python Version: 3.5.3
* Nikola Version: v7.8.14
* Operating System: Debian
A fresh config says:
```
# Final location for the blog main RSS feed is:
# output / TRANSLATION[lang] / RSS_PATH / rss.xml
```
which is in line with other `_PATH` variables.
But it seems `RSS_PATH` is actually path+filename (and `.xml` is appended).
With `RSS_PATH = "blog/"` I get `render_taxonomies:output/blog/.xml` (instead of `blog/rss.xml`).
With `RSS_PATH = "blog/index.xml"` I get `render_taxonomies:output/blog/index.xml.xml`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nikola/plugins/task/indexes.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2018 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Render the blog's main index."""
28
29
30 from nikola.plugin_categories import Taxonomy
31
32
33 class Indexes(Taxonomy):
34 """Classify for the blog's main index."""
35
36 name = "classify_indexes"
37
38 classification_name = "index"
39 overview_page_variable_name = None
40 more_than_one_classifications_per_post = False
41 has_hierarchy = False
42 show_list_as_index = True
43 template_for_single_list = "index.tmpl"
44 template_for_classification_overview = None
45 apply_to_posts = True
46 apply_to_pages = False
47 omit_empty_classifications = False
48 path_handler_docstrings = {
49 'index_index': False,
50 'index': """Link to a numbered index.
51
52 Example:
53
54 link://index/3 => /index-3.html""",
55 'index_atom': """Link to a numbered Atom index.
56
57 Example:
58
59 link://index_atom/3 => /index-3.atom""",
60 'index_rss': """A link to the RSS feed path.
61
62 Example:
63
64 link://rss => /blog/rss.xml""",
65 }
66
67 def set_site(self, site):
68 """Set Nikola site."""
69 # Redirect automatically generated 'index_rss' path handler to 'rss' for compatibility with old rss plugin
70 site.register_path_handler('rss', lambda name, lang: site.path_handlers['index_rss'](name, lang))
71 site.path_handlers['rss'].__doc__ = """A link to the RSS feed path.
72
73 Example:
74
75 link://rss => /blog/rss.xml
76 """.strip()
77 return super(Indexes, self).set_site(site)
78
79 def get_implicit_classifications(self, lang):
80 """Return a list of classification strings which should always appear in posts_per_classification."""
81 return [""]
82
83 def classify(self, post, lang):
84 """Classify the given post for the given language."""
85 return [""]
86
87 def get_classification_friendly_name(self, classification, lang, only_last_component=False):
88 """Extract a friendly name from the classification."""
89 return self.site.config["BLOG_TITLE"](lang)
90
91 def get_path(self, classification, lang, dest_type='page'):
92 """Return a path for the given classification."""
93 if dest_type == 'rss':
94 return [self.site.config['RSS_PATH'](lang)], True
95 # 'page' (index) or 'feed' (Atom)
96 page_number = None
97 if dest_type == 'page':
98 # Interpret argument as page number
99 try:
100 page_number = int(classification)
101 except (ValueError, TypeError):
102 pass
103 return [self.site.config['INDEX_PATH'](lang)], 'always', page_number
104
105 def provide_context_and_uptodate(self, classification, lang, node=None):
106 """Provide data for the context and the uptodate list for the list of the given classifiation."""
107 kw = {
108 }
109 context = {
110 "title": self.site.config["INDEXES_TITLE"](lang) or self.site.config["BLOG_TITLE"](lang),
111 "description": self.site.config["BLOG_DESCRIPTION"](lang),
112 "pagekind": ["main_index", "index"],
113 }
114 kw.update(context)
115 return context, kw
116
117 def should_generate_classification_page(self, classification, post_list, lang):
118 """Only generates list of posts for classification if this function returns True."""
119 return not self.site.config["DISABLE_INDEXES_PLUGIN_INDEX_AND_ATOM_FEED"]
120
121 def should_generate_rss_for_classification_page(self, classification, post_list, lang):
122 """Only generates RSS feed for list of posts for classification if this function returns True."""
123 return not self.site.config["DISABLE_INDEXES_PLUGIN_RSS_FEED"]
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nikola/plugins/task/indexes.py b/nikola/plugins/task/indexes.py
--- a/nikola/plugins/task/indexes.py
+++ b/nikola/plugins/task/indexes.py
@@ -91,7 +91,7 @@
def get_path(self, classification, lang, dest_type='page'):
"""Return a path for the given classification."""
if dest_type == 'rss':
- return [self.site.config['RSS_PATH'](lang)], True
+ return [self.site.config['RSS_PATH'](lang), 'rss'], 'auto'
# 'page' (index) or 'feed' (Atom)
page_number = None
if dest_type == 'page':
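The one-line change above stops treating `RSS_PATH` as path plus file stem: the feed location is now returned as path components (`RSS_PATH`, then `rss`) with an `'auto'` extension flag, so the `rss.xml` name is appended by Nikola's path machinery instead of being baked into the setting. A toy model of the difference, using a hypothetical helper rather than Nikola's real code:

```python
import os

def rss_output_path(rss_path_setting, patched=True):
    """Hypothetical helper (illustration only) showing where the main feed ends up."""
    if patched:
        # RSS_PATH is a directory; 'rss' is the file stem and '.xml' is added automatically.
        parts = [p for p in (rss_path_setting, "rss") if p]
        return os.path.join("output", *parts) + ".xml"
    # Behaviour reported in the issue: '.xml' is appended to RSS_PATH itself.
    return os.path.join("output", rss_path_setting + ".xml")

print(rss_output_path("blog/"))         # output/blog/rss.xml  (documented location)
print(rss_output_path("blog/", False))  # output/blog/.xml     (the reported bug)
```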
| {"golden_diff": "diff --git a/nikola/plugins/task/indexes.py b/nikola/plugins/task/indexes.py\n--- a/nikola/plugins/task/indexes.py\n+++ b/nikola/plugins/task/indexes.py\n@@ -91,7 +91,7 @@\n def get_path(self, classification, lang, dest_type='page'):\n \"\"\"Return a path for the given classification.\"\"\"\n if dest_type == 'rss':\n- return [self.site.config['RSS_PATH'](lang)], True\n+ return [self.site.config['RSS_PATH'](lang), 'rss'], 'auto'\n # 'page' (index) or 'feed' (Atom)\n page_number = None\n if dest_type == 'page':\n", "issue": "RSS_PATH doesn't work as advertised (is path and filename, excluding .xml)\n* Python Version: 3.5.3\r\n* Nikola Version: v7.8.14\r\n* Operating System: Debian\r\n\r\nA fresh config says:\r\n\r\n```\r\n# Final location for the blog main RSS feed is:\r\n# output / TRANSLATION[lang] / RSS_PATH / rss.xml\r\n```\r\n\r\nwhich is in line with other `_PATH` variables.\r\n\r\nBut it seems `RSS_PATH` is actually path+filename (and `.xml` is appended).\r\n\r\nWith `RSS_PATH = \"blog/`I get `render_taxonomies:output/blog/.xml` (instead of `blog/rss.xml`)\r\n\r\nWith `RSS_PATH = blog/index.xml` I get `render_taxonomies:output/blog/index.xml.xml`\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2018 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Render the blog's main index.\"\"\"\n\n\nfrom nikola.plugin_categories import Taxonomy\n\n\nclass Indexes(Taxonomy):\n \"\"\"Classify for the blog's main index.\"\"\"\n\n name = \"classify_indexes\"\n\n classification_name = \"index\"\n overview_page_variable_name = None\n more_than_one_classifications_per_post = False\n has_hierarchy = False\n show_list_as_index = True\n template_for_single_list = \"index.tmpl\"\n template_for_classification_overview = None\n apply_to_posts = True\n apply_to_pages = False\n omit_empty_classifications = False\n path_handler_docstrings = {\n 'index_index': False,\n 'index': \"\"\"Link to a numbered index.\n\nExample:\n\nlink://index/3 => /index-3.html\"\"\",\n 'index_atom': \"\"\"Link to a numbered Atom index.\n\nExample:\n\nlink://index_atom/3 => /index-3.atom\"\"\",\n 'index_rss': \"\"\"A link to the RSS feed path.\n\nExample:\n\nlink://rss => /blog/rss.xml\"\"\",\n }\n\n def set_site(self, site):\n \"\"\"Set Nikola site.\"\"\"\n # Redirect automatically generated 'index_rss' path handler to 'rss' for compatibility with old rss plugin\n site.register_path_handler('rss', lambda name, lang: site.path_handlers['index_rss'](name, lang))\n site.path_handlers['rss'].__doc__ = \"\"\"A link to the RSS feed path.\n\nExample:\n\n link://rss => /blog/rss.xml\n \"\"\".strip()\n return super(Indexes, self).set_site(site)\n\n def get_implicit_classifications(self, lang):\n \"\"\"Return a list of classification strings which should always appear in posts_per_classification.\"\"\"\n return [\"\"]\n\n def classify(self, post, lang):\n \"\"\"Classify the given post for the given language.\"\"\"\n return [\"\"]\n\n def get_classification_friendly_name(self, classification, lang, only_last_component=False):\n \"\"\"Extract a friendly name from the classification.\"\"\"\n return self.site.config[\"BLOG_TITLE\"](lang)\n\n def get_path(self, classification, lang, dest_type='page'):\n \"\"\"Return a path for the given classification.\"\"\"\n if dest_type == 'rss':\n return [self.site.config['RSS_PATH'](lang)], True\n # 'page' (index) or 'feed' (Atom)\n page_number = None\n if dest_type == 'page':\n # Interpret argument as page number\n try:\n page_number = int(classification)\n except (ValueError, TypeError):\n pass\n return [self.site.config['INDEX_PATH'](lang)], 'always', page_number\n\n def provide_context_and_uptodate(self, classification, lang, node=None):\n \"\"\"Provide data for the context and the uptodate list for the list of the given classifiation.\"\"\"\n kw = {\n }\n context = {\n \"title\": self.site.config[\"INDEXES_TITLE\"](lang) or self.site.config[\"BLOG_TITLE\"](lang),\n \"description\": self.site.config[\"BLOG_DESCRIPTION\"](lang),\n \"pagekind\": [\"main_index\", \"index\"],\n }\n kw.update(context)\n return context, kw\n\n def should_generate_classification_page(self, classification, post_list, lang):\n \"\"\"Only generates list of posts for classification if this function returns True.\"\"\"\n return not self.site.config[\"DISABLE_INDEXES_PLUGIN_INDEX_AND_ATOM_FEED\"]\n\n def should_generate_rss_for_classification_page(self, classification, post_list, lang):\n \"\"\"Only generates RSS feed for list of posts for classification if this function returns True.\"\"\"\n return not 
self.site.config[\"DISABLE_INDEXES_PLUGIN_RSS_FEED\"]\n", "path": "nikola/plugins/task/indexes.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2018 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Render the blog's main index.\"\"\"\n\n\nfrom nikola.plugin_categories import Taxonomy\n\n\nclass Indexes(Taxonomy):\n \"\"\"Classify for the blog's main index.\"\"\"\n\n name = \"classify_indexes\"\n\n classification_name = \"index\"\n overview_page_variable_name = None\n more_than_one_classifications_per_post = False\n has_hierarchy = False\n show_list_as_index = True\n template_for_single_list = \"index.tmpl\"\n template_for_classification_overview = None\n apply_to_posts = True\n apply_to_pages = False\n omit_empty_classifications = False\n path_handler_docstrings = {\n 'index_index': False,\n 'index': \"\"\"Link to a numbered index.\n\nExample:\n\nlink://index/3 => /index-3.html\"\"\",\n 'index_atom': \"\"\"Link to a numbered Atom index.\n\nExample:\n\nlink://index_atom/3 => /index-3.atom\"\"\",\n 'index_rss': \"\"\"A link to the RSS feed path.\n\nExample:\n\nlink://rss => /blog/rss.xml\"\"\",\n }\n\n def set_site(self, site):\n \"\"\"Set Nikola site.\"\"\"\n # Redirect automatically generated 'index_rss' path handler to 'rss' for compatibility with old rss plugin\n site.register_path_handler('rss', lambda name, lang: site.path_handlers['index_rss'](name, lang))\n site.path_handlers['rss'].__doc__ = \"\"\"A link to the RSS feed path.\n\nExample:\n\n link://rss => /blog/rss.xml\n \"\"\".strip()\n return super(Indexes, self).set_site(site)\n\n def get_implicit_classifications(self, lang):\n \"\"\"Return a list of classification strings which should always appear in posts_per_classification.\"\"\"\n return [\"\"]\n\n def classify(self, post, lang):\n \"\"\"Classify the given post for the given language.\"\"\"\n return [\"\"]\n\n def get_classification_friendly_name(self, classification, lang, only_last_component=False):\n \"\"\"Extract a friendly name from the classification.\"\"\"\n return self.site.config[\"BLOG_TITLE\"](lang)\n\n def get_path(self, classification, lang, dest_type='page'):\n \"\"\"Return a path for the given classification.\"\"\"\n if dest_type == 'rss':\n return [self.site.config['RSS_PATH'](lang), 'rss'], 'auto'\n # 'page' (index) or 'feed' (Atom)\n page_number = None\n if dest_type == 'page':\n # Interpret argument as page number\n try:\n page_number = int(classification)\n 
except (ValueError, TypeError):\n pass\n return [self.site.config['INDEX_PATH'](lang)], 'always', page_number\n\n def provide_context_and_uptodate(self, classification, lang, node=None):\n \"\"\"Provide data for the context and the uptodate list for the list of the given classifiation.\"\"\"\n kw = {\n }\n context = {\n \"title\": self.site.config[\"INDEXES_TITLE\"](lang) or self.site.config[\"BLOG_TITLE\"](lang),\n \"description\": self.site.config[\"BLOG_DESCRIPTION\"](lang),\n \"pagekind\": [\"main_index\", \"index\"],\n }\n kw.update(context)\n return context, kw\n\n def should_generate_classification_page(self, classification, post_list, lang):\n \"\"\"Only generates list of posts for classification if this function returns True.\"\"\"\n return not self.site.config[\"DISABLE_INDEXES_PLUGIN_INDEX_AND_ATOM_FEED\"]\n\n def should_generate_rss_for_classification_page(self, classification, post_list, lang):\n \"\"\"Only generates RSS feed for list of posts for classification if this function returns True.\"\"\"\n return not self.site.config[\"DISABLE_INDEXES_PLUGIN_RSS_FEED\"]\n", "path": "nikola/plugins/task/indexes.py"}]} | 1,735 | 152 |
gh_patches_debug_33474 | rasdani/github-patches | git_diff | privacyidea__privacyidea-3231 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support for SSH Token based on ed25519-sk or ecdsa-sk
**Is your feature request related to a problem? Please describe.**
What are you trying to achieve?
I am trying to add an SSH public key token based on ed25519-sk (associated with a Yubikey). The enrollment fails with the traceback below:
[2021-06-21 11:46:36,715][1177539][139641674618752][ERROR][privacyidea.app:1891] Exception on /token/init [POST]
Traceback (most recent call last):
File "/opt/privacyidea/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/opt/privacyidea/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/opt/privacyidea/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/opt/privacyidea/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/opt/privacyidea/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/opt/privacyidea/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/lib/prepolicy.py", line 154, in policy_wrapper
return wrapped_function(*args, **kwds)
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/lib/prepolicy.py", line 154, in policy_wrapper
return wrapped_function(*args, **kwds)
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/lib/prepolicy.py", line 154, in policy_wrapper
return wrapped_function(*args, **kwds)
[Previous line repeated 21 more times]
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/lib/postpolicy.py", line 108, in policy_wrapper
response = wrapped_function(*args, **kwds)
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/subscriptions.py", line 333, in check_subscription_wrapper
f_result = func(*args, **kwds)
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/event.py", line 99, in event_wrapper
f_result = func(*args, **kwds)
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/log.py", line 155, in log_wrapper
return func(*args, **kwds)
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/token.py", line 270, in init
tokenobject = init_token(param,
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/log.py", line 155, in log_wrapper
return func(*args, **kwds)
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/token.py", line 1085, in init_token
tokenobject.update(upd_params)
File "/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/tokens/sshkeytoken.py", line 131, in update
raise Exception("The keytype you specified is not supported.")
Exception: The keytype you specified is not supported.
**Describe the solution you'd like**
I want to be able to register this kind of key type.
**Describe alternatives you've considered**
Using the GPG slot and generating an RSA public key instead.
--- END ISSUE ---
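The rejected keys are FIDO/U2F-backed OpenSSH keys, whose public key lines start with the `sk-` key types rather than plain `ssh-ed25519` or `ecdsa-sha2-nistp256`, so they never match the allow-list checked in the traceback above. A quick illustration of the failing check (the key blob and comment are made up; the allow-list mirrors the token code in the file listing below):

```python
# Hypothetical public key line as produced by `ssh-keygen -t ed25519-sk` (blob is fake).
pubkey = "[email protected] AAAA...made-up-key-blob... user@host"

ALLOWED_PRE_PATCH = ["ssh-rsa", "ssh-ed25519", "ecdsa-sha2-nistp256"]

key_type = pubkey.split(" ", 2)[0]
print(key_type in ALLOWED_PRE_PATCH)  # False -> "The keytype you specified is not supported."
```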
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `privacyidea/lib/tokens/sshkeytoken.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # privacyIDEA
4 # Jul 18, 2014 Cornelius Kölbel
5 # License: AGPLv3
6 # contact: http://www.privacyidea.org
7 #
8 # This code is free software; you can redistribute it and/or
9 # modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE
10 # License as published by the Free Software Foundation; either
11 # version 3 of the License, or any later version.
12 #
13 # This code is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU AFFERO GENERAL PUBLIC LICENSE for more details.
17 #
18 # You should have received a copy of the GNU Affero General Public
19 # License along with this program. If not, see <http://www.gnu.org/licenses/>.
20 #
21 __doc__="""The SSHKeyTokenClass provides a TokenClass that stores the public
22 SSH key. This can be used to manage SSH keys and retrieve the public ssh key
23 to import it to authorized keys files.
24
25 The code is tested in tests/test_lib_tokens_ssh
26 """
27
28 import logging
29 from privacyidea.lib import _
30 from privacyidea.api.lib.utils import getParam
31 from privacyidea.lib.log import log_with
32 from privacyidea.lib.tokenclass import TokenClass, ROLLOUTSTATE
33 from privacyidea.lib.policy import SCOPE, ACTION, GROUP
34
35 log = logging.getLogger(__name__)
36
37
38 optional = True
39 required = False
40
41
42 ##TODO: We should save a fingerprint of the SSH Key in the encrypted OTP
43 # field, so that we can be sure, that the public ssh key was not changed in
44 # the database!
45
46
47 class SSHkeyTokenClass(TokenClass):
48 """
49 The SSHKeyTokenClass provides a TokenClass that stores the public
50 SSH key. This can be used to manage SSH keys and retrieve the public ssh key
51 to import it to authorized keys files.
52 """
53 mode = ['authenticate']
54 using_pin = False
55
56 def __init__(self, db_token):
57 TokenClass.__init__(self, db_token)
58 self.set_type(u"sshkey")
59
60 @staticmethod
61 def get_class_type():
62 return "sshkey"
63
64 @staticmethod
65 def get_class_prefix():
66 return "SSHK"
67
68 @staticmethod
69 @log_with(log)
70 def get_class_info(key=None, ret='all'):
71 """
72 returns a subtree of the token definition
73
74 :param key: subsection identifier
75 :type key: string
76 :param ret: default return value, if nothing is found
77 :type ret: user defined
78 :return: subsection if key exists or user defined
79 :rtype: dictionary
80 """
81 res = {'type': 'sshkey',
82 'title': 'SSHkey Token',
83 'description': _('SSH Public Key: The public SSH key.'),
84 'config': {},
85 'user': ['enroll'],
86 # This tokentype is enrollable in the UI for...
87 'ui_enroll': ["admin", "user"],
88 'policy': {
89 SCOPE.ENROLL: {
90 ACTION.MAXTOKENUSER: {
91 'type': 'int',
92 'desc': _("The user may only have this maximum number of SSH keys assigned."),
93 'group': GROUP.TOKEN
94 },
95 ACTION.MAXACTIVETOKENUSER: {
96 'type': 'int',
97 'desc': _(
98 "The user may only have this maximum number of active SSH keys assigned."),
99 'group': GROUP.TOKEN
100 }
101 }
102 },
103 }
104 if key:
105 ret = res.get(key, {})
106 else:
107 if ret == 'all':
108 ret = res
109
110 return ret
111
112 def update(self, param):
113 """
114 The key holds the public ssh key and this is required
115
116 The key probably is of the form "ssh-rsa BASE64 comment"
117 """
118 # We need to save the token, so that we can later add the tokeninfo
119 # Otherwise we might not have created the DB entry, yet and we would
120 # be missing the token.id
121 self.token.save()
122
123 getParam(param, "sshkey", required)
124
125 key_elem = param.get("sshkey").split(" ", 2)
126 if key_elem[0] not in ["ssh-rsa", "ssh-ed25519", "ecdsa-sha2-nistp256"]:
127 self.token.rollout_state = ROLLOUTSTATE.BROKEN
128 self.token.save()
129 raise Exception("The keytype you specified is not supported.")
130
131 if len(key_elem) < 2:
132 self.token.rollout_state = ROLLOUTSTATE.BROKEN
133 self.token.save()
134 raise Exception("Missing key.")
135
136 key_type = key_elem[0]
137 key = key_elem[1]
138 if len(key_elem) > 2:
139 key_comment = key_elem[2]
140 else:
141 key_comment = ""
142
143 # convert key to hex
144 self.add_tokeninfo("ssh_key", key, value_type="password")
145 self.add_tokeninfo("ssh_type", key_type)
146 self.add_tokeninfo("ssh_comment", key_comment)
147
148 # call the parents function
149 TokenClass.update(self, param)
150
151 @log_with(log)
152 def get_sshkey(self):
153 """
154 returns the public SSH key
155
156 :return: SSH pub key
157 :rtype: string
158 """
159 ti = self.get_tokeninfo()
160 key_type = ti.get("ssh_type")
161 key_comment = ti.get("ssh_comment")
162 # get the ssh key directly, otherwise it will not be decrypted
163 sshkey = self.get_tokeninfo("ssh_key")
164 return u"{0!s} {1!s} {2!s}".format(key_type, sshkey, key_comment)
165
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/privacyidea/lib/tokens/sshkeytoken.py b/privacyidea/lib/tokens/sshkeytoken.py
--- a/privacyidea/lib/tokens/sshkeytoken.py
+++ b/privacyidea/lib/tokens/sshkeytoken.py
@@ -28,6 +28,7 @@
import logging
from privacyidea.lib import _
from privacyidea.api.lib.utils import getParam
+from privacyidea.lib.error import TokenAdminError
from privacyidea.lib.log import log_with
from privacyidea.lib.tokenclass import TokenClass, ROLLOUTSTATE
from privacyidea.lib.policy import SCOPE, ACTION, GROUP
@@ -121,17 +122,18 @@
self.token.save()
getParam(param, "sshkey", required)
-
+
key_elem = param.get("sshkey").split(" ", 2)
- if key_elem[0] not in ["ssh-rsa", "ssh-ed25519", "ecdsa-sha2-nistp256"]:
+ if key_elem[0] not in ["ssh-rsa", "ssh-ed25519", "ecdsa-sha2-nistp256",
+ "[email protected]", "[email protected]"]:
self.token.rollout_state = ROLLOUTSTATE.BROKEN
self.token.save()
- raise Exception("The keytype you specified is not supported.")
+ raise TokenAdminError("The keytype you specified is not supported.")
if len(key_elem) < 2:
self.token.rollout_state = ROLLOUTSTATE.BROKEN
self.token.save()
- raise Exception("Missing key.")
+ raise TokenAdminError("Missing key.")
key_type = key_elem[0]
key = key_elem[1]
@@ -161,4 +163,7 @@
key_comment = ti.get("ssh_comment")
# get the ssh key directly, otherwise it will not be decrypted
sshkey = self.get_tokeninfo("ssh_key")
- return u"{0!s} {1!s} {2!s}".format(key_type, sshkey, key_comment)
+ r = u"{0!s} {1!s}".format(key_type, sshkey)
+ if key_comment:
+ r += " " + key_comment
+ return r
| {"golden_diff": "diff --git a/privacyidea/lib/tokens/sshkeytoken.py b/privacyidea/lib/tokens/sshkeytoken.py\n--- a/privacyidea/lib/tokens/sshkeytoken.py\n+++ b/privacyidea/lib/tokens/sshkeytoken.py\n@@ -28,6 +28,7 @@\n import logging\n from privacyidea.lib import _\n from privacyidea.api.lib.utils import getParam\n+from privacyidea.lib.error import TokenAdminError\n from privacyidea.lib.log import log_with\n from privacyidea.lib.tokenclass import TokenClass, ROLLOUTSTATE\n from privacyidea.lib.policy import SCOPE, ACTION, GROUP\n@@ -121,17 +122,18 @@\n self.token.save()\n \n getParam(param, \"sshkey\", required)\n- \n+\n key_elem = param.get(\"sshkey\").split(\" \", 2)\n- if key_elem[0] not in [\"ssh-rsa\", \"ssh-ed25519\", \"ecdsa-sha2-nistp256\"]:\n+ if key_elem[0] not in [\"ssh-rsa\", \"ssh-ed25519\", \"ecdsa-sha2-nistp256\",\n+ \"[email protected]\", \"[email protected]\"]:\n self.token.rollout_state = ROLLOUTSTATE.BROKEN\n self.token.save()\n- raise Exception(\"The keytype you specified is not supported.\")\n+ raise TokenAdminError(\"The keytype you specified is not supported.\")\n \n if len(key_elem) < 2:\n self.token.rollout_state = ROLLOUTSTATE.BROKEN\n self.token.save()\n- raise Exception(\"Missing key.\")\n+ raise TokenAdminError(\"Missing key.\")\n \n key_type = key_elem[0]\n key = key_elem[1]\n@@ -161,4 +163,7 @@\n key_comment = ti.get(\"ssh_comment\")\n # get the ssh key directly, otherwise it will not be decrypted\n sshkey = self.get_tokeninfo(\"ssh_key\")\n- return u\"{0!s} {1!s} {2!s}\".format(key_type, sshkey, key_comment)\n+ r = u\"{0!s} {1!s}\".format(key_type, sshkey)\n+ if key_comment:\n+ r += \" \" + key_comment\n+ return r\n", "issue": "Support for SSH Token based on ed25519-sk or ecdsa-sk\n**Is your feature request related to a problem? 
Please describe.**\r\nWhat are you trying to achieve?\r\n\r\nI try to add a ssh public key token based on ed25519-sk (associated with a Yubikey)\r\n\r\n[2021-06-21 11:46:36,715][1177539][139641674618752][ERROR][privacyidea.app:1891] Exception on /token/init [POST]\r\nTraceback (most recent call last):\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/flask/app.py\", line 2447, in wsgi_app\r\n response = self.full_dispatch_request()\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/flask/app.py\", line 1952, in full_dispatch_request\r\n rv = self.handle_user_exception(e)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/flask/app.py\", line 1821, in handle_user_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/flask/_compat.py\", line 39, in reraise\r\n raise value\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/flask/app.py\", line 1950, in full_dispatch_request\r\n rv = self.dispatch_request()\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/flask/app.py\", line 1936, in dispatch_request\r\n return self.view_functions[rule.endpoint](**req.view_args)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/lib/prepolicy.py\", line 154, in policy_wrapper\r\n return wrapped_function(*args, **kwds)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/lib/prepolicy.py\", line 154, in policy_wrapper\r\n return wrapped_function(*args, **kwds)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/lib/prepolicy.py\", line 154, in policy_wrapper\r\n return wrapped_function(*args, **kwds)\r\n [Previous line repeated 21 more times]\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/lib/postpolicy.py\", line 108, in policy_wrapper\r\n response = wrapped_function(*args, **kwds)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/subscriptions.py\", line 333, in check_subscription_wrapper\r\n f_result = func(*args, **kwds)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/event.py\", line 99, in event_wrapper\r\n f_result = func(*args, **kwds)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/log.py\", line 155, in log_wrapper\r\n return func(*args, **kwds)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/api/token.py\", line 270, in init\r\n tokenobject = init_token(param,\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/log.py\", line 155, in log_wrapper\r\n return func(*args, **kwds)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/token.py\", line 1085, in init_token\r\n tokenobject.update(upd_params)\r\n File \"/opt/privacyidea/lib/python3.8/site-packages/privacyidea/lib/tokens/sshkeytoken.py\", line 131, in update\r\n raise Exception(\"The keytype you specified is not supported.\")\r\nException: The keytype you specified is not supported.\r\n\r\n**Describe the solution you'd like**\r\nA clear and concise description of what you want to happen.\r\n\r\nI want to register this kind of keytype\r\n\r\n**Describe alternatives you've considered**\r\nA clear and concise description of any alternative solutions or features you've considered.\r\n\r\nUsing gpg slot and generate rsa pubkey\r\n\r\n**Additional context**\r\nAdd any other context or screenshots, that might help us to better understand your idea, your need and your circumstances.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# privacyIDEA\n# 
Jul 18, 2014 Cornelius K\u00f6lbel\n# License: AGPLv3\n# contact: http://www.privacyidea.org\n#\n# This code is free software; you can redistribute it and/or\n# modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE\n# License as published by the Free Software Foundation; either\n# version 3 of the License, or any later version.\n#\n# This code is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU AFFERO GENERAL PUBLIC LICENSE for more details.\n#\n# You should have received a copy of the GNU Affero General Public\n# License along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n__doc__=\"\"\"The SSHKeyTokenClass provides a TokenClass that stores the public\nSSH key. This can be used to manage SSH keys and retrieve the public ssh key\nto import it to authorized keys files.\n\nThe code is tested in tests/test_lib_tokens_ssh\n\"\"\"\n\nimport logging\nfrom privacyidea.lib import _\nfrom privacyidea.api.lib.utils import getParam\nfrom privacyidea.lib.log import log_with\nfrom privacyidea.lib.tokenclass import TokenClass, ROLLOUTSTATE\nfrom privacyidea.lib.policy import SCOPE, ACTION, GROUP\n\nlog = logging.getLogger(__name__)\n\n\noptional = True\nrequired = False\n\n\n##TODO: We should save a fingerprint of the SSH Key in the encrypted OTP\n# field, so that we can be sure, that the public ssh key was not changed in\n# the database!\n\n\nclass SSHkeyTokenClass(TokenClass):\n \"\"\"\n The SSHKeyTokenClass provides a TokenClass that stores the public\n SSH key. This can be used to manage SSH keys and retrieve the public ssh key\n to import it to authorized keys files.\n \"\"\"\n mode = ['authenticate']\n using_pin = False\n\n def __init__(self, db_token):\n TokenClass.__init__(self, db_token)\n self.set_type(u\"sshkey\")\n\n @staticmethod\n def get_class_type():\n return \"sshkey\"\n\n @staticmethod\n def get_class_prefix():\n return \"SSHK\"\n\n @staticmethod\n @log_with(log)\n def get_class_info(key=None, ret='all'):\n \"\"\"\n returns a subtree of the token definition\n\n :param key: subsection identifier\n :type key: string\n :param ret: default return value, if nothing is found\n :type ret: user defined\n :return: subsection if key exists or user defined\n :rtype: dictionary\n \"\"\"\n res = {'type': 'sshkey',\n 'title': 'SSHkey Token',\n 'description': _('SSH Public Key: The public SSH key.'),\n 'config': {},\n 'user': ['enroll'],\n # This tokentype is enrollable in the UI for...\n 'ui_enroll': [\"admin\", \"user\"],\n 'policy': {\n SCOPE.ENROLL: {\n ACTION.MAXTOKENUSER: {\n 'type': 'int',\n 'desc': _(\"The user may only have this maximum number of SSH keys assigned.\"),\n 'group': GROUP.TOKEN\n },\n ACTION.MAXACTIVETOKENUSER: {\n 'type': 'int',\n 'desc': _(\n \"The user may only have this maximum number of active SSH keys assigned.\"),\n 'group': GROUP.TOKEN\n }\n }\n },\n }\n if key:\n ret = res.get(key, {})\n else:\n if ret == 'all':\n ret = res\n\n return ret\n\n def update(self, param):\n \"\"\"\n The key holds the public ssh key and this is required\n \n The key probably is of the form \"ssh-rsa BASE64 comment\"\n \"\"\"\n # We need to save the token, so that we can later add the tokeninfo\n # Otherwise we might not have created the DB entry, yet and we would\n # be missing the token.id\n self.token.save()\n\n getParam(param, \"sshkey\", required)\n \n key_elem = param.get(\"sshkey\").split(\" \", 2)\n if key_elem[0] not 
in [\"ssh-rsa\", \"ssh-ed25519\", \"ecdsa-sha2-nistp256\"]:\n self.token.rollout_state = ROLLOUTSTATE.BROKEN\n self.token.save()\n raise Exception(\"The keytype you specified is not supported.\")\n\n if len(key_elem) < 2:\n self.token.rollout_state = ROLLOUTSTATE.BROKEN\n self.token.save()\n raise Exception(\"Missing key.\")\n\n key_type = key_elem[0]\n key = key_elem[1]\n if len(key_elem) > 2:\n key_comment = key_elem[2]\n else:\n key_comment = \"\"\n \n # convert key to hex\n self.add_tokeninfo(\"ssh_key\", key, value_type=\"password\")\n self.add_tokeninfo(\"ssh_type\", key_type)\n self.add_tokeninfo(\"ssh_comment\", key_comment)\n\n # call the parents function\n TokenClass.update(self, param)\n \n @log_with(log)\n def get_sshkey(self):\n \"\"\"\n returns the public SSH key\n \n :return: SSH pub key\n :rtype: string\n \"\"\"\n ti = self.get_tokeninfo()\n key_type = ti.get(\"ssh_type\")\n key_comment = ti.get(\"ssh_comment\")\n # get the ssh key directly, otherwise it will not be decrypted\n sshkey = self.get_tokeninfo(\"ssh_key\")\n return u\"{0!s} {1!s} {2!s}\".format(key_type, sshkey, key_comment)\n", "path": "privacyidea/lib/tokens/sshkeytoken.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# privacyIDEA\n# Jul 18, 2014 Cornelius K\u00f6lbel\n# License: AGPLv3\n# contact: http://www.privacyidea.org\n#\n# This code is free software; you can redistribute it and/or\n# modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE\n# License as published by the Free Software Foundation; either\n# version 3 of the License, or any later version.\n#\n# This code is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU AFFERO GENERAL PUBLIC LICENSE for more details.\n#\n# You should have received a copy of the GNU Affero General Public\n# License along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n__doc__=\"\"\"The SSHKeyTokenClass provides a TokenClass that stores the public\nSSH key. This can be used to manage SSH keys and retrieve the public ssh key\nto import it to authorized keys files.\n\nThe code is tested in tests/test_lib_tokens_ssh\n\"\"\"\n\nimport logging\nfrom privacyidea.lib import _\nfrom privacyidea.api.lib.utils import getParam\nfrom privacyidea.lib.error import TokenAdminError\nfrom privacyidea.lib.log import log_with\nfrom privacyidea.lib.tokenclass import TokenClass, ROLLOUTSTATE\nfrom privacyidea.lib.policy import SCOPE, ACTION, GROUP\n\nlog = logging.getLogger(__name__)\n\n\noptional = True\nrequired = False\n\n\n##TODO: We should save a fingerprint of the SSH Key in the encrypted OTP\n# field, so that we can be sure, that the public ssh key was not changed in\n# the database!\n\n\nclass SSHkeyTokenClass(TokenClass):\n \"\"\"\n The SSHKeyTokenClass provides a TokenClass that stores the public\n SSH key. 
This can be used to manage SSH keys and retrieve the public ssh key\n to import it to authorized keys files.\n \"\"\"\n mode = ['authenticate']\n using_pin = False\n\n def __init__(self, db_token):\n TokenClass.__init__(self, db_token)\n self.set_type(u\"sshkey\")\n\n @staticmethod\n def get_class_type():\n return \"sshkey\"\n\n @staticmethod\n def get_class_prefix():\n return \"SSHK\"\n\n @staticmethod\n @log_with(log)\n def get_class_info(key=None, ret='all'):\n \"\"\"\n returns a subtree of the token definition\n\n :param key: subsection identifier\n :type key: string\n :param ret: default return value, if nothing is found\n :type ret: user defined\n :return: subsection if key exists or user defined\n :rtype: dictionary\n \"\"\"\n res = {'type': 'sshkey',\n 'title': 'SSHkey Token',\n 'description': _('SSH Public Key: The public SSH key.'),\n 'config': {},\n 'user': ['enroll'],\n # This tokentype is enrollable in the UI for...\n 'ui_enroll': [\"admin\", \"user\"],\n 'policy': {\n SCOPE.ENROLL: {\n ACTION.MAXTOKENUSER: {\n 'type': 'int',\n 'desc': _(\"The user may only have this maximum number of SSH keys assigned.\"),\n 'group': GROUP.TOKEN\n },\n ACTION.MAXACTIVETOKENUSER: {\n 'type': 'int',\n 'desc': _(\n \"The user may only have this maximum number of active SSH keys assigned.\"),\n 'group': GROUP.TOKEN\n }\n }\n },\n }\n if key:\n ret = res.get(key, {})\n else:\n if ret == 'all':\n ret = res\n\n return ret\n\n def update(self, param):\n \"\"\"\n The key holds the public ssh key and this is required\n \n The key probably is of the form \"ssh-rsa BASE64 comment\"\n \"\"\"\n # We need to save the token, so that we can later add the tokeninfo\n # Otherwise we might not have created the DB entry, yet and we would\n # be missing the token.id\n self.token.save()\n\n getParam(param, \"sshkey\", required)\n\n key_elem = param.get(\"sshkey\").split(\" \", 2)\n if key_elem[0] not in [\"ssh-rsa\", \"ssh-ed25519\", \"ecdsa-sha2-nistp256\",\n \"[email protected]\", \"[email protected]\"]:\n self.token.rollout_state = ROLLOUTSTATE.BROKEN\n self.token.save()\n raise TokenAdminError(\"The keytype you specified is not supported.\")\n\n if len(key_elem) < 2:\n self.token.rollout_state = ROLLOUTSTATE.BROKEN\n self.token.save()\n raise TokenAdminError(\"Missing key.\")\n\n key_type = key_elem[0]\n key = key_elem[1]\n if len(key_elem) > 2:\n key_comment = key_elem[2]\n else:\n key_comment = \"\"\n \n # convert key to hex\n self.add_tokeninfo(\"ssh_key\", key, value_type=\"password\")\n self.add_tokeninfo(\"ssh_type\", key_type)\n self.add_tokeninfo(\"ssh_comment\", key_comment)\n\n # call the parents function\n TokenClass.update(self, param)\n \n @log_with(log)\n def get_sshkey(self):\n \"\"\"\n returns the public SSH key\n \n :return: SSH pub key\n :rtype: string\n \"\"\"\n ti = self.get_tokeninfo()\n key_type = ti.get(\"ssh_type\")\n key_comment = ti.get(\"ssh_comment\")\n # get the ssh key directly, otherwise it will not be decrypted\n sshkey = self.get_tokeninfo(\"ssh_key\")\n r = u\"{0!s} {1!s}\".format(key_type, sshkey)\n if key_comment:\n r += \" \" + key_comment\n return r\n", "path": "privacyidea/lib/tokens/sshkeytoken.py"}]} | 2,895 | 542 |
gh_patches_debug_246 | rasdani/github-patches | git_diff | numpy__numpy-3245 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
2to3 run `standarderror` fixer
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/py3tool.py`
Content:
```
1 #!/usr/bin/env python3
2 # -*- python -*-
3 """
4 %prog SUBMODULE...
5
6 Hack to pipe submodules of Numpy through 2to3 and build them in-place
7 one-by-one.
8
9 Example usage:
10
11 python3 tools/py3tool.py testing distutils core
12
13 This will copy files to _py3k/numpy, add a dummy __init__.py and
14 version.py on the top level, and copy and 2to3 the files of the three
15 submodules.
16
17 When running py3tool again, only changed files are re-processed, which
18 makes the test-bugfix cycle faster.
19
20 """
21 from __future__ import division, absolute_import, print_function
22
23 from optparse import OptionParser
24 import shutil
25 import os
26 import sys
27 import re
28 import subprocess
29 import fnmatch
30
31 if os.environ.get('USE_2TO3CACHE'):
32 import lib2to3cache
33
34 BASE = os.path.normpath(os.path.join(os.path.dirname(__file__), '..'))
35 TEMP = os.path.normpath(os.path.join(BASE, '_py3k'))
36
37 SCRIPT_2TO3 = os.path.join(BASE, 'tools', '2to3.py')
38
39 EXTRA_2TO3_FLAGS = {
40 'numpy/core/defchararray.py': '-x unicode',
41 'numpy/compat/py3k.py': '-x unicode',
42 'numpy/ma/timer_comparison.py': 'skip',
43 }
44
45 # Names of fixers to skip when running 2to3. This is a complete list of
46 # available fixers, with fixers not currently skipped commented out.
47 FIXES_TO_SKIP = [
48 'apply',
49 'basestring',
50 'buffer',
51 'callable',
52 'dict',
53 'exec',
54 'execfile',
55 'exitfunc',
56 'filter',
57 'funcattrs',
58 'future',
59 'getcwdu',
60 'has_key',
61 # 'idioms',
62 'import',
63 'imports',
64 'imports2',
65 'input',
66 'intern',
67 # 'isinstance',
68 'itertools',
69 'itertools_imports',
70 'long',
71 'map',
72 'metaclass',
73 'methodattrs',
74 'ne',
75 # 'next',
76 'nonzero',
77 'numliterals',
78 'operator',
79 'paren',
80 'print',
81 'raise',
82 'raw_input',
83 'reduce',
84 'renames',
85 'repr',
86 'setliteral',
87 'standarderror',
88 'sys_exc',
89 'throw',
90 'tuple_params',
91 # 'types',
92 # 'unicode',
93 # 'urllib',
94 # 'ws_comma',
95 'xrange',
96 'xreadlines',
97 'zip',
98 ]
99
100 skip_fixes= []
101 for _t in FIXES_TO_SKIP:
102 skip_fixes.append('-x')
103 skip_fixes.append(_t)
104
105
106 def main():
107 p = OptionParser(usage=__doc__.strip())
108 p.add_option("--clean", "-c", action="store_true",
109 help="clean source directory")
110 options, args = p.parse_args()
111
112 if not args:
113 p.error('no submodules given')
114 else:
115 dirs = ['numpy/%s' % x for x in map(os.path.basename, args)]
116
117 # Prepare
118 if not os.path.isdir(TEMP):
119 os.makedirs(TEMP)
120
121 # Set up dummy files (for building only submodules)
122 dummy_files = {
123 '__init__.py': 'from numpy.version import version as __version__',
124 'version.py': 'version = "1.4.0.dev"'
125 }
126
127 for fn, content in dummy_files.items():
128 fn = os.path.join(TEMP, 'numpy', fn)
129 if not os.path.isfile(fn):
130 try:
131 os.makedirs(os.path.dirname(fn))
132 except OSError:
133 pass
134 f = open(fn, 'wb+')
135 f.write(content.encode('ascii'))
136 f.close()
137
138 # Environment
139 pp = [os.path.abspath(TEMP)]
140 def getenv():
141 env = dict(os.environ)
142 env.update({'PYTHONPATH': ':'.join(pp)})
143 return env
144
145 # Copy
146 for d in dirs:
147 src = os.path.join(BASE, d)
148 dst = os.path.join(TEMP, d)
149
150 # Run 2to3
151 sync_2to3(dst=dst,
152 src=src,
153 patchfile=os.path.join(TEMP, os.path.basename(d) + '.patch'),
154 clean=options.clean)
155
156 # Run setup.py, falling back to Pdb post-mortem on exceptions
157 setup_py = os.path.join(dst, 'setup.py')
158 if os.path.isfile(setup_py):
159 code = """\
160 import pdb, sys, traceback
161 p = pdb.Pdb()
162 try:
163 import __main__
164 __main__.__dict__.update({
165 "__name__": "__main__", "__file__": "setup.py",
166 "__builtins__": __builtins__})
167 fp = open("setup.py", "rb")
168 try:
169 exec(compile(fp.read(), "setup.py", 'exec'))
170 finally:
171 fp.close()
172 except SystemExit:
173 raise
174 except:
175 traceback.print_exc()
176 t = sys.exc_info()[2]
177 p.interaction(None, t)
178 """
179 ret = subprocess.call([sys.executable, '-c', code,
180 'build_ext', '-i'],
181 cwd=dst,
182 env=getenv())
183 if ret != 0:
184 raise RuntimeError("Build failed.")
185
186 # Run nosetests
187 subprocess.call(['nosetests3', '-v', d], cwd=TEMP)
188
189
190 def walk_sync(dir1, dir2, _seen=None):
191 if _seen is None:
192 seen = {}
193 else:
194 seen = _seen
195
196 if not dir1.endswith(os.path.sep):
197 dir1 = dir1 + os.path.sep
198
199 # Walk through stuff (which we haven't yet gone through) in dir1
200 for root, dirs, files in os.walk(dir1):
201 sub = root[len(dir1):]
202 if sub in seen:
203 dirs = [x for x in dirs if x not in seen[sub][0]]
204 files = [x for x in files if x not in seen[sub][1]]
205 seen[sub][0].extend(dirs)
206 seen[sub][1].extend(files)
207 else:
208 seen[sub] = (dirs, files)
209 if not dirs and not files:
210 continue
211 yield os.path.join(dir1, sub), os.path.join(dir2, sub), dirs, files
212
213 if _seen is None:
214 # Walk through stuff (which we haven't yet gone through) in dir2
215 for root2, root1, dirs, files in walk_sync(dir2, dir1, _seen=seen):
216 yield root1, root2, dirs, files
217
218 def sync_2to3(src, dst, patchfile=None, clean=False):
219 import lib2to3.main
220 from io import StringIO
221
222 to_convert = []
223
224 for src_dir, dst_dir, dirs, files in walk_sync(src, dst):
225 for fn in dirs + files:
226 src_fn = os.path.join(src_dir, fn)
227 dst_fn = os.path.join(dst_dir, fn)
228
229 # skip temporary etc. files
230 if fn.startswith('.#') or fn.endswith('~'):
231 continue
232
233 # remove non-existing
234 if os.path.exists(dst_fn) and not os.path.exists(src_fn):
235 if clean:
236 if os.path.isdir(dst_fn):
237 shutil.rmtree(dst_fn)
238 else:
239 os.unlink(dst_fn)
240 continue
241
242 # make directories
243 if os.path.isdir(src_fn):
244 if not os.path.isdir(dst_fn):
245 os.makedirs(dst_fn)
246 continue
247
248 dst_dir = os.path.dirname(dst_fn)
249 if os.path.isfile(dst_fn) and not os.path.isdir(dst_dir):
250 os.makedirs(dst_dir)
251
252 # don't replace up-to-date files
253 try:
254 if os.path.isfile(dst_fn) and \
255 os.stat(dst_fn).st_mtime >= os.stat(src_fn).st_mtime:
256 continue
257 except OSError:
258 pass
259
260 # copy file
261 shutil.copyfile(src_fn, dst_fn)
262
263 # add .py files to 2to3 list
264 if dst_fn.endswith('.py'):
265 to_convert.append((src_fn, dst_fn))
266
267 # run 2to3
268 flag_sets = {}
269 for fn, dst_fn in to_convert:
270 flag = ''
271 for pat, opt in EXTRA_2TO3_FLAGS.items():
272 if fnmatch.fnmatch(fn, pat):
273 flag = opt
274 break
275 flag_sets.setdefault(flag, []).append(dst_fn)
276
277 if patchfile:
278 p = open(patchfile, 'wb+')
279 else:
280 p = open(os.devnull, 'wb')
281
282 for flags, filenames in flag_sets.items():
283 if flags == 'skip':
284 continue
285
286 _old_stdout = sys.stdout
287 try:
288 sys.stdout = StringIO()
289 opt = []
290 opt.extend(['-w', '-n'])
291 opt.extend(skip_fixes)
292 opt.extend(flags.split())
293 opt.extend(filenames)
294 lib2to3.main.main("lib2to3.fixes", opt)
295 finally:
296 sys.stdout = _old_stdout
297
298 p.close()
299
300 if __name__ == "__main__":
301 main()
302
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/py3tool.py b/tools/py3tool.py
--- a/tools/py3tool.py
+++ b/tools/py3tool.py
@@ -64,7 +64,7 @@
'imports2',
'input',
'intern',
-# 'isinstance',
+ 'isinstance',
'itertools',
'itertools_imports',
'long',
| {"golden_diff": "diff --git a/tools/py3tool.py b/tools/py3tool.py\n--- a/tools/py3tool.py\n+++ b/tools/py3tool.py\n@@ -64,7 +64,7 @@\n 'imports2',\n 'input',\n 'intern',\n-# 'isinstance',\n+ 'isinstance',\n 'itertools',\n 'itertools_imports',\n 'long',\n", "issue": "2to3 run `standarderror` fixer\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n# -*- python -*-\n\"\"\"\n%prog SUBMODULE...\n\nHack to pipe submodules of Numpy through 2to3 and build them in-place\none-by-one.\n\nExample usage:\n\n python3 tools/py3tool.py testing distutils core\n\nThis will copy files to _py3k/numpy, add a dummy __init__.py and\nversion.py on the top level, and copy and 2to3 the files of the three\nsubmodules.\n\nWhen running py3tool again, only changed files are re-processed, which\nmakes the test-bugfix cycle faster.\n\n\"\"\"\nfrom __future__ import division, absolute_import, print_function\n\nfrom optparse import OptionParser\nimport shutil\nimport os\nimport sys\nimport re\nimport subprocess\nimport fnmatch\n\nif os.environ.get('USE_2TO3CACHE'):\n import lib2to3cache\n\nBASE = os.path.normpath(os.path.join(os.path.dirname(__file__), '..'))\nTEMP = os.path.normpath(os.path.join(BASE, '_py3k'))\n\nSCRIPT_2TO3 = os.path.join(BASE, 'tools', '2to3.py')\n\nEXTRA_2TO3_FLAGS = {\n 'numpy/core/defchararray.py': '-x unicode',\n 'numpy/compat/py3k.py': '-x unicode',\n 'numpy/ma/timer_comparison.py': 'skip',\n}\n\n# Names of fixers to skip when running 2to3. This is a complete list of\n# available fixers, with fixers not currently skipped commented out.\nFIXES_TO_SKIP = [\n 'apply',\n 'basestring',\n 'buffer',\n 'callable',\n 'dict',\n 'exec',\n 'execfile',\n 'exitfunc',\n 'filter',\n 'funcattrs',\n 'future',\n 'getcwdu',\n 'has_key',\n# 'idioms',\n 'import',\n 'imports',\n 'imports2',\n 'input',\n 'intern',\n# 'isinstance',\n 'itertools',\n 'itertools_imports',\n 'long',\n 'map',\n 'metaclass',\n 'methodattrs',\n 'ne',\n# 'next',\n 'nonzero',\n 'numliterals',\n 'operator',\n 'paren',\n 'print',\n 'raise',\n 'raw_input',\n 'reduce',\n 'renames',\n 'repr',\n 'setliteral',\n 'standarderror',\n 'sys_exc',\n 'throw',\n 'tuple_params',\n# 'types',\n# 'unicode',\n# 'urllib',\n# 'ws_comma',\n 'xrange',\n 'xreadlines',\n 'zip',\n]\n\nskip_fixes= []\nfor _t in FIXES_TO_SKIP:\n skip_fixes.append('-x')\n skip_fixes.append(_t)\n\n\ndef main():\n p = OptionParser(usage=__doc__.strip())\n p.add_option(\"--clean\", \"-c\", action=\"store_true\",\n help=\"clean source directory\")\n options, args = p.parse_args()\n\n if not args:\n p.error('no submodules given')\n else:\n dirs = ['numpy/%s' % x for x in map(os.path.basename, args)]\n\n # Prepare\n if not os.path.isdir(TEMP):\n os.makedirs(TEMP)\n\n # Set up dummy files (for building only submodules)\n dummy_files = {\n '__init__.py': 'from numpy.version import version as __version__',\n 'version.py': 'version = \"1.4.0.dev\"'\n }\n\n for fn, content in dummy_files.items():\n fn = os.path.join(TEMP, 'numpy', fn)\n if not os.path.isfile(fn):\n try:\n os.makedirs(os.path.dirname(fn))\n except OSError:\n pass\n f = open(fn, 'wb+')\n f.write(content.encode('ascii'))\n f.close()\n\n # Environment\n pp = [os.path.abspath(TEMP)]\n def getenv():\n env = dict(os.environ)\n env.update({'PYTHONPATH': ':'.join(pp)})\n return env\n\n # Copy\n for d in dirs:\n src = os.path.join(BASE, d)\n dst = os.path.join(TEMP, d)\n\n # Run 2to3\n sync_2to3(dst=dst,\n src=src,\n patchfile=os.path.join(TEMP, os.path.basename(d) + '.patch'),\n clean=options.clean)\n\n # Run setup.py, falling back to 
Pdb post-mortem on exceptions\n setup_py = os.path.join(dst, 'setup.py')\n if os.path.isfile(setup_py):\n code = \"\"\"\\\nimport pdb, sys, traceback\np = pdb.Pdb()\ntry:\n import __main__\n __main__.__dict__.update({\n \"__name__\": \"__main__\", \"__file__\": \"setup.py\",\n \"__builtins__\": __builtins__})\n fp = open(\"setup.py\", \"rb\")\n try:\n exec(compile(fp.read(), \"setup.py\", 'exec'))\n finally:\n fp.close()\nexcept SystemExit:\n raise\nexcept:\n traceback.print_exc()\n t = sys.exc_info()[2]\n p.interaction(None, t)\n\"\"\"\n ret = subprocess.call([sys.executable, '-c', code,\n 'build_ext', '-i'],\n cwd=dst,\n env=getenv())\n if ret != 0:\n raise RuntimeError(\"Build failed.\")\n\n # Run nosetests\n subprocess.call(['nosetests3', '-v', d], cwd=TEMP)\n\n\ndef walk_sync(dir1, dir2, _seen=None):\n if _seen is None:\n seen = {}\n else:\n seen = _seen\n\n if not dir1.endswith(os.path.sep):\n dir1 = dir1 + os.path.sep\n\n # Walk through stuff (which we haven't yet gone through) in dir1\n for root, dirs, files in os.walk(dir1):\n sub = root[len(dir1):]\n if sub in seen:\n dirs = [x for x in dirs if x not in seen[sub][0]]\n files = [x for x in files if x not in seen[sub][1]]\n seen[sub][0].extend(dirs)\n seen[sub][1].extend(files)\n else:\n seen[sub] = (dirs, files)\n if not dirs and not files:\n continue\n yield os.path.join(dir1, sub), os.path.join(dir2, sub), dirs, files\n\n if _seen is None:\n # Walk through stuff (which we haven't yet gone through) in dir2\n for root2, root1, dirs, files in walk_sync(dir2, dir1, _seen=seen):\n yield root1, root2, dirs, files\n\ndef sync_2to3(src, dst, patchfile=None, clean=False):\n import lib2to3.main\n from io import StringIO\n\n to_convert = []\n\n for src_dir, dst_dir, dirs, files in walk_sync(src, dst):\n for fn in dirs + files:\n src_fn = os.path.join(src_dir, fn)\n dst_fn = os.path.join(dst_dir, fn)\n\n # skip temporary etc. 
files\n if fn.startswith('.#') or fn.endswith('~'):\n continue\n\n # remove non-existing\n if os.path.exists(dst_fn) and not os.path.exists(src_fn):\n if clean:\n if os.path.isdir(dst_fn):\n shutil.rmtree(dst_fn)\n else:\n os.unlink(dst_fn)\n continue\n\n # make directories\n if os.path.isdir(src_fn):\n if not os.path.isdir(dst_fn):\n os.makedirs(dst_fn)\n continue\n\n dst_dir = os.path.dirname(dst_fn)\n if os.path.isfile(dst_fn) and not os.path.isdir(dst_dir):\n os.makedirs(dst_dir)\n\n # don't replace up-to-date files\n try:\n if os.path.isfile(dst_fn) and \\\n os.stat(dst_fn).st_mtime >= os.stat(src_fn).st_mtime:\n continue\n except OSError:\n pass\n\n # copy file\n shutil.copyfile(src_fn, dst_fn)\n\n # add .py files to 2to3 list\n if dst_fn.endswith('.py'):\n to_convert.append((src_fn, dst_fn))\n\n # run 2to3\n flag_sets = {}\n for fn, dst_fn in to_convert:\n flag = ''\n for pat, opt in EXTRA_2TO3_FLAGS.items():\n if fnmatch.fnmatch(fn, pat):\n flag = opt\n break\n flag_sets.setdefault(flag, []).append(dst_fn)\n\n if patchfile:\n p = open(patchfile, 'wb+')\n else:\n p = open(os.devnull, 'wb')\n\n for flags, filenames in flag_sets.items():\n if flags == 'skip':\n continue\n\n _old_stdout = sys.stdout\n try:\n sys.stdout = StringIO()\n opt = []\n opt.extend(['-w', '-n'])\n opt.extend(skip_fixes)\n opt.extend(flags.split())\n opt.extend(filenames)\n lib2to3.main.main(\"lib2to3.fixes\", opt)\n finally:\n sys.stdout = _old_stdout\n\n p.close()\n\nif __name__ == \"__main__\":\n main()\n", "path": "tools/py3tool.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# -*- python -*-\n\"\"\"\n%prog SUBMODULE...\n\nHack to pipe submodules of Numpy through 2to3 and build them in-place\none-by-one.\n\nExample usage:\n\n python3 tools/py3tool.py testing distutils core\n\nThis will copy files to _py3k/numpy, add a dummy __init__.py and\nversion.py on the top level, and copy and 2to3 the files of the three\nsubmodules.\n\nWhen running py3tool again, only changed files are re-processed, which\nmakes the test-bugfix cycle faster.\n\n\"\"\"\nfrom __future__ import division, absolute_import, print_function\n\nfrom optparse import OptionParser\nimport shutil\nimport os\nimport sys\nimport re\nimport subprocess\nimport fnmatch\n\nif os.environ.get('USE_2TO3CACHE'):\n import lib2to3cache\n\nBASE = os.path.normpath(os.path.join(os.path.dirname(__file__), '..'))\nTEMP = os.path.normpath(os.path.join(BASE, '_py3k'))\n\nSCRIPT_2TO3 = os.path.join(BASE, 'tools', '2to3.py')\n\nEXTRA_2TO3_FLAGS = {\n 'numpy/core/defchararray.py': '-x unicode',\n 'numpy/compat/py3k.py': '-x unicode',\n 'numpy/ma/timer_comparison.py': 'skip',\n}\n\n# Names of fixers to skip when running 2to3. 
This is a complete list of\n# available fixers, with fixers not currently skipped commented out.\nFIXES_TO_SKIP = [\n 'apply',\n 'basestring',\n 'buffer',\n 'callable',\n 'dict',\n 'exec',\n 'execfile',\n 'exitfunc',\n 'filter',\n 'funcattrs',\n 'future',\n 'getcwdu',\n 'has_key',\n# 'idioms',\n 'import',\n 'imports',\n 'imports2',\n 'input',\n 'intern',\n 'isinstance',\n 'itertools',\n 'itertools_imports',\n 'long',\n 'map',\n 'metaclass',\n 'methodattrs',\n 'ne',\n# 'next',\n 'nonzero',\n 'numliterals',\n 'operator',\n 'paren',\n 'print',\n 'raise',\n 'raw_input',\n 'reduce',\n 'renames',\n 'repr',\n 'setliteral',\n 'standarderror',\n 'sys_exc',\n 'throw',\n 'tuple_params',\n# 'types',\n# 'unicode',\n# 'urllib',\n# 'ws_comma',\n 'xrange',\n 'xreadlines',\n 'zip',\n]\n\nskip_fixes= []\nfor _t in FIXES_TO_SKIP:\n skip_fixes.append('-x')\n skip_fixes.append(_t)\n\n\ndef main():\n p = OptionParser(usage=__doc__.strip())\n p.add_option(\"--clean\", \"-c\", action=\"store_true\",\n help=\"clean source directory\")\n options, args = p.parse_args()\n\n if not args:\n p.error('no submodules given')\n else:\n dirs = ['numpy/%s' % x for x in map(os.path.basename, args)]\n\n # Prepare\n if not os.path.isdir(TEMP):\n os.makedirs(TEMP)\n\n # Set up dummy files (for building only submodules)\n dummy_files = {\n '__init__.py': 'from numpy.version import version as __version__',\n 'version.py': 'version = \"1.4.0.dev\"'\n }\n\n for fn, content in dummy_files.items():\n fn = os.path.join(TEMP, 'numpy', fn)\n if not os.path.isfile(fn):\n try:\n os.makedirs(os.path.dirname(fn))\n except OSError:\n pass\n f = open(fn, 'wb+')\n f.write(content.encode('ascii'))\n f.close()\n\n # Environment\n pp = [os.path.abspath(TEMP)]\n def getenv():\n env = dict(os.environ)\n env.update({'PYTHONPATH': ':'.join(pp)})\n return env\n\n # Copy\n for d in dirs:\n src = os.path.join(BASE, d)\n dst = os.path.join(TEMP, d)\n\n # Run 2to3\n sync_2to3(dst=dst,\n src=src,\n patchfile=os.path.join(TEMP, os.path.basename(d) + '.patch'),\n clean=options.clean)\n\n # Run setup.py, falling back to Pdb post-mortem on exceptions\n setup_py = os.path.join(dst, 'setup.py')\n if os.path.isfile(setup_py):\n code = \"\"\"\\\nimport pdb, sys, traceback\np = pdb.Pdb()\ntry:\n import __main__\n __main__.__dict__.update({\n \"__name__\": \"__main__\", \"__file__\": \"setup.py\",\n \"__builtins__\": __builtins__})\n fp = open(\"setup.py\", \"rb\")\n try:\n exec(compile(fp.read(), \"setup.py\", 'exec'))\n finally:\n fp.close()\nexcept SystemExit:\n raise\nexcept:\n traceback.print_exc()\n t = sys.exc_info()[2]\n p.interaction(None, t)\n\"\"\"\n ret = subprocess.call([sys.executable, '-c', code,\n 'build_ext', '-i'],\n cwd=dst,\n env=getenv())\n if ret != 0:\n raise RuntimeError(\"Build failed.\")\n\n # Run nosetests\n subprocess.call(['nosetests3', '-v', d], cwd=TEMP)\n\n\ndef walk_sync(dir1, dir2, _seen=None):\n if _seen is None:\n seen = {}\n else:\n seen = _seen\n\n if not dir1.endswith(os.path.sep):\n dir1 = dir1 + os.path.sep\n\n # Walk through stuff (which we haven't yet gone through) in dir1\n for root, dirs, files in os.walk(dir1):\n sub = root[len(dir1):]\n if sub in seen:\n dirs = [x for x in dirs if x not in seen[sub][0]]\n files = [x for x in files if x not in seen[sub][1]]\n seen[sub][0].extend(dirs)\n seen[sub][1].extend(files)\n else:\n seen[sub] = (dirs, files)\n if not dirs and not files:\n continue\n yield os.path.join(dir1, sub), os.path.join(dir2, sub), dirs, files\n\n if _seen is None:\n # Walk through stuff (which we haven't yet 
gone through) in dir2\n for root2, root1, dirs, files in walk_sync(dir2, dir1, _seen=seen):\n yield root1, root2, dirs, files\n\ndef sync_2to3(src, dst, patchfile=None, clean=False):\n import lib2to3.main\n from io import StringIO\n\n to_convert = []\n\n for src_dir, dst_dir, dirs, files in walk_sync(src, dst):\n for fn in dirs + files:\n src_fn = os.path.join(src_dir, fn)\n dst_fn = os.path.join(dst_dir, fn)\n\n # skip temporary etc. files\n if fn.startswith('.#') or fn.endswith('~'):\n continue\n\n # remove non-existing\n if os.path.exists(dst_fn) and not os.path.exists(src_fn):\n if clean:\n if os.path.isdir(dst_fn):\n shutil.rmtree(dst_fn)\n else:\n os.unlink(dst_fn)\n continue\n\n # make directories\n if os.path.isdir(src_fn):\n if not os.path.isdir(dst_fn):\n os.makedirs(dst_fn)\n continue\n\n dst_dir = os.path.dirname(dst_fn)\n if os.path.isfile(dst_fn) and not os.path.isdir(dst_dir):\n os.makedirs(dst_dir)\n\n # don't replace up-to-date files\n try:\n if os.path.isfile(dst_fn) and \\\n os.stat(dst_fn).st_mtime >= os.stat(src_fn).st_mtime:\n continue\n except OSError:\n pass\n\n # copy file\n shutil.copyfile(src_fn, dst_fn)\n\n # add .py files to 2to3 list\n if dst_fn.endswith('.py'):\n to_convert.append((src_fn, dst_fn))\n\n # run 2to3\n flag_sets = {}\n for fn, dst_fn in to_convert:\n flag = ''\n for pat, opt in EXTRA_2TO3_FLAGS.items():\n if fnmatch.fnmatch(fn, pat):\n flag = opt\n break\n flag_sets.setdefault(flag, []).append(dst_fn)\n\n if patchfile:\n p = open(patchfile, 'wb+')\n else:\n p = open(os.devnull, 'wb')\n\n for flags, filenames in flag_sets.items():\n if flags == 'skip':\n continue\n\n _old_stdout = sys.stdout\n try:\n sys.stdout = StringIO()\n opt = []\n opt.extend(['-w', '-n'])\n opt.extend(skip_fixes)\n opt.extend(flags.split())\n opt.extend(filenames)\n lib2to3.main.main(\"lib2to3.fixes\", opt)\n finally:\n sys.stdout = _old_stdout\n\n p.close()\n\nif __name__ == \"__main__\":\n main()\n", "path": "tools/py3tool.py"}]} | 3,095 | 86 |
gh_patches_debug_17500 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-598 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Problems with awido_de
Hello,
I have a problem with the awido_de calendar :(
Log:
```
2023-01-09 20:43:52.963 ERROR (SyncWorker_4) [waste_collection_schedule.source_shell] fetch failed for source AWIDO Online:
File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py", line 134, in fetch
File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/awido_de.py", line 108, in fetch
```
configuration.yaml
```
waste_collection_schedule:
sources:
- name: awido_de
args:
customer: "bgl"
city: "Laufen"
street: "XXX Str."
housenumber: 1
```
Setup:
```
Home Assistant 2023.1.2
Frontend 20230104.0 - latest
Installation: Docker
Waste Collection Schedule 1.33.0
```
Any ideas?
Regards,
Christian
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/awido_de.py`
Content:
```
1 import datetime
2 import logging
3
4 import requests
5 from waste_collection_schedule import Collection # type: ignore[attr-defined]
6
7 TITLE = "AWIDO Online"
8 DESCRIPTION = "Source for AWIDO waste collection."
9 URL = "https://www.awido-online.de/"
10
11
12 def EXTRA_INFO():
13 return [{"title": s["title"], "url": s["url"]} for s in SERVICE_MAP]
14
15
16 SERVICE_MAP = [
17 {
18 "title": "Abfallwirtschaft Rems-Murr",
19 "url": "https://www.abfallwirtschaft-rems-murr.de/",
20 "service_id": "rmk",
21 },
22 {
23 "title": "Landkreis Schweinfurt",
24 "url": "https://www.landkreis-schweinfurt.de",
25 "service_id": "lra-schweinfurt",
26 },
27 {
28 "title": "Landkreis Gotha",
29 "url": "https://www.landkreis-gotha.de/",
30 "service_id": "gotha",
31 },
32 {
33 "title": "Zweckverband Abfallwirtschaft Saale-Orla",
34 "url": "https://www.zaso-online.de/",
35 "service_id": "zaso",
36 },
37 {
38 "title": "Gemeinde Unterhaching",
39 "url": "https://www.unterhaching.de/",
40 "service_id": "unterhaching",
41 },
42 {
43 "title": "Stadt Kaufbeuren",
44 "url": "https://www.kaufbeuren.de/",
45 "service_id": "kaufbeuren",
46 },
47 {
48 "title": "Landkreis Berchtesgadener Land",
49 "url": "https://www.lra-bgl.de/",
50 "service_id": "bgl",
51 },
52 {
53 "title": "Pullach im Isartal",
54 "url": "https://www.pullach.de/",
55 "service_id": "pullach",
56 },
57 {
58 "title": "AWB Landkreis Fürstenfeldbruck",
59 "url": "https://www.awb-ffb.de/",
60 "service_id": "ffb",
61 },
62 {
63 "title": "Stadt Unterschleißheim",
64 "url": "https://www.unterschleissheim.de/",
65 "service_id": "unterschleissheim",
66 },
67 {
68 "title": "Landkreis Tirschenreuth",
69 "url": "https://www.kreis-tir.de/",
70 "service_id": "kreis-tir",
71 },
72 {
73 "title": "Landkreis Rosenheim",
74 "url": "https://www.abfall.landkreis-rosenheim.de/",
75 "service_id": "rosenheim",
76 },
77 {
78 "title": "Landkreis Tübingen",
79 "url": "https://www.abfall-kreis-tuebingen.de/",
80 "service_id": "tuebingen",
81 },
82 {
83 "title": "Landkreis Kronach",
84 "url": "https://www.landkreis-kronach.de/",
85 "service_id": "kronach",
86 },
87 {
88 "title": "Landkreis Erding",
89 "url": "https://www.landkreis-erding.de/",
90 "service_id": "erding",
91 },
92 {
93 "title": "Zweckverband München-Südost",
94 "url": "https://www.zvmso.de/",
95 "service_id": "zv-muc-so",
96 },
97 {
98 "title": "Landkreis Coburg",
99 "url": "https://www.landkreis-coburg.de/",
100 "service_id": "coburg",
101 },
102 {
103 "title": "Landkreis Ansbach",
104 "url": "https://www.landkreis-ansbach.de/",
105 "service_id": "ansbach",
106 },
107 {
108 "title": "AWB Landkreis Bad Dürkheim",
109 "url": "http://awb.kreis-bad-duerkheim.de/",
110 "service_id": "awb-duerkheim",
111 },
112 {
113 "title": "Landratsamt Aichach-Friedberg",
114 "url": "https://lra-aic-fdb.de/",
115 "service_id": "aic-fdb",
116 },
117 {
118 "title": "WGV Recycling GmbH",
119 "url": "https://wgv-quarzbichl.de/",
120 "service_id": "wgv",
121 },
122 {
123 "title": "Neustadt a.d. Waldnaab",
124 "url": "https://www.neustadt.de/",
125 "service_id": "neustadt",
126 },
127 {
128 "title": "Landkreis Kelheim",
129 "url": "https://www.landkreis-kelheim.de/",
130 "service_id": "kelheim",
131 },
132 {
133 "title": "Landkreis Günzburg",
134 "url": "https://kaw.landkreis-guenzburg.de/",
135 "service_id": "kaw-guenzburg",
136 },
137 {
138 "title": "Stadt Memmingen",
139 "url": "https://umwelt.memmingen.de/",
140 "service_id": "memmingen",
141 },
142 {
143 "title": "Landkreis Südliche Weinstraße",
144 "url": "https://www.suedliche-weinstrasse.de/",
145 "service_id": "eww-suew",
146 },
147 {
148 "title": "Landratsamt Dachau",
149 "url": "https://www.landratsamt-dachau.de/",
150 "service_id": "lra-dah",
151 },
152 {
153 "title": "Landkreisbetriebe Neuburg-Schrobenhausen",
154 "url": "https://www.landkreisbetriebe.de/",
155 "service_id": "landkreisbetriebe",
156 },
157 {
158 "title": "Abfallwirtschaftsbetrieb Landkreis Altenkirchen",
159 "url": "https://www.awb-ak.de/",
160 "service_id": "awb-ak",
161 },
162 {
163 "title": "Abfallwirtschaft Lahn-Dill-Kreises",
164 "url": "https://www.awld.de/",
165 "service_id": "awld",
166 },
167 {
168 "title": "Abfallwirtschafts-Zweckverband des Landkreises Hersfeld-Rotenburg",
169 "url": "https://www.azv-hef-rof.de/",
170 "service_id": "azv-hef-rof",
171 },
172 {
173 "title": "Abfall-Wirtschafts-Verband Nordschwaben",
174 "url": "https://www.awv-nordschwaben.de/",
175 "service_id": "awv-nordschwaben",
176 },
177 ]
178 TEST_CASES = {
179 "Schorndorf, Miedelsbacher Straße 30 /1": {
180 "customer": "rmk",
181 "city": "Schorndorf",
182 "street": "Miedelsbacher Straße",
183 "housenumber": "30 /1",
184 },
185 "Altomünster, Maisbrunn": {
186 "customer": "lra-dah",
187 "city": "Altomünster",
188 "street": "Maisbrunn",
189 },
190 "SOK-Alsmannsdorf": {"customer": "zaso", "city": "SOK-Alsmannsdorf"},
191 "Kaufbeuren, Rehgrund": {
192 "customer": "kaufbeuren",
193 "city": "Kaufbeuren",
194 "street": "Rehgrund",
195 },
196 "Tübingen, Dettenhausen": {"customer": "tuebingen", "city": "Dettenhausen"},
197 }
198
199 _LOGGER = logging.getLogger(__name__)
200
201
202 class Source:
203 def __init__(self, customer, city, street=None, housenumber=None):
204 self._customer = customer
205 self._city = city
206 self._street = street
207 self._housenumber = housenumber
208
209 def fetch(self):
210 # Retrieve list of places
211 r = requests.get(
212 f"https://awido.cubefour.de/WebServices/Awido.Service.svc/secure/getPlaces/client={self._customer}"
213 )
214 r.raise_for_status()
215 places = r.json()
216
217 # create city to key map from retrieved places
218 city_to_oid = {place["value"].strip(): place["key"] for (place) in places}
219
220 if self._city not in city_to_oid:
221 raise Exception(f"city not found: {self._city}")
222
223 oid = city_to_oid[self._city]
224
225 if self._street is None:
226 # test if we have to use city also as street name
227 self._street = self._city
228 r = requests.get(
229 f"https://awido.cubefour.de/WebServices/Awido.Service.svc/secure/getGroupedStreets/{oid}",
230 params={"client": self._customer},
231 )
232 r.raise_for_status()
233 streets = r.json()
234
235 # create street to key map from retrieved places
236 street_to_oid = {
237 street["value"].strip(): street["key"] for (street) in streets
238 }
239
240 if self._street in street_to_oid:
241 oid = street_to_oid[self._street]
242
243 else:
244 # street specified
245 r = requests.get(
246 f"https://awido.cubefour.de/WebServices/Awido.Service.svc/secure/getGroupedStreets/{oid}",
247 params={"client": self._customer},
248 )
249 r.raise_for_status()
250 streets = r.json()
251
252 # create street to key map from retrieved places
253 street_to_oid = {
254 street["value"].strip(): street["key"] for (street) in streets
255 }
256
257 if self._street not in street_to_oid:
258 raise Exception(f"street not found: {self._street}")
259
260 oid = street_to_oid[self._street]
261
262 if self._housenumber is not None:
263 r = requests.get(
264 f"https://awido.cubefour.de/WebServices/Awido.Service.svc/secure/getStreetAddons/{oid}",
265 params={"client": self._customer},
266 )
267 r.raise_for_status()
268 hsnbrs = r.json()
269
270 # create housenumber to key map from retrieved places
271 hsnbr_to_oid = {
272 hsnbr["value"].strip(): hsnbr["key"] for (hsnbr) in hsnbrs
273 }
274
275 if self._housenumber not in hsnbr_to_oid:
276 raise Exception(f"housenumber not found: {self._housenumber}")
277
278 oid = hsnbr_to_oid[self._housenumber]
279
280 # get calendar data
281 r = requests.get(
282 f"https://awido.cubefour.de/WebServices/Awido.Service.svc/secure/getData/{oid}",
283 params={"fractions": "", "client": self._customer},
284 )
285 r.raise_for_status()
286 cal_json = r.json()
287
288 # map fraction code to fraction name
289 fractions = {fract["snm"]: fract["nm"] for (fract) in cal_json["fracts"]}
290
291 # calendar also contains public holidays. In this case, 'ad' is None
292 calendar = [item for item in cal_json["calendar"] if item["ad"] is not None]
293
294 entries = []
295 for calitem in calendar:
296 date = datetime.datetime.strptime(calitem["dt"], "%Y%m%d").date()
297
298 # add all fractions for this date
299 for fracitem in calitem["fr"]:
300 waste_type = fractions[fracitem]
301 entries.append(Collection(date, waste_type))
302
303 return entries
304
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/awido_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/awido_de.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/awido_de.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/awido_de.py
@@ -194,6 +194,12 @@
"street": "Rehgrund",
},
"Tübingen, Dettenhausen": {"customer": "tuebingen", "city": "Dettenhausen"},
+ "Berchtesgadener Land": {
+ "customer": "bgl",
+ "city": "Laufen",
+ "street": "Ahornweg",
+ "housenumber": 1,
+ },
}
_LOGGER = logging.getLogger(__name__)
@@ -204,7 +210,7 @@
self._customer = customer
self._city = city
self._street = street
- self._housenumber = housenumber
+ self._housenumber = None if housenumber is None else str(housenumber)
def fetch(self):
# Retrieve list of places
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/awido_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/awido_de.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/awido_de.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/awido_de.py\n@@ -194,6 +194,12 @@\n \"street\": \"Rehgrund\",\n },\n \"T\u00fcbingen, Dettenhausen\": {\"customer\": \"tuebingen\", \"city\": \"Dettenhausen\"},\n+ \"Berchtesgadener Land\": {\n+ \"customer\": \"bgl\",\n+ \"city\": \"Laufen\",\n+ \"street\": \"Ahornweg\",\n+ \"housenumber\": 1,\n+ },\n }\n \n _LOGGER = logging.getLogger(__name__)\n@@ -204,7 +210,7 @@\n self._customer = customer\n self._city = city\n self._street = street\n- self._housenumber = housenumber\n+ self._housenumber = None if housenumber is None else str(housenumber)\n \n def fetch(self):\n # Retrieve list of places\n", "issue": "Problems with awido_de\nHello,\r\n\r\ni have a problem with the awido_de calander :(\r\n\r\n\r\nLog:\r\n```\r\n2023-01-09 20:43:52.963 ERROR (SyncWorker_4) [waste_collection_schedule.source_shell] fetch failed for source AWIDO Online:\r\nFile \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py\", line 134, in fetch\r\nFile \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/awido_de.py\", line 108, in fetch\r\n\r\n```\r\n\r\nconfiguration.yaml\r\n```\r\nwaste_collection_schedule:\r\n sources:\r\n - name: awido_de\r\n args:\r\n customer: \"bgl\"\r\n city: \"Laufen\"\r\n street: \"XXX Str.\"\r\n housenumber: 1\r\n```\r\n\r\nSetup:\r\n```\r\nHome Assistant 2023.1.2\r\nFrontend 20230104.0 - latest\r\nInstallation: Docker\r\nWaste Collection Schedule 1.33.0\r\n```\r\n\r\nAny ideas?\r\n\r\nRegards,\r\nChristian\n", "before_files": [{"content": "import datetime\nimport logging\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"AWIDO Online\"\nDESCRIPTION = \"Source for AWIDO waste collection.\"\nURL = \"https://www.awido-online.de/\"\n\n\ndef EXTRA_INFO():\n return [{\"title\": s[\"title\"], \"url\": s[\"url\"]} for s in SERVICE_MAP]\n\n\nSERVICE_MAP = [\n {\n \"title\": \"Abfallwirtschaft Rems-Murr\",\n \"url\": \"https://www.abfallwirtschaft-rems-murr.de/\",\n \"service_id\": \"rmk\",\n },\n {\n \"title\": \"Landkreis Schweinfurt\",\n \"url\": \"https://www.landkreis-schweinfurt.de\",\n \"service_id\": \"lra-schweinfurt\",\n },\n {\n \"title\": \"Landkreis Gotha\",\n \"url\": \"https://www.landkreis-gotha.de/\",\n \"service_id\": \"gotha\",\n },\n {\n \"title\": \"Zweckverband Abfallwirtschaft Saale-Orla\",\n \"url\": \"https://www.zaso-online.de/\",\n \"service_id\": \"zaso\",\n },\n {\n \"title\": \"Gemeinde Unterhaching\",\n \"url\": \"https://www.unterhaching.de/\",\n \"service_id\": \"unterhaching\",\n },\n {\n \"title\": \"Stadt Kaufbeuren\",\n \"url\": \"https://www.kaufbeuren.de/\",\n \"service_id\": \"kaufbeuren\",\n },\n {\n \"title\": \"Landkreis Berchtesgadener Land\",\n \"url\": \"https://www.lra-bgl.de/\",\n \"service_id\": \"bgl\",\n },\n {\n \"title\": \"Pullach im Isartal\",\n \"url\": \"https://www.pullach.de/\",\n \"service_id\": \"pullach\",\n },\n {\n \"title\": \"AWB Landkreis F\u00fcrstenfeldbruck\",\n \"url\": \"https://www.awb-ffb.de/\",\n \"service_id\": \"ffb\",\n },\n {\n \"title\": \"Stadt Unterschlei\u00dfheim\",\n \"url\": 
\"https://www.unterschleissheim.de/\",\n \"service_id\": \"unterschleissheim\",\n },\n {\n \"title\": \"Landkreis Tirschenreuth\",\n \"url\": \"https://www.kreis-tir.de/\",\n \"service_id\": \"kreis-tir\",\n },\n {\n \"title\": \"Landkreis Rosenheim\",\n \"url\": \"https://www.abfall.landkreis-rosenheim.de/\",\n \"service_id\": \"rosenheim\",\n },\n {\n \"title\": \"Landkreis T\u00fcbingen\",\n \"url\": \"https://www.abfall-kreis-tuebingen.de/\",\n \"service_id\": \"tuebingen\",\n },\n {\n \"title\": \"Landkreis Kronach\",\n \"url\": \"https://www.landkreis-kronach.de/\",\n \"service_id\": \"kronach\",\n },\n {\n \"title\": \"Landkreis Erding\",\n \"url\": \"https://www.landkreis-erding.de/\",\n \"service_id\": \"erding\",\n },\n {\n \"title\": \"Zweckverband M\u00fcnchen-S\u00fcdost\",\n \"url\": \"https://www.zvmso.de/\",\n \"service_id\": \"zv-muc-so\",\n },\n {\n \"title\": \"Landkreis Coburg\",\n \"url\": \"https://www.landkreis-coburg.de/\",\n \"service_id\": \"coburg\",\n },\n {\n \"title\": \"Landkreis Ansbach\",\n \"url\": \"https://www.landkreis-ansbach.de/\",\n \"service_id\": \"ansbach\",\n },\n {\n \"title\": \"AWB Landkreis Bad D\u00fcrkheim\",\n \"url\": \"http://awb.kreis-bad-duerkheim.de/\",\n \"service_id\": \"awb-duerkheim\",\n },\n {\n \"title\": \"Landratsamt Aichach-Friedberg\",\n \"url\": \"https://lra-aic-fdb.de/\",\n \"service_id\": \"aic-fdb\",\n },\n {\n \"title\": \"WGV Recycling GmbH\",\n \"url\": \"https://wgv-quarzbichl.de/\",\n \"service_id\": \"wgv\",\n },\n {\n \"title\": \"Neustadt a.d. Waldnaab\",\n \"url\": \"https://www.neustadt.de/\",\n \"service_id\": \"neustadt\",\n },\n {\n \"title\": \"Landkreis Kelheim\",\n \"url\": \"https://www.landkreis-kelheim.de/\",\n \"service_id\": \"kelheim\",\n },\n {\n \"title\": \"Landkreis G\u00fcnzburg\",\n \"url\": \"https://kaw.landkreis-guenzburg.de/\",\n \"service_id\": \"kaw-guenzburg\",\n },\n {\n \"title\": \"Stadt Memmingen\",\n \"url\": \"https://umwelt.memmingen.de/\",\n \"service_id\": \"memmingen\",\n },\n {\n \"title\": \"Landkreis S\u00fcdliche Weinstra\u00dfe\",\n \"url\": \"https://www.suedliche-weinstrasse.de/\",\n \"service_id\": \"eww-suew\",\n },\n {\n \"title\": \"Landratsamt Dachau\",\n \"url\": \"https://www.landratsamt-dachau.de/\",\n \"service_id\": \"lra-dah\",\n },\n {\n \"title\": \"Landkreisbetriebe Neuburg-Schrobenhausen\",\n \"url\": \"https://www.landkreisbetriebe.de/\",\n \"service_id\": \"landkreisbetriebe\",\n },\n {\n \"title\": \"Abfallwirtschaftsbetrieb Landkreis Altenkirchen\",\n \"url\": \"https://www.awb-ak.de/\",\n \"service_id\": \"awb-ak\",\n },\n {\n \"title\": \"Abfallwirtschaft Lahn-Dill-Kreises\",\n \"url\": \"https://www.awld.de/\",\n \"service_id\": \"awld\",\n },\n {\n \"title\": \"Abfallwirtschafts-Zweckverband des Landkreises Hersfeld-Rotenburg\",\n \"url\": \"https://www.azv-hef-rof.de/\",\n \"service_id\": \"azv-hef-rof\",\n },\n {\n \"title\": \"Abfall-Wirtschafts-Verband Nordschwaben\",\n \"url\": \"https://www.awv-nordschwaben.de/\",\n \"service_id\": \"awv-nordschwaben\",\n },\n]\nTEST_CASES = {\n \"Schorndorf, Miedelsbacher Stra\u00dfe 30 /1\": {\n \"customer\": \"rmk\",\n \"city\": \"Schorndorf\",\n \"street\": \"Miedelsbacher Stra\u00dfe\",\n \"housenumber\": \"30 /1\",\n },\n \"Altom\u00fcnster, Maisbrunn\": {\n \"customer\": \"lra-dah\",\n \"city\": \"Altom\u00fcnster\",\n \"street\": \"Maisbrunn\",\n },\n \"SOK-Alsmannsdorf\": {\"customer\": \"zaso\", \"city\": \"SOK-Alsmannsdorf\"},\n \"Kaufbeuren, Rehgrund\": {\n \"customer\": \"kaufbeuren\",\n 
\"city\": \"Kaufbeuren\",\n \"street\": \"Rehgrund\",\n },\n \"T\u00fcbingen, Dettenhausen\": {\"customer\": \"tuebingen\", \"city\": \"Dettenhausen\"},\n}\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Source:\n def __init__(self, customer, city, street=None, housenumber=None):\n self._customer = customer\n self._city = city\n self._street = street\n self._housenumber = housenumber\n\n def fetch(self):\n # Retrieve list of places\n r = requests.get(\n f\"https://awido.cubefour.de/WebServices/Awido.Service.svc/secure/getPlaces/client={self._customer}\"\n )\n r.raise_for_status()\n places = r.json()\n\n # create city to key map from retrieved places\n city_to_oid = {place[\"value\"].strip(): place[\"key\"] for (place) in places}\n\n if self._city not in city_to_oid:\n raise Exception(f\"city not found: {self._city}\")\n\n oid = city_to_oid[self._city]\n\n if self._street is None:\n # test if we have to use city also as street name\n self._street = self._city\n r = requests.get(\n f\"https://awido.cubefour.de/WebServices/Awido.Service.svc/secure/getGroupedStreets/{oid}\",\n params={\"client\": self._customer},\n )\n r.raise_for_status()\n streets = r.json()\n\n # create street to key map from retrieved places\n street_to_oid = {\n street[\"value\"].strip(): street[\"key\"] for (street) in streets\n }\n\n if self._street in street_to_oid:\n oid = street_to_oid[self._street]\n\n else:\n # street specified\n r = requests.get(\n f\"https://awido.cubefour.de/WebServices/Awido.Service.svc/secure/getGroupedStreets/{oid}\",\n params={\"client\": self._customer},\n )\n r.raise_for_status()\n streets = r.json()\n\n # create street to key map from retrieved places\n street_to_oid = {\n street[\"value\"].strip(): street[\"key\"] for (street) in streets\n }\n\n if self._street not in street_to_oid:\n raise Exception(f\"street not found: {self._street}\")\n\n oid = street_to_oid[self._street]\n\n if self._housenumber is not None:\n r = requests.get(\n f\"https://awido.cubefour.de/WebServices/Awido.Service.svc/secure/getStreetAddons/{oid}\",\n params={\"client\": self._customer},\n )\n r.raise_for_status()\n hsnbrs = r.json()\n\n # create housenumber to key map from retrieved places\n hsnbr_to_oid = {\n hsnbr[\"value\"].strip(): hsnbr[\"key\"] for (hsnbr) in hsnbrs\n }\n\n if self._housenumber not in hsnbr_to_oid:\n raise Exception(f\"housenumber not found: {self._housenumber}\")\n\n oid = hsnbr_to_oid[self._housenumber]\n\n # get calendar data\n r = requests.get(\n f\"https://awido.cubefour.de/WebServices/Awido.Service.svc/secure/getData/{oid}\",\n params={\"fractions\": \"\", \"client\": self._customer},\n )\n r.raise_for_status()\n cal_json = r.json()\n\n # map fraction code to fraction name\n fractions = {fract[\"snm\"]: fract[\"nm\"] for (fract) in cal_json[\"fracts\"]}\n\n # calendar also contains public holidays. 
In this case, 'ad' is None\n calendar = [item for item in cal_json[\"calendar\"] if item[\"ad\"] is not None]\n\n entries = []\n for calitem in calendar:\n date = datetime.datetime.strptime(calitem[\"dt\"], \"%Y%m%d\").date()\n\n # add all fractions for this date\n for fracitem in calitem[\"fr\"]:\n waste_type = fractions[fracitem]\n entries.append(Collection(date, waste_type))\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/awido_de.py"}], "after_files": [{"content": "import datetime\nimport logging\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"AWIDO Online\"\nDESCRIPTION = \"Source for AWIDO waste collection.\"\nURL = \"https://www.awido-online.de/\"\n\n\ndef EXTRA_INFO():\n return [{\"title\": s[\"title\"], \"url\": s[\"url\"]} for s in SERVICE_MAP]\n\n\nSERVICE_MAP = [\n {\n \"title\": \"Abfallwirtschaft Rems-Murr\",\n \"url\": \"https://www.abfallwirtschaft-rems-murr.de/\",\n \"service_id\": \"rmk\",\n },\n {\n \"title\": \"Landkreis Schweinfurt\",\n \"url\": \"https://www.landkreis-schweinfurt.de\",\n \"service_id\": \"lra-schweinfurt\",\n },\n {\n \"title\": \"Landkreis Gotha\",\n \"url\": \"https://www.landkreis-gotha.de/\",\n \"service_id\": \"gotha\",\n },\n {\n \"title\": \"Zweckverband Abfallwirtschaft Saale-Orla\",\n \"url\": \"https://www.zaso-online.de/\",\n \"service_id\": \"zaso\",\n },\n {\n \"title\": \"Gemeinde Unterhaching\",\n \"url\": \"https://www.unterhaching.de/\",\n \"service_id\": \"unterhaching\",\n },\n {\n \"title\": \"Stadt Kaufbeuren\",\n \"url\": \"https://www.kaufbeuren.de/\",\n \"service_id\": \"kaufbeuren\",\n },\n {\n \"title\": \"Landkreis Berchtesgadener Land\",\n \"url\": \"https://www.lra-bgl.de/\",\n \"service_id\": \"bgl\",\n },\n {\n \"title\": \"Pullach im Isartal\",\n \"url\": \"https://www.pullach.de/\",\n \"service_id\": \"pullach\",\n },\n {\n \"title\": \"AWB Landkreis F\u00fcrstenfeldbruck\",\n \"url\": \"https://www.awb-ffb.de/\",\n \"service_id\": \"ffb\",\n },\n {\n \"title\": \"Stadt Unterschlei\u00dfheim\",\n \"url\": \"https://www.unterschleissheim.de/\",\n \"service_id\": \"unterschleissheim\",\n },\n {\n \"title\": \"Landkreis Tirschenreuth\",\n \"url\": \"https://www.kreis-tir.de/\",\n \"service_id\": \"kreis-tir\",\n },\n {\n \"title\": \"Landkreis Rosenheim\",\n \"url\": \"https://www.abfall.landkreis-rosenheim.de/\",\n \"service_id\": \"rosenheim\",\n },\n {\n \"title\": \"Landkreis T\u00fcbingen\",\n \"url\": \"https://www.abfall-kreis-tuebingen.de/\",\n \"service_id\": \"tuebingen\",\n },\n {\n \"title\": \"Landkreis Kronach\",\n \"url\": \"https://www.landkreis-kronach.de/\",\n \"service_id\": \"kronach\",\n },\n {\n \"title\": \"Landkreis Erding\",\n \"url\": \"https://www.landkreis-erding.de/\",\n \"service_id\": \"erding\",\n },\n {\n \"title\": \"Zweckverband M\u00fcnchen-S\u00fcdost\",\n \"url\": \"https://www.zvmso.de/\",\n \"service_id\": \"zv-muc-so\",\n },\n {\n \"title\": \"Landkreis Coburg\",\n \"url\": \"https://www.landkreis-coburg.de/\",\n \"service_id\": \"coburg\",\n },\n {\n \"title\": \"Landkreis Ansbach\",\n \"url\": \"https://www.landkreis-ansbach.de/\",\n \"service_id\": \"ansbach\",\n },\n {\n \"title\": \"AWB Landkreis Bad D\u00fcrkheim\",\n \"url\": \"http://awb.kreis-bad-duerkheim.de/\",\n \"service_id\": \"awb-duerkheim\",\n },\n {\n \"title\": \"Landratsamt Aichach-Friedberg\",\n \"url\": \"https://lra-aic-fdb.de/\",\n \"service_id\": \"aic-fdb\",\n },\n {\n \"title\": 
\"WGV Recycling GmbH\",\n \"url\": \"https://wgv-quarzbichl.de/\",\n \"service_id\": \"wgv\",\n },\n {\n \"title\": \"Neustadt a.d. Waldnaab\",\n \"url\": \"https://www.neustadt.de/\",\n \"service_id\": \"neustadt\",\n },\n {\n \"title\": \"Landkreis Kelheim\",\n \"url\": \"https://www.landkreis-kelheim.de/\",\n \"service_id\": \"kelheim\",\n },\n {\n \"title\": \"Landkreis G\u00fcnzburg\",\n \"url\": \"https://kaw.landkreis-guenzburg.de/\",\n \"service_id\": \"kaw-guenzburg\",\n },\n {\n \"title\": \"Stadt Memmingen\",\n \"url\": \"https://umwelt.memmingen.de/\",\n \"service_id\": \"memmingen\",\n },\n {\n \"title\": \"Landkreis S\u00fcdliche Weinstra\u00dfe\",\n \"url\": \"https://www.suedliche-weinstrasse.de/\",\n \"service_id\": \"eww-suew\",\n },\n {\n \"title\": \"Landratsamt Dachau\",\n \"url\": \"https://www.landratsamt-dachau.de/\",\n \"service_id\": \"lra-dah\",\n },\n {\n \"title\": \"Landkreisbetriebe Neuburg-Schrobenhausen\",\n \"url\": \"https://www.landkreisbetriebe.de/\",\n \"service_id\": \"landkreisbetriebe\",\n },\n {\n \"title\": \"Abfallwirtschaftsbetrieb Landkreis Altenkirchen\",\n \"url\": \"https://www.awb-ak.de/\",\n \"service_id\": \"awb-ak\",\n },\n {\n \"title\": \"Abfallwirtschaft Lahn-Dill-Kreises\",\n \"url\": \"https://www.awld.de/\",\n \"service_id\": \"awld\",\n },\n {\n \"title\": \"Abfallwirtschafts-Zweckverband des Landkreises Hersfeld-Rotenburg\",\n \"url\": \"https://www.azv-hef-rof.de/\",\n \"service_id\": \"azv-hef-rof\",\n },\n {\n \"title\": \"Abfall-Wirtschafts-Verband Nordschwaben\",\n \"url\": \"https://www.awv-nordschwaben.de/\",\n \"service_id\": \"awv-nordschwaben\",\n },\n]\nTEST_CASES = {\n \"Schorndorf, Miedelsbacher Stra\u00dfe 30 /1\": {\n \"customer\": \"rmk\",\n \"city\": \"Schorndorf\",\n \"street\": \"Miedelsbacher Stra\u00dfe\",\n \"housenumber\": \"30 /1\",\n },\n \"Altom\u00fcnster, Maisbrunn\": {\n \"customer\": \"lra-dah\",\n \"city\": \"Altom\u00fcnster\",\n \"street\": \"Maisbrunn\",\n },\n \"SOK-Alsmannsdorf\": {\"customer\": \"zaso\", \"city\": \"SOK-Alsmannsdorf\"},\n \"Kaufbeuren, Rehgrund\": {\n \"customer\": \"kaufbeuren\",\n \"city\": \"Kaufbeuren\",\n \"street\": \"Rehgrund\",\n },\n \"T\u00fcbingen, Dettenhausen\": {\"customer\": \"tuebingen\", \"city\": \"Dettenhausen\"},\n \"Berchtesgadener Land\": {\n \"customer\": \"bgl\",\n \"city\": \"Laufen\",\n \"street\": \"Ahornweg\",\n \"housenumber\": 1,\n },\n}\n\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Source:\n def __init__(self, customer, city, street=None, housenumber=None):\n self._customer = customer\n self._city = city\n self._street = street\n self._housenumber = None if housenumber is None else str(housenumber)\n\n def fetch(self):\n # Retrieve list of places\n r = requests.get(\n f\"https://awido.cubefour.de/WebServices/Awido.Service.svc/secure/getPlaces/client={self._customer}\"\n )\n r.raise_for_status()\n places = r.json()\n\n # create city to key map from retrieved places\n city_to_oid = {place[\"value\"].strip(): place[\"key\"] for (place) in places}\n\n if self._city not in city_to_oid:\n raise Exception(f\"city not found: {self._city}\")\n\n oid = city_to_oid[self._city]\n\n if self._street is None:\n # test if we have to use city also as street name\n self._street = self._city\n r = requests.get(\n f\"https://awido.cubefour.de/WebServices/Awido.Service.svc/secure/getGroupedStreets/{oid}\",\n params={\"client\": self._customer},\n )\n r.raise_for_status()\n streets = r.json()\n\n # create street to key map from retrieved places\n street_to_oid = 
{\n street[\"value\"].strip(): street[\"key\"] for (street) in streets\n }\n\n if self._street in street_to_oid:\n oid = street_to_oid[self._street]\n\n else:\n # street specified\n r = requests.get(\n f\"https://awido.cubefour.de/WebServices/Awido.Service.svc/secure/getGroupedStreets/{oid}\",\n params={\"client\": self._customer},\n )\n r.raise_for_status()\n streets = r.json()\n\n # create street to key map from retrieved places\n street_to_oid = {\n street[\"value\"].strip(): street[\"key\"] for (street) in streets\n }\n\n if self._street not in street_to_oid:\n raise Exception(f\"street not found: {self._street}\")\n\n oid = street_to_oid[self._street]\n\n if self._housenumber is not None:\n r = requests.get(\n f\"https://awido.cubefour.de/WebServices/Awido.Service.svc/secure/getStreetAddons/{oid}\",\n params={\"client\": self._customer},\n )\n r.raise_for_status()\n hsnbrs = r.json()\n\n # create housenumber to key map from retrieved places\n hsnbr_to_oid = {\n hsnbr[\"value\"].strip(): hsnbr[\"key\"] for (hsnbr) in hsnbrs\n }\n\n if self._housenumber not in hsnbr_to_oid:\n raise Exception(f\"housenumber not found: {self._housenumber}\")\n\n oid = hsnbr_to_oid[self._housenumber]\n\n # get calendar data\n r = requests.get(\n f\"https://awido.cubefour.de/WebServices/Awido.Service.svc/secure/getData/{oid}\",\n params={\"fractions\": \"\", \"client\": self._customer},\n )\n r.raise_for_status()\n cal_json = r.json()\n\n # map fraction code to fraction name\n fractions = {fract[\"snm\"]: fract[\"nm\"] for (fract) in cal_json[\"fracts\"]}\n\n # calendar also contains public holidays. In this case, 'ad' is None\n calendar = [item for item in cal_json[\"calendar\"] if item[\"ad\"] is not None]\n\n entries = []\n for calitem in calendar:\n date = datetime.datetime.strptime(calitem[\"dt\"], \"%Y%m%d\").date()\n\n # add all fractions for this date\n for fracitem in calitem[\"fr\"]:\n waste_type = fractions[fracitem]\n entries.append(Collection(date, waste_type))\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/awido_de.py"}]} | 3,917 | 278 |
gh_patches_debug_4808 | rasdani/github-patches | git_diff | buildbot__buildbot-5301 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cache-control header is filled incorrectly
When a cache-control header is formed a ';' character is used as a delimiter:
https://github.com/buildbot/buildbot/blob/144eb7e82dc261e6506f1f68493446bcb24d77a0/master/buildbot/www/config.py#L120
This is not allowed by [RFC 7234](https://tools.ietf.org/html/rfc7234). The RFC states the following format of the header:
```
Cache-Control = *( "," OWS ) cache-directive *( OWS "," [ OWS
cache-directive ] )
cache-directive = token [ "=" ( token / quoted-string ) ]
```
Thus a replace `;` -> `, ` is required.
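
A minimal sketch of that substitution, using the exact value currently set in `config.py` (illustrative snippet only):

```python
# Cache directives are comma-separated per RFC 7234, not semicolon-separated.
value = b"public;max-age=0"           # what the server sends today
fixed = value.replace(b";", b",")     # b"public,max-age=0"
assert fixed == b"public,max-age=0"
```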
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `master/buildbot/www/config.py`
Content:
```
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16
17 import json
18 import os
19 import posixpath
20
21 import jinja2
22
23 from twisted.internet import defer
24 from twisted.python import log
25 from twisted.web.error import Error
26
27 from buildbot.interfaces import IConfigured
28 from buildbot.util import unicode2bytes
29 from buildbot.www import resource
30
31
32 class IndexResource(resource.Resource):
33 # enable reconfigResource calls
34 needsReconfig = True
35
36 def __init__(self, master, staticdir):
37 super().__init__(master)
38 loader = jinja2.FileSystemLoader(staticdir)
39 self.jinja = jinja2.Environment(
40 loader=loader, undefined=jinja2.StrictUndefined)
41
42 def reconfigResource(self, new_config):
43 self.config = new_config.www
44
45 versions = self.getEnvironmentVersions()
46 vs = self.config.get('versions')
47 if isinstance(vs, list):
48 versions += vs
49 self.config['versions'] = versions
50
51 self.custom_templates = {}
52 template_dir = self.config.pop('custom_templates_dir', None)
53 if template_dir is not None:
54 template_dir = os.path.join(self.master.basedir, template_dir)
55 self.custom_templates = self.parseCustomTemplateDir(template_dir)
56
57 def render_GET(self, request):
58 return self.asyncRenderHelper(request, self.renderIndex)
59
60 def parseCustomTemplateDir(self, template_dir):
61 res = {}
62 allowed_ext = [".html"]
63 try:
64 import pyjade # pylint: disable=import-outside-toplevel
65 allowed_ext.append(".jade")
66 except ImportError: # pragma: no cover
67 log.msg("pyjade not installed. Ignoring .jade files from {}".format(template_dir))
68 pyjade = None
69 for root, dirs, files in os.walk(template_dir):
70 if root == template_dir:
71 template_name = posixpath.join("views", "%s.html")
72 else:
73 # template_name is a url, so we really want '/'
74 # root is a os.path, though
75 template_name = posixpath.join(
76 os.path.basename(root), "views", "%s.html")
77 for f in files:
78 fn = os.path.join(root, f)
79 basename, ext = os.path.splitext(f)
80 if ext not in allowed_ext:
81 continue
82 if ext == ".html":
83 with open(fn) as f:
84 html = f.read().strip()
85 elif ext == ".jade":
86 with open(fn) as f:
87 jade = f.read()
88 parser = pyjade.parser.Parser(jade)
89 block = parser.parse()
90 compiler = pyjade.ext.html.Compiler(
91 block, pretty=False)
92 html = compiler.compile()
93 res[template_name % (basename,)] = html
94
95 return res
96
97 @staticmethod
98 def getEnvironmentVersions():
99 import sys # pylint: disable=import-outside-toplevel
100 import twisted # pylint: disable=import-outside-toplevel
101 from buildbot import version as bbversion # pylint: disable=import-outside-toplevel
102
103 pyversion = '.'.join(map(str, sys.version_info[:3]))
104
105 tx_version_info = (twisted.version.major,
106 twisted.version.minor,
107 twisted.version.micro)
108 txversion = '.'.join(map(str, tx_version_info))
109
110 return [
111 ('Python', pyversion),
112 ('Buildbot', bbversion),
113 ('Twisted', txversion),
114 ]
115
116 @defer.inlineCallbacks
117 def renderIndex(self, request):
118 config = {}
119 request.setHeader(b"content-type", b'text/html')
120 request.setHeader(b"Cache-Control", b"public;max-age=0")
121
122 try:
123 yield self.config['auth'].maybeAutoLogin(request)
124 except Error as e:
125 config["on_load_warning"] = e.message
126
127 user_info = self.master.www.getUserInfos(request)
128 config.update({"user": user_info})
129
130 config.update(self.config)
131 config['buildbotURL'] = self.master.config.buildbotURL
132 config['title'] = self.master.config.title
133 config['titleURL'] = self.master.config.titleURL
134 config['multiMaster'] = self.master.config.multiMaster
135
136 # delete things that may contain secrets
137 if 'change_hook_dialects' in config:
138 del config['change_hook_dialects']
139
140 def toJson(obj):
141 try:
142 obj = IConfigured(obj).getConfigDict()
143 except TypeError:
144 # this happens for old style classes (not deriving objects)
145 pass
146 if isinstance(obj, dict):
147 return obj
148 # don't leak object memory address
149 obj = obj.__class__.__module__ + "." + obj.__class__.__name__
150 return repr(obj) + " not yet IConfigured"
151
152 tpl = self.jinja.get_template('index.html')
153 # we use Jinja in order to render some server side dynamic stuff
154 # For example, custom_templates javascript is generated by the
155 # layout.jade jinja template
156 tpl = tpl.render(configjson=json.dumps(config, default=toJson),
157 custom_templates=self.custom_templates,
158 config=self.config)
159 return unicode2bytes(tpl, encoding='ascii')
160
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/master/buildbot/www/config.py b/master/buildbot/www/config.py
--- a/master/buildbot/www/config.py
+++ b/master/buildbot/www/config.py
@@ -117,7 +117,7 @@
def renderIndex(self, request):
config = {}
request.setHeader(b"content-type", b'text/html')
- request.setHeader(b"Cache-Control", b"public;max-age=0")
+ request.setHeader(b"Cache-Control", b"public,max-age=0")
try:
yield self.config['auth'].maybeAutoLogin(request)
| {"golden_diff": "diff --git a/master/buildbot/www/config.py b/master/buildbot/www/config.py\n--- a/master/buildbot/www/config.py\n+++ b/master/buildbot/www/config.py\n@@ -117,7 +117,7 @@\n def renderIndex(self, request):\n config = {}\n request.setHeader(b\"content-type\", b'text/html')\n- request.setHeader(b\"Cache-Control\", b\"public;max-age=0\")\n+ request.setHeader(b\"Cache-Control\", b\"public,max-age=0\")\n \n try:\n yield self.config['auth'].maybeAutoLogin(request)\n", "issue": "Cache-control header is filled incorrectly\nWhen a cache-control header is formed a ';' character is used as a delimiter:\r\n\r\nhttps://github.com/buildbot/buildbot/blob/144eb7e82dc261e6506f1f68493446bcb24d77a0/master/buildbot/www/config.py#L120\r\n\r\nThis is not allowed by [RFC 7234](https://tools.ietf.org/html/rfc7234). The RFC states the following format of the header:\r\n```\r\nCache-Control = *( \",\" OWS ) cache-directive *( OWS \",\" [ OWS\r\n cache-directive ] )\r\n cache-directive = token [ \"=\" ( token / quoted-string ) ]\r\n```\r\n\r\nThus a replace `;` -> `, ` is required.\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\n\nimport json\nimport os\nimport posixpath\n\nimport jinja2\n\nfrom twisted.internet import defer\nfrom twisted.python import log\nfrom twisted.web.error import Error\n\nfrom buildbot.interfaces import IConfigured\nfrom buildbot.util import unicode2bytes\nfrom buildbot.www import resource\n\n\nclass IndexResource(resource.Resource):\n # enable reconfigResource calls\n needsReconfig = True\n\n def __init__(self, master, staticdir):\n super().__init__(master)\n loader = jinja2.FileSystemLoader(staticdir)\n self.jinja = jinja2.Environment(\n loader=loader, undefined=jinja2.StrictUndefined)\n\n def reconfigResource(self, new_config):\n self.config = new_config.www\n\n versions = self.getEnvironmentVersions()\n vs = self.config.get('versions')\n if isinstance(vs, list):\n versions += vs\n self.config['versions'] = versions\n\n self.custom_templates = {}\n template_dir = self.config.pop('custom_templates_dir', None)\n if template_dir is not None:\n template_dir = os.path.join(self.master.basedir, template_dir)\n self.custom_templates = self.parseCustomTemplateDir(template_dir)\n\n def render_GET(self, request):\n return self.asyncRenderHelper(request, self.renderIndex)\n\n def parseCustomTemplateDir(self, template_dir):\n res = {}\n allowed_ext = [\".html\"]\n try:\n import pyjade # pylint: disable=import-outside-toplevel\n allowed_ext.append(\".jade\")\n except ImportError: # pragma: no cover\n log.msg(\"pyjade not installed. 
Ignoring .jade files from {}\".format(template_dir))\n pyjade = None\n for root, dirs, files in os.walk(template_dir):\n if root == template_dir:\n template_name = posixpath.join(\"views\", \"%s.html\")\n else:\n # template_name is a url, so we really want '/'\n # root is a os.path, though\n template_name = posixpath.join(\n os.path.basename(root), \"views\", \"%s.html\")\n for f in files:\n fn = os.path.join(root, f)\n basename, ext = os.path.splitext(f)\n if ext not in allowed_ext:\n continue\n if ext == \".html\":\n with open(fn) as f:\n html = f.read().strip()\n elif ext == \".jade\":\n with open(fn) as f:\n jade = f.read()\n parser = pyjade.parser.Parser(jade)\n block = parser.parse()\n compiler = pyjade.ext.html.Compiler(\n block, pretty=False)\n html = compiler.compile()\n res[template_name % (basename,)] = html\n\n return res\n\n @staticmethod\n def getEnvironmentVersions():\n import sys # pylint: disable=import-outside-toplevel\n import twisted # pylint: disable=import-outside-toplevel\n from buildbot import version as bbversion # pylint: disable=import-outside-toplevel\n\n pyversion = '.'.join(map(str, sys.version_info[:3]))\n\n tx_version_info = (twisted.version.major,\n twisted.version.minor,\n twisted.version.micro)\n txversion = '.'.join(map(str, tx_version_info))\n\n return [\n ('Python', pyversion),\n ('Buildbot', bbversion),\n ('Twisted', txversion),\n ]\n\n @defer.inlineCallbacks\n def renderIndex(self, request):\n config = {}\n request.setHeader(b\"content-type\", b'text/html')\n request.setHeader(b\"Cache-Control\", b\"public;max-age=0\")\n\n try:\n yield self.config['auth'].maybeAutoLogin(request)\n except Error as e:\n config[\"on_load_warning\"] = e.message\n\n user_info = self.master.www.getUserInfos(request)\n config.update({\"user\": user_info})\n\n config.update(self.config)\n config['buildbotURL'] = self.master.config.buildbotURL\n config['title'] = self.master.config.title\n config['titleURL'] = self.master.config.titleURL\n config['multiMaster'] = self.master.config.multiMaster\n\n # delete things that may contain secrets\n if 'change_hook_dialects' in config:\n del config['change_hook_dialects']\n\n def toJson(obj):\n try:\n obj = IConfigured(obj).getConfigDict()\n except TypeError:\n # this happens for old style classes (not deriving objects)\n pass\n if isinstance(obj, dict):\n return obj\n # don't leak object memory address\n obj = obj.__class__.__module__ + \".\" + obj.__class__.__name__\n return repr(obj) + \" not yet IConfigured\"\n\n tpl = self.jinja.get_template('index.html')\n # we use Jinja in order to render some server side dynamic stuff\n # For example, custom_templates javascript is generated by the\n # layout.jade jinja template\n tpl = tpl.render(configjson=json.dumps(config, default=toJson),\n custom_templates=self.custom_templates,\n config=self.config)\n return unicode2bytes(tpl, encoding='ascii')\n", "path": "master/buildbot/www/config.py"}], "after_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\n\nimport json\nimport os\nimport posixpath\n\nimport jinja2\n\nfrom twisted.internet import defer\nfrom twisted.python import log\nfrom twisted.web.error import Error\n\nfrom buildbot.interfaces import IConfigured\nfrom buildbot.util import unicode2bytes\nfrom buildbot.www import resource\n\n\nclass IndexResource(resource.Resource):\n # enable reconfigResource calls\n needsReconfig = True\n\n def __init__(self, master, staticdir):\n super().__init__(master)\n loader = jinja2.FileSystemLoader(staticdir)\n self.jinja = jinja2.Environment(\n loader=loader, undefined=jinja2.StrictUndefined)\n\n def reconfigResource(self, new_config):\n self.config = new_config.www\n\n versions = self.getEnvironmentVersions()\n vs = self.config.get('versions')\n if isinstance(vs, list):\n versions += vs\n self.config['versions'] = versions\n\n self.custom_templates = {}\n template_dir = self.config.pop('custom_templates_dir', None)\n if template_dir is not None:\n template_dir = os.path.join(self.master.basedir, template_dir)\n self.custom_templates = self.parseCustomTemplateDir(template_dir)\n\n def render_GET(self, request):\n return self.asyncRenderHelper(request, self.renderIndex)\n\n def parseCustomTemplateDir(self, template_dir):\n res = {}\n allowed_ext = [\".html\"]\n try:\n import pyjade # pylint: disable=import-outside-toplevel\n allowed_ext.append(\".jade\")\n except ImportError: # pragma: no cover\n log.msg(\"pyjade not installed. Ignoring .jade files from {}\".format(template_dir))\n pyjade = None\n for root, dirs, files in os.walk(template_dir):\n if root == template_dir:\n template_name = posixpath.join(\"views\", \"%s.html\")\n else:\n # template_name is a url, so we really want '/'\n # root is a os.path, though\n template_name = posixpath.join(\n os.path.basename(root), \"views\", \"%s.html\")\n for f in files:\n fn = os.path.join(root, f)\n basename, ext = os.path.splitext(f)\n if ext not in allowed_ext:\n continue\n if ext == \".html\":\n with open(fn) as f:\n html = f.read().strip()\n elif ext == \".jade\":\n with open(fn) as f:\n jade = f.read()\n parser = pyjade.parser.Parser(jade)\n block = parser.parse()\n compiler = pyjade.ext.html.Compiler(\n block, pretty=False)\n html = compiler.compile()\n res[template_name % (basename,)] = html\n\n return res\n\n @staticmethod\n def getEnvironmentVersions():\n import sys # pylint: disable=import-outside-toplevel\n import twisted # pylint: disable=import-outside-toplevel\n from buildbot import version as bbversion # pylint: disable=import-outside-toplevel\n\n pyversion = '.'.join(map(str, sys.version_info[:3]))\n\n tx_version_info = (twisted.version.major,\n twisted.version.minor,\n twisted.version.micro)\n txversion = '.'.join(map(str, tx_version_info))\n\n return [\n ('Python', pyversion),\n ('Buildbot', bbversion),\n ('Twisted', txversion),\n ]\n\n @defer.inlineCallbacks\n def renderIndex(self, request):\n config = {}\n request.setHeader(b\"content-type\", b'text/html')\n request.setHeader(b\"Cache-Control\", b\"public,max-age=0\")\n\n try:\n yield self.config['auth'].maybeAutoLogin(request)\n except Error as e:\n config[\"on_load_warning\"] = e.message\n\n user_info = self.master.www.getUserInfos(request)\n config.update({\"user\": 
user_info})\n\n config.update(self.config)\n config['buildbotURL'] = self.master.config.buildbotURL\n config['title'] = self.master.config.title\n config['titleURL'] = self.master.config.titleURL\n config['multiMaster'] = self.master.config.multiMaster\n\n # delete things that may contain secrets\n if 'change_hook_dialects' in config:\n del config['change_hook_dialects']\n\n def toJson(obj):\n try:\n obj = IConfigured(obj).getConfigDict()\n except TypeError:\n # this happens for old style classes (not deriving objects)\n pass\n if isinstance(obj, dict):\n return obj\n # don't leak object memory address\n obj = obj.__class__.__module__ + \".\" + obj.__class__.__name__\n return repr(obj) + \" not yet IConfigured\"\n\n tpl = self.jinja.get_template('index.html')\n # we use Jinja in order to render some server side dynamic stuff\n # For example, custom_templates javascript is generated by the\n # layout.jade jinja template\n tpl = tpl.render(configjson=json.dumps(config, default=toJson),\n custom_templates=self.custom_templates,\n config=self.config)\n return unicode2bytes(tpl, encoding='ascii')\n", "path": "master/buildbot/www/config.py"}]} | 2,082 | 126 |
gh_patches_debug_41965 | rasdani/github-patches | git_diff | gratipay__gratipay.com-3163 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
login requires Referer
This just started recently. When I try to log in I get a blank page. Inspecting the response in my console shows I'm getting a 500 from the server.
My browser (ABrowser): Mozilla/5.0 (X11; Linux i686; rv:33.0) Gecko/20100101 Firefox/33.0
login requires Referer
This just started recently. When I try to log in I get a blank page. Inspecting the response in my console shows I'm getting a 500 from the server.
My browser (ABrowser): Mozilla/5.0 (X11; Linux i686; rv:33.0) Gecko/20100101 Firefox/33.0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gratipay/security/csrf.py`
Content:
```
1 """Cross Site Request Forgery middleware, borrowed from Django.
2
3 See also:
4
5 https://github.com/django/django/blob/master/django/middleware/csrf.py
6 https://docs.djangoproject.com/en/dev/ref/contrib/csrf/
7 https://github.com/gratipay/gratipay.com/issues/88
8
9 """
10
11 from datetime import timedelta
12 import re
13 import urlparse
14 from aspen import log_dammit
15
16
17 #from django.utils.cache import patch_vary_headers
18 cc_delim_re = re.compile(r'\s*,\s*')
19 def patch_vary_headers(response, newheaders):
20 """
21 Adds (or updates) the "Vary" header in the given HttpResponse object.
22 newheaders is a list of header names that should be in "Vary". Existing
23 headers in "Vary" aren't removed.
24 """
25 # Note that we need to keep the original order intact, because cache
26 # implementations may rely on the order of the Vary contents in, say,
27 # computing an MD5 hash.
28 if 'Vary' in response.headers:
29 vary_headers = cc_delim_re.split(response.headers['Vary'])
30 else:
31 vary_headers = []
32 # Use .lower() here so we treat headers as case-insensitive.
33 existing_headers = set([header.lower() for header in vary_headers])
34 additional_headers = [newheader for newheader in newheaders
35 if newheader.lower() not in existing_headers]
36 response.headers['Vary'] = ', '.join(vary_headers + additional_headers)
37
38
39 #from django.utils.http import same_origin
40 def same_origin(url1, url2):
41 """
42 Checks if two URLs are 'same-origin'
43 """
44 p1, p2 = urlparse.urlparse(url1), urlparse.urlparse(url2)
45 return (p1.scheme, p1.hostname, p1.port) == (p2.scheme, p2.hostname, p2.port)
46
47
48 from aspen import Response
49 from crypto import constant_time_compare, get_random_string
50
51 REASON_NO_REFERER = "Referer checking failed - no Referer."
52 REASON_BAD_REFERER = "Referer checking failed - %s does not match %s."
53 REASON_NO_CSRF_COOKIE = "CSRF cookie not set."
54 REASON_BAD_TOKEN = "CSRF token missing or incorrect."
55
56 TOKEN_LENGTH = 32
57 CSRF_TIMEOUT = timedelta(days=7)
58
59
60 def _get_new_csrf_key():
61 return get_random_string(TOKEN_LENGTH)
62
63
64 def _sanitize_token(token):
65 # Allow only alphanum, and ensure we return a 'str' for the sake
66 # of the post processing middleware.
67 if len(token) > TOKEN_LENGTH:
68 return _get_new_csrf_key()
69 token = re.sub('[^a-zA-Z0-9]+', '', str(token.decode('ascii', 'ignore')))
70 if token == "":
71 # In case the cookie has been truncated to nothing at some point.
72 return _get_new_csrf_key()
73 return token
74
75 def _is_secure(request):
76 import gratipay
77 return gratipay.canonical_scheme == 'https'
78
79 def _get_host(request):
80 """Returns the HTTP host using the request headers.
81 """
82 return request.headers.get('X-Forwarded-Host', request.headers['Host'])
83
84
85
86 def get_csrf_token_from_request(request):
87 """Given a Request object, reject it if it's a forgery.
88 """
89 if request.line.uri.startswith('/assets/'): return
90 if request.line.uri.startswith('/callbacks/'): return
91
92 try:
93 csrf_token = _sanitize_token(request.headers.cookie['csrf_token'].value)
94 except KeyError:
95 csrf_token = None
96
97 request.context['csrf_token'] = csrf_token or _get_new_csrf_key()
98
99 # Assume that anything not defined as 'safe' by RC2616 needs protection
100 if request.line.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):
101
102 if _is_secure(request):
103 # Suppose user visits http://example.com/
104 # An active network attacker (man-in-the-middle, MITM) sends a
105 # POST form that targets https://example.com/detonate-bomb/ and
106 # submits it via JavaScript.
107 #
108 # The attacker will need to provide a CSRF cookie and token, but
109 # that's no problem for a MITM and the session-independent
110 # nonce we're using. So the MITM can circumvent the CSRF
111 # protection. This is true for any HTTP connection, but anyone
112 # using HTTPS expects better! For this reason, for
113 # https://example.com/ we need additional protection that treats
114 # http://example.com/ as completely untrusted. Under HTTPS,
115 # Barth et al. found that the Referer header is missing for
116 # same-domain requests in only about 0.2% of cases or less, so
117 # we can use strict Referer checking.
118 referer = request.headers.get('Referer')
119 if referer is None:
120 raise Response(403, REASON_NO_REFERER)
121
122 good_referer = 'https://%s/' % _get_host(request)
123 if not same_origin(referer, good_referer):
124 reason = REASON_BAD_REFERER % (referer, good_referer)
125 log_dammit(reason)
126 raise Response(403, reason)
127
128 if csrf_token is None:
129 raise Response(403, REASON_NO_CSRF_COOKIE)
130
131 # Check non-cookie token for match.
132 request_csrf_token = ""
133 if request.line.method == "POST":
134 if isinstance(request.body, dict):
135 request_csrf_token = request.body.get('csrf_token', '')
136
137 if request_csrf_token == "":
138 # Fall back to X-CSRF-TOKEN, to make things easier for AJAX,
139 # and possible for PUT/DELETE.
140 request_csrf_token = request.headers.get('X-CSRF-TOKEN', '')
141
142 if not constant_time_compare(request_csrf_token, csrf_token):
143 raise Response(403, REASON_BAD_TOKEN)
144
145
146 def add_csrf_token_to_response(response, request=None):
147 """Store the latest CSRF token as a cookie.
148 """
149 if request is None:
150 return # early parsing must've failed
151 csrf_token = request.context.get('csrf_token')
152 if csrf_token:
153 # Don't set httponly so that we can POST using XHR.
154 # https://github.com/gratipay/gratipay.com/issues/3030
155 response.set_cookie('csrf_token', csrf_token, expires=CSRF_TIMEOUT, httponly=False)
156
157 # Content varies with the CSRF cookie, so set the Vary header.
158 patch_vary_headers(response, ('Cookie',))
159
```
Path: `gratipay/security/authentication.py`
Content:
```
1 """Defines website authentication helpers.
2 """
3 import binascii
4 from datetime import date, datetime
5
6 from aspen import Response
7 from aspen.utils import to_rfc822
8 from gratipay.models.participant import Participant
9 from gratipay.security import csrf
10 from gratipay.security.crypto import constant_time_compare
11 from gratipay.security.user import User, SESSION
12
13
14 ANON = User()
15 BEGINNING_OF_EPOCH = to_rfc822(datetime(1970, 1, 1))
16
17 def _get_user_via_api_key(api_key):
18 """Given an api_key, return a User. This auth method is deprecated.
19 """
20 user = User()
21 user.participant = Participant._from_thing('api_key', api_key)
22 if user.participant:
23 p = user.participant
24 today = date.today()
25 if p.old_auth_usage != today:
26 Participant.db.run("""
27 UPDATE participants
28 SET old_auth_usage = %s
29 WHERE id = %s
30 """, (today, p.id))
31 return user
32
33 def _get_user_via_basic_auth(auth_header):
34 """Given a basic auth header, return a User object.
35 """
36 try:
37 creds = binascii.a2b_base64(auth_header[len('Basic '):]).split(':', 1)
38 except binascii.Error:
39 raise Response(400, 'Malformed "Authorization" header')
40 if len(creds) != 2:
41 raise Response(401)
42 userid, api_key = creds
43 if len(userid) == 36 and '-' in userid:
44 user = _get_user_via_api_key(userid) # For backward-compatibility
45 else:
46 try:
47 userid = int(userid)
48 except ValueError:
49 raise Response(401)
50 user = User.from_id(userid)
51 if user.ANON or not constant_time_compare(user.participant.api_key, api_key):
52 raise Response(401)
53 return user
54
55 def _turn_off_csrf(request):
56 """Given a request, short-circuit CSRF.
57 """
58 csrf_token = csrf._get_new_csrf_key()
59 request.headers.cookie['csrf_token'] = csrf_token
60 request.headers['X-CSRF-TOKEN'] = csrf_token
61 if 'Referer' not in request.headers:
62 request.headers['Referer'] = 'https://%s/' % csrf._get_host(request)
63
64 def set_request_context_user(request):
65 """Set request.context['user']. This signs the user in.
66 """
67
68 request.context['user'] = user = ANON # Make sure we always have a user object, even if
69 # there's an exception in the rest of this function.
70
71 if request.line.uri.startswith('/assets/'):
72 pass
73 elif 'Authorization' in request.headers:
74 header = request.headers['authorization']
75 if header.startswith('Basic '):
76 user = _get_user_via_basic_auth(header)
77 if not user.ANON:
78 _turn_off_csrf(request)
79 elif SESSION in request.headers.cookie:
80 token = request.headers.cookie[SESSION].value
81 user = User.from_session_token(token)
82
83 request.context['user'] = user
84
85 def add_auth_to_response(response, request=None):
86 if request is None:
87 return # early parsing must've failed
88 if request.line.uri.startswith('/assets/'):
89 return # assets never get auth headers and have their own caching done elsewhere
90
91 response.headers['Expires'] = BEGINNING_OF_EPOCH # don't cache
92
93 if SESSION in request.headers.cookie:
94 user = request.context.get('user') or ANON
95 if not user.ANON:
96 user.keep_signed_in(response.headers.cookie)
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gratipay/security/authentication.py b/gratipay/security/authentication.py
--- a/gratipay/security/authentication.py
+++ b/gratipay/security/authentication.py
@@ -58,8 +58,6 @@
csrf_token = csrf._get_new_csrf_key()
request.headers.cookie['csrf_token'] = csrf_token
request.headers['X-CSRF-TOKEN'] = csrf_token
- if 'Referer' not in request.headers:
- request.headers['Referer'] = 'https://%s/' % csrf._get_host(request)
def set_request_context_user(request):
"""Set request.context['user']. This signs the user in.
diff --git a/gratipay/security/csrf.py b/gratipay/security/csrf.py
--- a/gratipay/security/csrf.py
+++ b/gratipay/security/csrf.py
@@ -10,8 +10,6 @@
from datetime import timedelta
import re
-import urlparse
-from aspen import log_dammit
#from django.utils.cache import patch_vary_headers
@@ -36,20 +34,9 @@
response.headers['Vary'] = ', '.join(vary_headers + additional_headers)
-#from django.utils.http import same_origin
-def same_origin(url1, url2):
- """
- Checks if two URLs are 'same-origin'
- """
- p1, p2 = urlparse.urlparse(url1), urlparse.urlparse(url2)
- return (p1.scheme, p1.hostname, p1.port) == (p2.scheme, p2.hostname, p2.port)
-
-
from aspen import Response
from crypto import constant_time_compare, get_random_string
-REASON_NO_REFERER = "Referer checking failed - no Referer."
-REASON_BAD_REFERER = "Referer checking failed - %s does not match %s."
REASON_NO_CSRF_COOKIE = "CSRF cookie not set."
REASON_BAD_TOKEN = "CSRF token missing or incorrect."
@@ -72,15 +59,6 @@
return _get_new_csrf_key()
return token
-def _is_secure(request):
- import gratipay
- return gratipay.canonical_scheme == 'https'
-
-def _get_host(request):
- """Returns the HTTP host using the request headers.
- """
- return request.headers.get('X-Forwarded-Host', request.headers['Host'])
-
def get_csrf_token_from_request(request):
@@ -99,32 +77,6 @@
# Assume that anything not defined as 'safe' by RC2616 needs protection
if request.line.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):
- if _is_secure(request):
- # Suppose user visits http://example.com/
- # An active network attacker (man-in-the-middle, MITM) sends a
- # POST form that targets https://example.com/detonate-bomb/ and
- # submits it via JavaScript.
- #
- # The attacker will need to provide a CSRF cookie and token, but
- # that's no problem for a MITM and the session-independent
- # nonce we're using. So the MITM can circumvent the CSRF
- # protection. This is true for any HTTP connection, but anyone
- # using HTTPS expects better! For this reason, for
- # https://example.com/ we need additional protection that treats
- # http://example.com/ as completely untrusted. Under HTTPS,
- # Barth et al. found that the Referer header is missing for
- # same-domain requests in only about 0.2% of cases or less, so
- # we can use strict Referer checking.
- referer = request.headers.get('Referer')
- if referer is None:
- raise Response(403, REASON_NO_REFERER)
-
- good_referer = 'https://%s/' % _get_host(request)
- if not same_origin(referer, good_referer):
- reason = REASON_BAD_REFERER % (referer, good_referer)
- log_dammit(reason)
- raise Response(403, reason)
-
if csrf_token is None:
raise Response(403, REASON_NO_CSRF_COOKIE)
| {"golden_diff": "diff --git a/gratipay/security/authentication.py b/gratipay/security/authentication.py\n--- a/gratipay/security/authentication.py\n+++ b/gratipay/security/authentication.py\n@@ -58,8 +58,6 @@\n csrf_token = csrf._get_new_csrf_key()\n request.headers.cookie['csrf_token'] = csrf_token\n request.headers['X-CSRF-TOKEN'] = csrf_token\n- if 'Referer' not in request.headers:\n- request.headers['Referer'] = 'https://%s/' % csrf._get_host(request)\n \n def set_request_context_user(request):\n \"\"\"Set request.context['user']. This signs the user in.\ndiff --git a/gratipay/security/csrf.py b/gratipay/security/csrf.py\n--- a/gratipay/security/csrf.py\n+++ b/gratipay/security/csrf.py\n@@ -10,8 +10,6 @@\n \n from datetime import timedelta\n import re\n-import urlparse\n-from aspen import log_dammit\n \n \n #from django.utils.cache import patch_vary_headers\n@@ -36,20 +34,9 @@\n response.headers['Vary'] = ', '.join(vary_headers + additional_headers)\n \n \n-#from django.utils.http import same_origin\n-def same_origin(url1, url2):\n- \"\"\"\n- Checks if two URLs are 'same-origin'\n- \"\"\"\n- p1, p2 = urlparse.urlparse(url1), urlparse.urlparse(url2)\n- return (p1.scheme, p1.hostname, p1.port) == (p2.scheme, p2.hostname, p2.port)\n-\n-\n from aspen import Response\n from crypto import constant_time_compare, get_random_string\n \n-REASON_NO_REFERER = \"Referer checking failed - no Referer.\"\n-REASON_BAD_REFERER = \"Referer checking failed - %s does not match %s.\"\n REASON_NO_CSRF_COOKIE = \"CSRF cookie not set.\"\n REASON_BAD_TOKEN = \"CSRF token missing or incorrect.\"\n \n@@ -72,15 +59,6 @@\n return _get_new_csrf_key()\n return token\n \n-def _is_secure(request):\n- import gratipay\n- return gratipay.canonical_scheme == 'https'\n-\n-def _get_host(request):\n- \"\"\"Returns the HTTP host using the request headers.\n- \"\"\"\n- return request.headers.get('X-Forwarded-Host', request.headers['Host'])\n-\n \n \n def get_csrf_token_from_request(request):\n@@ -99,32 +77,6 @@\n # Assume that anything not defined as 'safe' by RC2616 needs protection\n if request.line.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):\n \n- if _is_secure(request):\n- # Suppose user visits http://example.com/\n- # An active network attacker (man-in-the-middle, MITM) sends a\n- # POST form that targets https://example.com/detonate-bomb/ and\n- # submits it via JavaScript.\n- #\n- # The attacker will need to provide a CSRF cookie and token, but\n- # that's no problem for a MITM and the session-independent\n- # nonce we're using. So the MITM can circumvent the CSRF\n- # protection. This is true for any HTTP connection, but anyone\n- # using HTTPS expects better! For this reason, for\n- # https://example.com/ we need additional protection that treats\n- # http://example.com/ as completely untrusted. Under HTTPS,\n- # Barth et al. found that the Referer header is missing for\n- # same-domain requests in only about 0.2% of cases or less, so\n- # we can use strict Referer checking.\n- referer = request.headers.get('Referer')\n- if referer is None:\n- raise Response(403, REASON_NO_REFERER)\n-\n- good_referer = 'https://%s/' % _get_host(request)\n- if not same_origin(referer, good_referer):\n- reason = REASON_BAD_REFERER % (referer, good_referer)\n- log_dammit(reason)\n- raise Response(403, reason)\n-\n if csrf_token is None:\n raise Response(403, REASON_NO_CSRF_COOKIE)\n", "issue": "login requires Referer\nThis just started recently. When I try to log in I get a blank page. 
Inspecting the response in my console shows I'm getting a 500 from the server.\n\nMy browser (ABrowser): Mozilla/5.0 (X11; Linux i686; rv:33.0) Gecko/20100101 Firefox/33.0\n\nlogin requires Referer\nThis just started recently. When I try to log in I get a blank page. Inspecting the response in my console shows I'm getting a 500 from the server.\n\nMy browser (ABrowser): Mozilla/5.0 (X11; Linux i686; rv:33.0) Gecko/20100101 Firefox/33.0\n\n", "before_files": [{"content": "\"\"\"Cross Site Request Forgery middleware, borrowed from Django.\n\nSee also:\n\n https://github.com/django/django/blob/master/django/middleware/csrf.py\n https://docs.djangoproject.com/en/dev/ref/contrib/csrf/\n https://github.com/gratipay/gratipay.com/issues/88\n\n\"\"\"\n\nfrom datetime import timedelta\nimport re\nimport urlparse\nfrom aspen import log_dammit\n\n\n#from django.utils.cache import patch_vary_headers\ncc_delim_re = re.compile(r'\\s*,\\s*')\ndef patch_vary_headers(response, newheaders):\n \"\"\"\n Adds (or updates) the \"Vary\" header in the given HttpResponse object.\n newheaders is a list of header names that should be in \"Vary\". Existing\n headers in \"Vary\" aren't removed.\n \"\"\"\n # Note that we need to keep the original order intact, because cache\n # implementations may rely on the order of the Vary contents in, say,\n # computing an MD5 hash.\n if 'Vary' in response.headers:\n vary_headers = cc_delim_re.split(response.headers['Vary'])\n else:\n vary_headers = []\n # Use .lower() here so we treat headers as case-insensitive.\n existing_headers = set([header.lower() for header in vary_headers])\n additional_headers = [newheader for newheader in newheaders\n if newheader.lower() not in existing_headers]\n response.headers['Vary'] = ', '.join(vary_headers + additional_headers)\n\n\n#from django.utils.http import same_origin\ndef same_origin(url1, url2):\n \"\"\"\n Checks if two URLs are 'same-origin'\n \"\"\"\n p1, p2 = urlparse.urlparse(url1), urlparse.urlparse(url2)\n return (p1.scheme, p1.hostname, p1.port) == (p2.scheme, p2.hostname, p2.port)\n\n\nfrom aspen import Response\nfrom crypto import constant_time_compare, get_random_string\n\nREASON_NO_REFERER = \"Referer checking failed - no Referer.\"\nREASON_BAD_REFERER = \"Referer checking failed - %s does not match %s.\"\nREASON_NO_CSRF_COOKIE = \"CSRF cookie not set.\"\nREASON_BAD_TOKEN = \"CSRF token missing or incorrect.\"\n\nTOKEN_LENGTH = 32\nCSRF_TIMEOUT = timedelta(days=7)\n\n\ndef _get_new_csrf_key():\n return get_random_string(TOKEN_LENGTH)\n\n\ndef _sanitize_token(token):\n # Allow only alphanum, and ensure we return a 'str' for the sake\n # of the post processing middleware.\n if len(token) > TOKEN_LENGTH:\n return _get_new_csrf_key()\n token = re.sub('[^a-zA-Z0-9]+', '', str(token.decode('ascii', 'ignore')))\n if token == \"\":\n # In case the cookie has been truncated to nothing at some point.\n return _get_new_csrf_key()\n return token\n\ndef _is_secure(request):\n import gratipay\n return gratipay.canonical_scheme == 'https'\n\ndef _get_host(request):\n \"\"\"Returns the HTTP host using the request headers.\n \"\"\"\n return request.headers.get('X-Forwarded-Host', request.headers['Host'])\n\n\n\ndef get_csrf_token_from_request(request):\n \"\"\"Given a Request object, reject it if it's a forgery.\n \"\"\"\n if request.line.uri.startswith('/assets/'): return\n if request.line.uri.startswith('/callbacks/'): return\n\n try:\n csrf_token = _sanitize_token(request.headers.cookie['csrf_token'].value)\n except KeyError:\n csrf_token = 
None\n\n request.context['csrf_token'] = csrf_token or _get_new_csrf_key()\n\n # Assume that anything not defined as 'safe' by RC2616 needs protection\n if request.line.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):\n\n if _is_secure(request):\n # Suppose user visits http://example.com/\n # An active network attacker (man-in-the-middle, MITM) sends a\n # POST form that targets https://example.com/detonate-bomb/ and\n # submits it via JavaScript.\n #\n # The attacker will need to provide a CSRF cookie and token, but\n # that's no problem for a MITM and the session-independent\n # nonce we're using. So the MITM can circumvent the CSRF\n # protection. This is true for any HTTP connection, but anyone\n # using HTTPS expects better! For this reason, for\n # https://example.com/ we need additional protection that treats\n # http://example.com/ as completely untrusted. Under HTTPS,\n # Barth et al. found that the Referer header is missing for\n # same-domain requests in only about 0.2% of cases or less, so\n # we can use strict Referer checking.\n referer = request.headers.get('Referer')\n if referer is None:\n raise Response(403, REASON_NO_REFERER)\n\n good_referer = 'https://%s/' % _get_host(request)\n if not same_origin(referer, good_referer):\n reason = REASON_BAD_REFERER % (referer, good_referer)\n log_dammit(reason)\n raise Response(403, reason)\n\n if csrf_token is None:\n raise Response(403, REASON_NO_CSRF_COOKIE)\n\n # Check non-cookie token for match.\n request_csrf_token = \"\"\n if request.line.method == \"POST\":\n if isinstance(request.body, dict):\n request_csrf_token = request.body.get('csrf_token', '')\n\n if request_csrf_token == \"\":\n # Fall back to X-CSRF-TOKEN, to make things easier for AJAX,\n # and possible for PUT/DELETE.\n request_csrf_token = request.headers.get('X-CSRF-TOKEN', '')\n\n if not constant_time_compare(request_csrf_token, csrf_token):\n raise Response(403, REASON_BAD_TOKEN)\n\n\ndef add_csrf_token_to_response(response, request=None):\n \"\"\"Store the latest CSRF token as a cookie.\n \"\"\"\n if request is None:\n return # early parsing must've failed\n csrf_token = request.context.get('csrf_token')\n if csrf_token:\n # Don't set httponly so that we can POST using XHR.\n # https://github.com/gratipay/gratipay.com/issues/3030\n response.set_cookie('csrf_token', csrf_token, expires=CSRF_TIMEOUT, httponly=False)\n\n # Content varies with the CSRF cookie, so set the Vary header.\n patch_vary_headers(response, ('Cookie',))\n", "path": "gratipay/security/csrf.py"}, {"content": "\"\"\"Defines website authentication helpers.\n\"\"\"\nimport binascii\nfrom datetime import date, datetime\n\nfrom aspen import Response\nfrom aspen.utils import to_rfc822\nfrom gratipay.models.participant import Participant\nfrom gratipay.security import csrf\nfrom gratipay.security.crypto import constant_time_compare\nfrom gratipay.security.user import User, SESSION\n\n\nANON = User()\nBEGINNING_OF_EPOCH = to_rfc822(datetime(1970, 1, 1))\n\ndef _get_user_via_api_key(api_key):\n \"\"\"Given an api_key, return a User. 
This auth method is deprecated.\n \"\"\"\n user = User()\n user.participant = Participant._from_thing('api_key', api_key)\n if user.participant:\n p = user.participant\n today = date.today()\n if p.old_auth_usage != today:\n Participant.db.run(\"\"\"\n UPDATE participants\n SET old_auth_usage = %s\n WHERE id = %s\n \"\"\", (today, p.id))\n return user\n\ndef _get_user_via_basic_auth(auth_header):\n \"\"\"Given a basic auth header, return a User object.\n \"\"\"\n try:\n creds = binascii.a2b_base64(auth_header[len('Basic '):]).split(':', 1)\n except binascii.Error:\n raise Response(400, 'Malformed \"Authorization\" header')\n if len(creds) != 2:\n raise Response(401)\n userid, api_key = creds\n if len(userid) == 36 and '-' in userid:\n user = _get_user_via_api_key(userid) # For backward-compatibility\n else:\n try:\n userid = int(userid)\n except ValueError:\n raise Response(401)\n user = User.from_id(userid)\n if user.ANON or not constant_time_compare(user.participant.api_key, api_key):\n raise Response(401)\n return user\n\ndef _turn_off_csrf(request):\n \"\"\"Given a request, short-circuit CSRF.\n \"\"\"\n csrf_token = csrf._get_new_csrf_key()\n request.headers.cookie['csrf_token'] = csrf_token\n request.headers['X-CSRF-TOKEN'] = csrf_token\n if 'Referer' not in request.headers:\n request.headers['Referer'] = 'https://%s/' % csrf._get_host(request)\n\ndef set_request_context_user(request):\n \"\"\"Set request.context['user']. This signs the user in.\n \"\"\"\n\n request.context['user'] = user = ANON # Make sure we always have a user object, even if\n # there's an exception in the rest of this function.\n\n if request.line.uri.startswith('/assets/'):\n pass\n elif 'Authorization' in request.headers:\n header = request.headers['authorization']\n if header.startswith('Basic '):\n user = _get_user_via_basic_auth(header)\n if not user.ANON:\n _turn_off_csrf(request)\n elif SESSION in request.headers.cookie:\n token = request.headers.cookie[SESSION].value\n user = User.from_session_token(token)\n\n request.context['user'] = user\n\ndef add_auth_to_response(response, request=None):\n if request is None:\n return # early parsing must've failed\n if request.line.uri.startswith('/assets/'):\n return # assets never get auth headers and have their own caching done elsewhere\n\n response.headers['Expires'] = BEGINNING_OF_EPOCH # don't cache\n\n if SESSION in request.headers.cookie:\n user = request.context.get('user') or ANON\n if not user.ANON:\n user.keep_signed_in(response.headers.cookie)\n", "path": "gratipay/security/authentication.py"}], "after_files": [{"content": "\"\"\"Cross Site Request Forgery middleware, borrowed from Django.\n\nSee also:\n\n https://github.com/django/django/blob/master/django/middleware/csrf.py\n https://docs.djangoproject.com/en/dev/ref/contrib/csrf/\n https://github.com/gratipay/gratipay.com/issues/88\n\n\"\"\"\n\nfrom datetime import timedelta\nimport re\n\n\n#from django.utils.cache import patch_vary_headers\ncc_delim_re = re.compile(r'\\s*,\\s*')\ndef patch_vary_headers(response, newheaders):\n \"\"\"\n Adds (or updates) the \"Vary\" header in the given HttpResponse object.\n newheaders is a list of header names that should be in \"Vary\". 
Existing\n headers in \"Vary\" aren't removed.\n \"\"\"\n # Note that we need to keep the original order intact, because cache\n # implementations may rely on the order of the Vary contents in, say,\n # computing an MD5 hash.\n if 'Vary' in response.headers:\n vary_headers = cc_delim_re.split(response.headers['Vary'])\n else:\n vary_headers = []\n # Use .lower() here so we treat headers as case-insensitive.\n existing_headers = set([header.lower() for header in vary_headers])\n additional_headers = [newheader for newheader in newheaders\n if newheader.lower() not in existing_headers]\n response.headers['Vary'] = ', '.join(vary_headers + additional_headers)\n\n\nfrom aspen import Response\nfrom crypto import constant_time_compare, get_random_string\n\nREASON_NO_CSRF_COOKIE = \"CSRF cookie not set.\"\nREASON_BAD_TOKEN = \"CSRF token missing or incorrect.\"\n\nTOKEN_LENGTH = 32\nCSRF_TIMEOUT = timedelta(days=7)\n\n\ndef _get_new_csrf_key():\n return get_random_string(TOKEN_LENGTH)\n\n\ndef _sanitize_token(token):\n # Allow only alphanum, and ensure we return a 'str' for the sake\n # of the post processing middleware.\n if len(token) > TOKEN_LENGTH:\n return _get_new_csrf_key()\n token = re.sub('[^a-zA-Z0-9]+', '', str(token.decode('ascii', 'ignore')))\n if token == \"\":\n # In case the cookie has been truncated to nothing at some point.\n return _get_new_csrf_key()\n return token\n\n\n\ndef get_csrf_token_from_request(request):\n \"\"\"Given a Request object, reject it if it's a forgery.\n \"\"\"\n if request.line.uri.startswith('/assets/'): return\n if request.line.uri.startswith('/callbacks/'): return\n\n try:\n csrf_token = _sanitize_token(request.headers.cookie['csrf_token'].value)\n except KeyError:\n csrf_token = None\n\n request.context['csrf_token'] = csrf_token or _get_new_csrf_key()\n\n # Assume that anything not defined as 'safe' by RC2616 needs protection\n if request.line.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):\n\n if csrf_token is None:\n raise Response(403, REASON_NO_CSRF_COOKIE)\n\n # Check non-cookie token for match.\n request_csrf_token = \"\"\n if request.line.method == \"POST\":\n if isinstance(request.body, dict):\n request_csrf_token = request.body.get('csrf_token', '')\n\n if request_csrf_token == \"\":\n # Fall back to X-CSRF-TOKEN, to make things easier for AJAX,\n # and possible for PUT/DELETE.\n request_csrf_token = request.headers.get('X-CSRF-TOKEN', '')\n\n if not constant_time_compare(request_csrf_token, csrf_token):\n raise Response(403, REASON_BAD_TOKEN)\n\n\ndef add_csrf_token_to_response(response, request=None):\n \"\"\"Store the latest CSRF token as a cookie.\n \"\"\"\n if request is None:\n return # early parsing must've failed\n csrf_token = request.context.get('csrf_token')\n if csrf_token:\n # Don't set httponly so that we can POST using XHR.\n # https://github.com/gratipay/gratipay.com/issues/3030\n response.set_cookie('csrf_token', csrf_token, expires=CSRF_TIMEOUT, httponly=False)\n\n # Content varies with the CSRF cookie, so set the Vary header.\n patch_vary_headers(response, ('Cookie',))\n", "path": "gratipay/security/csrf.py"}, {"content": "\"\"\"Defines website authentication helpers.\n\"\"\"\nimport binascii\nfrom datetime import date, datetime\n\nfrom aspen import Response\nfrom aspen.utils import to_rfc822\nfrom gratipay.models.participant import Participant\nfrom gratipay.security import csrf\nfrom gratipay.security.crypto import constant_time_compare\nfrom gratipay.security.user import User, SESSION\n\n\nANON = 
User()\nBEGINNING_OF_EPOCH = to_rfc822(datetime(1970, 1, 1))\n\ndef _get_user_via_api_key(api_key):\n \"\"\"Given an api_key, return a User. This auth method is deprecated.\n \"\"\"\n user = User()\n user.participant = Participant._from_thing('api_key', api_key)\n if user.participant:\n p = user.participant\n today = date.today()\n if p.old_auth_usage != today:\n Participant.db.run(\"\"\"\n UPDATE participants\n SET old_auth_usage = %s\n WHERE id = %s\n \"\"\", (today, p.id))\n return user\n\ndef _get_user_via_basic_auth(auth_header):\n \"\"\"Given a basic auth header, return a User object.\n \"\"\"\n try:\n creds = binascii.a2b_base64(auth_header[len('Basic '):]).split(':', 1)\n except binascii.Error:\n raise Response(400, 'Malformed \"Authorization\" header')\n if len(creds) != 2:\n raise Response(401)\n userid, api_key = creds\n if len(userid) == 36 and '-' in userid:\n user = _get_user_via_api_key(userid) # For backward-compatibility\n else:\n try:\n userid = int(userid)\n except ValueError:\n raise Response(401)\n user = User.from_id(userid)\n if user.ANON or not constant_time_compare(user.participant.api_key, api_key):\n raise Response(401)\n return user\n\ndef _turn_off_csrf(request):\n \"\"\"Given a request, short-circuit CSRF.\n \"\"\"\n csrf_token = csrf._get_new_csrf_key()\n request.headers.cookie['csrf_token'] = csrf_token\n request.headers['X-CSRF-TOKEN'] = csrf_token\n\ndef set_request_context_user(request):\n \"\"\"Set request.context['user']. This signs the user in.\n \"\"\"\n\n request.context['user'] = user = ANON # Make sure we always have a user object, even if\n # there's an exception in the rest of this function.\n\n if request.line.uri.startswith('/assets/'):\n pass\n elif 'Authorization' in request.headers:\n header = request.headers['authorization']\n if header.startswith('Basic '):\n user = _get_user_via_basic_auth(header)\n if not user.ANON:\n _turn_off_csrf(request)\n elif SESSION in request.headers.cookie:\n token = request.headers.cookie[SESSION].value\n user = User.from_session_token(token)\n\n request.context['user'] = user\n\ndef add_auth_to_response(response, request=None):\n if request is None:\n return # early parsing must've failed\n if request.line.uri.startswith('/assets/'):\n return # assets never get auth headers and have their own caching done elsewhere\n\n response.headers['Expires'] = BEGINNING_OF_EPOCH # don't cache\n\n if SESSION in request.headers.cookie:\n user = request.context.get('user') or ANON\n if not user.ANON:\n user.keep_signed_in(response.headers.cookie)\n", "path": "gratipay/security/authentication.py"}]} | 3,245 | 935 |
gh_patches_debug_12695 | rasdani/github-patches | git_diff | lutris__lutris-1031 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
“Restore default gamma....” doesn’t restore gamma after game quits
For some reason, with the "Restore default gamma..." option enabled, the default screen gamma is not restored after a game that changes it quits (even after Lutris kills the processes in the prefix).
When running Lutris in debug mode (-d) I get this Warning:
"WARNING 2018-08-08 18:46:47,323 [display.restore_gamma:168]:xgamma is not available on your system"
Would be nice to fix this, as changing screen gamma back manually is a bit annoying.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lutris/util/display.py`
Content:
```
1 import re
2 import subprocess
3 import time
4
5 import gi
6 gi.require_version('GnomeDesktop', '3.0')
7
8 from gi.repository import Gdk, GnomeDesktop, GLib
9
10 from lutris.util import system
11 from lutris.util.log import logger
12
13 XRANDR_CACHE = None
14 XRANDR_CACHE_SET_AT = None
15 XGAMMA_FOUND = False
16
17
18 def cached(function):
19 def wrapper():
20 global XRANDR_CACHE
21 global XRANDR_CACHE_SET_AT
22
23 if XRANDR_CACHE and time.time() - XRANDR_CACHE_SET_AT < 60:
24 return XRANDR_CACHE
25 XRANDR_CACHE = function()
26 XRANDR_CACHE_SET_AT = time.time()
27 return XRANDR_CACHE
28 return wrapper
29
30
31 @cached
32 def get_vidmodes():
33 """Return video modes from XrandR"""
34 logger.debug("Retrieving video modes from XrandR")
35 xrandr_output = subprocess.Popen(["xrandr"],
36 stdout=subprocess.PIPE).communicate()[0]
37 return list([line for line in xrandr_output.decode().split("\n")])
38
39
40 def get_outputs():
41 """Return list of tuples containing output name and geometry."""
42 outputs = []
43 vid_modes = get_vidmodes()
44 if not vid_modes:
45 logger.error("xrandr didn't return anything")
46 return []
47 for line in vid_modes:
48 parts = line.split()
49 if len(parts) < 2:
50 continue
51 if parts[1] == 'connected':
52 if len(parts) == 2:
53 continue
54 if parts[2] != 'primary':
55 geom = parts[2]
56 rotate = parts[3]
57 else:
58 geom = parts[3]
59 rotate = parts[4]
60 if geom.startswith('('): # Screen turned off, no geometry
61 continue
62 if rotate.startswith('('): # Screen not rotated, no need to include
63 outputs.append((parts[0], geom, "normal"))
64 else:
65 if rotate in ("left", "right"):
66 geom_parts = geom.split('+')
67 x_y = geom_parts[0].split('x')
68 geom = "{}x{}+{}+{}".format(x_y[1], x_y[0], geom_parts[1], geom_parts[2])
69 outputs.append((parts[0], geom, rotate))
70 return outputs
71
72
73 def get_output_names():
74 """Return output names from XrandR"""
75 return [output[0] for output in get_outputs()]
76
77
78 def turn_off_except(display):
79 """Use XrandR to turn off displays except the one referenced by `display`"""
80 if not display:
81 logger.error("No active display given, no turning off every display")
82 return
83 for output in get_outputs():
84 if output[0] != display:
85 logger.info("Turning off %s", output[0])
86 subprocess.Popen(["xrandr", "--output", output[0], "--off"])
87
88
89 def get_resolutions():
90 """Return the list of supported screen resolutions."""
91 resolution_list = []
92 for line in get_vidmodes():
93 if line.startswith(" "):
94 resolution_match = re.match(r'.*?(\d+x\d+).*', line)
95 if resolution_match:
96 resolution_list.append(resolution_match.groups()[0])
97 return resolution_list
98
99
100 def get_unique_resolutions():
101 """Return available resolutions, without duplicates and ordered with highest resolution first"""
102 return sorted(set(get_resolutions()), key=lambda x: int(x.split('x')[0]), reverse=True)
103
104
105 def get_current_resolution(monitor=0):
106 """Return the current resolution for the desktop."""
107 resolution = list()
108 for line in get_vidmodes():
109 if line.startswith(" ") and "*" in line:
110 resolution_match = re.match(r'.*?(\d+x\d+).*', line)
111 if resolution_match:
112 resolution.append(resolution_match.groups()[0])
113 if monitor == 'all':
114 return resolution
115 return resolution[monitor]
116
117
118 def change_resolution(resolution):
119 """Change display resolution.
120
121 Takes a string for single monitors or a list of displays as returned
122 by get_outputs().
123 """
124 if not resolution:
125 logger.warning("No resolution provided")
126 return
127 if isinstance(resolution, str):
128 logger.debug("Switching resolution to %s", resolution)
129
130 if resolution not in get_resolutions():
131 logger.warning("Resolution %s doesn't exist.", resolution)
132 else:
133 logger.info("Changing resolution to %s", resolution)
134 subprocess.Popen(["xrandr", "-s", resolution])
135 else:
136 for display in resolution:
137 display_name = display[0]
138 logger.debug("Switching to %s on %s", display[1], display[0])
139 display_geom = display[1].split('+')
140 display_resolution = display_geom[0]
141 position = (display_geom[1], display_geom[2])
142
143 if (
144 len(display) > 2 and
145 display[2] in ('normal', 'left', 'right', 'inverted')
146 ):
147 rotation = display[2]
148 else:
149 rotation = "normal"
150 logger.info("Switching resolution of %s to %s", display_name, display_resolution)
151 subprocess.Popen([
152 "xrandr",
153 "--output", display_name,
154 "--mode", display_resolution,
155 "--pos", "{}x{}".format(position[0], position[1]),
156 "--rotate", rotation
157 ]).communicate()
158
159
160 def restore_gamma():
161 """Restores gamma to a normal level."""
162 global XGAMMA_FOUND
163 if XGAMMA_FOUND is None:
164 XGAMMA_FOUND = system.find_executable('xgamma')
165 if XGAMMA_FOUND is True:
166 subprocess.Popen(["xgamma", "-gamma", "1.0"])
167 else:
168 logger.warning('xgamma is not available on your system')
169
170
171 def get_xrandr_version():
172 """Return the major and minor version of XRandR utility"""
173 pattern = "version"
174 xrandr_output = subprocess.Popen(["xrandr", "--version"],
175 stdout=subprocess.PIPE).communicate()[0].decode()
176 position = xrandr_output.find(pattern) + len(pattern)
177 version_str = xrandr_output[position:].strip().split(".")
178 logger.debug("Found XrandR version %s", version_str)
179 try:
180 return {"major": int(version_str[0]), "minor": int(version_str[1])}
181 except ValueError:
182 logger.error("Can't find version in: %s", xrandr_output)
183 return {"major": 0, "minor": 0}
184
185
186 def get_graphics_adapaters():
187 """Return the list of graphics cards available on a system
188
189 Returns:
190 list: list of tuples containing PCI ID and description of the VGA adapter
191 """
192
193 if not system.find_executable('lspci'):
194 logger.warning('lspci is not available. List of graphics cards not available')
195 return []
196 return [
197 (pci_id, vga_desc.split(': ')[1]) for pci_id, vga_desc in [
198 line.split(maxsplit=1)
199 for line in system.execute('lspci').split('\n')
200 if 'VGA' in line
201 ]
202 ]
203
204
205 def get_providers():
206 """Return the list of available graphic cards"""
207 pattern = "name:"
208 providers = []
209 version = get_xrandr_version()
210
211 if version["major"] == 1 and version["minor"] >= 4:
212 logger.debug("Retrieving providers from XrandR")
213 xrandr_output = subprocess.Popen(["xrandr", "--listproviders"],
214 stdout=subprocess.PIPE).communicate()[0].decode()
215 for line in xrandr_output.split("\n"):
216 if line.find("Provider ") != 0:
217 continue
218 position = line.find(pattern) + len(pattern)
219 providers.append(line[position:].strip())
220
221 return providers
222
223
224 class LegacyDisplayManager:
225 @staticmethod
226 def get_resolutions():
227 return get_resolutions()
228
229 @staticmethod
230 def get_display_names():
231 return get_output_names()
232
233
234 class DisplayManager(object):
235 def __init__(self):
236 self.screen = Gdk.Screen.get_default()
237 self.rr_screen = GnomeDesktop.RRScreen.new(self.screen)
238 self.rr_config = GnomeDesktop.RRConfig.new_current(self.rr_screen)
239 self.rr_config.load_current()
240
241 @property
242 def outputs(self):
243 return self.rr_screen.list_outputs()
244
245 def get_display_names(self):
246 return [output_info.get_display_name() for output_info in self.rr_config.get_outputs()]
247
248 def get_output_modes(self, output):
249 logger.debug("Retrieving modes for %s", output)
250 resolutions = []
251 for mode in output.list_modes():
252 resolution = "%sx%s" % (mode.get_width(), mode.get_height())
253 if resolution not in resolutions:
254 resolutions.append(resolution)
255 return resolutions
256
257 def get_resolutions(self):
258 resolutions = []
259 for mode in self.rr_screen.list_modes():
260 resolutions.append("%sx%s" % (mode.get_width(), mode.get_height()))
261 return sorted(set(resolutions), key=lambda x: int(x.split('x')[0]), reverse=True)
262
263
264 try:
265 DISPLAY_MANAGER = DisplayManager()
266 except GLib.Error:
267 DISPLAY_MANAGER = LegacyDisplayManager()
268
269 USE_DRI_PRIME = len(get_graphics_adapaters()) > 1
270
271
272 def get_resolution_choices():
273 """Return list of available resolutions as label, value tuples
274 suitable for inclusion in drop-downs.
275 """
276 resolutions = DISPLAY_MANAGER.get_resolutions()
277 resolution_choices = list(zip(resolutions, resolutions))
278 resolution_choices.insert(0, ("Keep current", 'off'))
279 return resolution_choices
280
281
282 def get_output_choices():
283 """Return list of outputs for drop-downs"""
284 displays = DISPLAY_MANAGER.get_display_names()
285 output_choices = list(zip(displays, displays))
286 output_choices.insert(0, ("Off", 'off'))
287 return output_choices
288
289
290 def get_output_list():
291 """Return a list of output with their index.
292 This is used to indicate to SDL 1.2 which monitor to use.
293 """
294 choices = [
295 ('Off', 'off'),
296 ]
297 displays = DISPLAY_MANAGER.get_display_names()
298 for index, output in enumerate(displays):
299 # Display name can't be used because they might not be in the right order
300 # Using DISPLAYS to get the number of connected monitors
301 choices.append((output, str(index)))
302 return choices
303
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lutris/util/display.py b/lutris/util/display.py
--- a/lutris/util/display.py
+++ b/lutris/util/display.py
@@ -12,8 +12,7 @@
XRANDR_CACHE = None
XRANDR_CACHE_SET_AT = None
-XGAMMA_FOUND = False
-
+XGAMMA_FOUND = None
def cached(function):
def wrapper():
@@ -161,7 +160,7 @@
"""Restores gamma to a normal level."""
global XGAMMA_FOUND
if XGAMMA_FOUND is None:
- XGAMMA_FOUND = system.find_executable('xgamma')
+ XGAMMA_FOUND = bool(system.find_executable('xgamma'))
if XGAMMA_FOUND is True:
subprocess.Popen(["xgamma", "-gamma", "1.0"])
else:
| {"golden_diff": "diff --git a/lutris/util/display.py b/lutris/util/display.py\n--- a/lutris/util/display.py\n+++ b/lutris/util/display.py\n@@ -12,8 +12,7 @@\n \n XRANDR_CACHE = None\n XRANDR_CACHE_SET_AT = None\n-XGAMMA_FOUND = False\n-\n+XGAMMA_FOUND = None\n \n def cached(function):\n def wrapper():\n@@ -161,7 +160,7 @@\n \"\"\"Restores gamma to a normal level.\"\"\"\n global XGAMMA_FOUND\n if XGAMMA_FOUND is None:\n- XGAMMA_FOUND = system.find_executable('xgamma')\n+ XGAMMA_FOUND = bool(system.find_executable('xgamma'))\n if XGAMMA_FOUND is True:\n subprocess.Popen([\"xgamma\", \"-gamma\", \"1.0\"])\n else:\n", "issue": "\u201cRestore default gamma....\u201d doesn\u2019t restore gamma after game quits\nFor some reason, with \"Restore default gamma...\" option enabled, default screen gamma is not restored after a game that changes it quits (even after Lutris kills processes in the prefix)\r\nWhen running Lutris in debug mode (-d) I get this Warning:\r\n\"WARNING 2018-08-08 18:46:47,323 [display.restore_gamma:168]:xgamma is not available on your system\"\r\nWould be nice to fix this, as changing screen gamma back manually is a bit annoying.\n", "before_files": [{"content": "import re\nimport subprocess\nimport time\n\nimport gi\ngi.require_version('GnomeDesktop', '3.0')\n\nfrom gi.repository import Gdk, GnomeDesktop, GLib\n\nfrom lutris.util import system\nfrom lutris.util.log import logger\n\nXRANDR_CACHE = None\nXRANDR_CACHE_SET_AT = None\nXGAMMA_FOUND = False\n\n\ndef cached(function):\n def wrapper():\n global XRANDR_CACHE\n global XRANDR_CACHE_SET_AT\n\n if XRANDR_CACHE and time.time() - XRANDR_CACHE_SET_AT < 60:\n return XRANDR_CACHE\n XRANDR_CACHE = function()\n XRANDR_CACHE_SET_AT = time.time()\n return XRANDR_CACHE\n return wrapper\n\n\n@cached\ndef get_vidmodes():\n \"\"\"Return video modes from XrandR\"\"\"\n logger.debug(\"Retrieving video modes from XrandR\")\n xrandr_output = subprocess.Popen([\"xrandr\"],\n stdout=subprocess.PIPE).communicate()[0]\n return list([line for line in xrandr_output.decode().split(\"\\n\")])\n\n\ndef get_outputs():\n \"\"\"Return list of tuples containing output name and geometry.\"\"\"\n outputs = []\n vid_modes = get_vidmodes()\n if not vid_modes:\n logger.error(\"xrandr didn't return anything\")\n return []\n for line in vid_modes:\n parts = line.split()\n if len(parts) < 2:\n continue\n if parts[1] == 'connected':\n if len(parts) == 2:\n continue\n if parts[2] != 'primary':\n geom = parts[2]\n rotate = parts[3]\n else:\n geom = parts[3]\n rotate = parts[4]\n if geom.startswith('('): # Screen turned off, no geometry\n continue\n if rotate.startswith('('): # Screen not rotated, no need to include\n outputs.append((parts[0], geom, \"normal\"))\n else:\n if rotate in (\"left\", \"right\"):\n geom_parts = geom.split('+')\n x_y = geom_parts[0].split('x')\n geom = \"{}x{}+{}+{}\".format(x_y[1], x_y[0], geom_parts[1], geom_parts[2])\n outputs.append((parts[0], geom, rotate))\n return outputs\n\n\ndef get_output_names():\n \"\"\"Return output names from XrandR\"\"\"\n return [output[0] for output in get_outputs()]\n\n\ndef turn_off_except(display):\n \"\"\"Use XrandR to turn off displays except the one referenced by `display`\"\"\"\n if not display:\n logger.error(\"No active display given, no turning off every display\")\n return\n for output in get_outputs():\n if output[0] != display:\n logger.info(\"Turning off %s\", output[0])\n subprocess.Popen([\"xrandr\", \"--output\", output[0], \"--off\"])\n\n\ndef get_resolutions():\n \"\"\"Return the list 
of supported screen resolutions.\"\"\"\n resolution_list = []\n for line in get_vidmodes():\n if line.startswith(\" \"):\n resolution_match = re.match(r'.*?(\\d+x\\d+).*', line)\n if resolution_match:\n resolution_list.append(resolution_match.groups()[0])\n return resolution_list\n\n\ndef get_unique_resolutions():\n \"\"\"Return available resolutions, without duplicates and ordered with highest resolution first\"\"\"\n return sorted(set(get_resolutions()), key=lambda x: int(x.split('x')[0]), reverse=True)\n\n\ndef get_current_resolution(monitor=0):\n \"\"\"Return the current resolution for the desktop.\"\"\"\n resolution = list()\n for line in get_vidmodes():\n if line.startswith(\" \") and \"*\" in line:\n resolution_match = re.match(r'.*?(\\d+x\\d+).*', line)\n if resolution_match:\n resolution.append(resolution_match.groups()[0])\n if monitor == 'all':\n return resolution\n return resolution[monitor]\n\n\ndef change_resolution(resolution):\n \"\"\"Change display resolution.\n\n Takes a string for single monitors or a list of displays as returned\n by get_outputs().\n \"\"\"\n if not resolution:\n logger.warning(\"No resolution provided\")\n return\n if isinstance(resolution, str):\n logger.debug(\"Switching resolution to %s\", resolution)\n\n if resolution not in get_resolutions():\n logger.warning(\"Resolution %s doesn't exist.\", resolution)\n else:\n logger.info(\"Changing resolution to %s\", resolution)\n subprocess.Popen([\"xrandr\", \"-s\", resolution])\n else:\n for display in resolution:\n display_name = display[0]\n logger.debug(\"Switching to %s on %s\", display[1], display[0])\n display_geom = display[1].split('+')\n display_resolution = display_geom[0]\n position = (display_geom[1], display_geom[2])\n\n if (\n len(display) > 2 and\n display[2] in ('normal', 'left', 'right', 'inverted')\n ):\n rotation = display[2]\n else:\n rotation = \"normal\"\n logger.info(\"Switching resolution of %s to %s\", display_name, display_resolution)\n subprocess.Popen([\n \"xrandr\",\n \"--output\", display_name,\n \"--mode\", display_resolution,\n \"--pos\", \"{}x{}\".format(position[0], position[1]),\n \"--rotate\", rotation\n ]).communicate()\n\n\ndef restore_gamma():\n \"\"\"Restores gamma to a normal level.\"\"\"\n global XGAMMA_FOUND\n if XGAMMA_FOUND is None:\n XGAMMA_FOUND = system.find_executable('xgamma')\n if XGAMMA_FOUND is True:\n subprocess.Popen([\"xgamma\", \"-gamma\", \"1.0\"])\n else:\n logger.warning('xgamma is not available on your system')\n\n\ndef get_xrandr_version():\n \"\"\"Return the major and minor version of XRandR utility\"\"\"\n pattern = \"version\"\n xrandr_output = subprocess.Popen([\"xrandr\", \"--version\"],\n stdout=subprocess.PIPE).communicate()[0].decode()\n position = xrandr_output.find(pattern) + len(pattern)\n version_str = xrandr_output[position:].strip().split(\".\")\n logger.debug(\"Found XrandR version %s\", version_str)\n try:\n return {\"major\": int(version_str[0]), \"minor\": int(version_str[1])}\n except ValueError:\n logger.error(\"Can't find version in: %s\", xrandr_output)\n return {\"major\": 0, \"minor\": 0}\n\n\ndef get_graphics_adapaters():\n \"\"\"Return the list of graphics cards available on a system\n\n Returns:\n list: list of tuples containing PCI ID and description of the VGA adapter\n \"\"\"\n\n if not system.find_executable('lspci'):\n logger.warning('lspci is not available. 
List of graphics cards not available')\n return []\n return [\n (pci_id, vga_desc.split(': ')[1]) for pci_id, vga_desc in [\n line.split(maxsplit=1)\n for line in system.execute('lspci').split('\\n')\n if 'VGA' in line\n ]\n ]\n\n\ndef get_providers():\n \"\"\"Return the list of available graphic cards\"\"\"\n pattern = \"name:\"\n providers = []\n version = get_xrandr_version()\n\n if version[\"major\"] == 1 and version[\"minor\"] >= 4:\n logger.debug(\"Retrieving providers from XrandR\")\n xrandr_output = subprocess.Popen([\"xrandr\", \"--listproviders\"],\n stdout=subprocess.PIPE).communicate()[0].decode()\n for line in xrandr_output.split(\"\\n\"):\n if line.find(\"Provider \") != 0:\n continue\n position = line.find(pattern) + len(pattern)\n providers.append(line[position:].strip())\n\n return providers\n\n\nclass LegacyDisplayManager:\n @staticmethod\n def get_resolutions():\n return get_resolutions()\n\n @staticmethod\n def get_display_names():\n return get_output_names()\n\n\nclass DisplayManager(object):\n def __init__(self):\n self.screen = Gdk.Screen.get_default()\n self.rr_screen = GnomeDesktop.RRScreen.new(self.screen)\n self.rr_config = GnomeDesktop.RRConfig.new_current(self.rr_screen)\n self.rr_config.load_current()\n\n @property\n def outputs(self):\n return self.rr_screen.list_outputs()\n\n def get_display_names(self):\n return [output_info.get_display_name() for output_info in self.rr_config.get_outputs()]\n\n def get_output_modes(self, output):\n logger.debug(\"Retrieving modes for %s\", output)\n resolutions = []\n for mode in output.list_modes():\n resolution = \"%sx%s\" % (mode.get_width(), mode.get_height())\n if resolution not in resolutions:\n resolutions.append(resolution)\n return resolutions\n\n def get_resolutions(self):\n resolutions = []\n for mode in self.rr_screen.list_modes():\n resolutions.append(\"%sx%s\" % (mode.get_width(), mode.get_height()))\n return sorted(set(resolutions), key=lambda x: int(x.split('x')[0]), reverse=True)\n\n\ntry:\n DISPLAY_MANAGER = DisplayManager()\nexcept GLib.Error:\n DISPLAY_MANAGER = LegacyDisplayManager()\n\nUSE_DRI_PRIME = len(get_graphics_adapaters()) > 1\n\n\ndef get_resolution_choices():\n \"\"\"Return list of available resolutions as label, value tuples\n suitable for inclusion in drop-downs.\n \"\"\"\n resolutions = DISPLAY_MANAGER.get_resolutions()\n resolution_choices = list(zip(resolutions, resolutions))\n resolution_choices.insert(0, (\"Keep current\", 'off'))\n return resolution_choices\n\n\ndef get_output_choices():\n \"\"\"Return list of outputs for drop-downs\"\"\"\n displays = DISPLAY_MANAGER.get_display_names()\n output_choices = list(zip(displays, displays))\n output_choices.insert(0, (\"Off\", 'off'))\n return output_choices\n\n\ndef get_output_list():\n \"\"\"Return a list of output with their index.\n This is used to indicate to SDL 1.2 which monitor to use.\n \"\"\"\n choices = [\n ('Off', 'off'),\n ]\n displays = DISPLAY_MANAGER.get_display_names()\n for index, output in enumerate(displays):\n # Display name can't be used because they might not be in the right order\n # Using DISPLAYS to get the number of connected monitors\n choices.append((output, str(index)))\n return choices\n", "path": "lutris/util/display.py"}], "after_files": [{"content": "import re\nimport subprocess\nimport time\n\nimport gi\ngi.require_version('GnomeDesktop', '3.0')\n\nfrom gi.repository import Gdk, GnomeDesktop, GLib\n\nfrom lutris.util import system\nfrom lutris.util.log import logger\n\nXRANDR_CACHE = 
None\nXRANDR_CACHE_SET_AT = None\nXGAMMA_FOUND = None\n\ndef cached(function):\n def wrapper():\n global XRANDR_CACHE\n global XRANDR_CACHE_SET_AT\n\n if XRANDR_CACHE and time.time() - XRANDR_CACHE_SET_AT < 60:\n return XRANDR_CACHE\n XRANDR_CACHE = function()\n XRANDR_CACHE_SET_AT = time.time()\n return XRANDR_CACHE\n return wrapper\n\n\n@cached\ndef get_vidmodes():\n \"\"\"Return video modes from XrandR\"\"\"\n logger.debug(\"Retrieving video modes from XrandR\")\n xrandr_output = subprocess.Popen([\"xrandr\"],\n stdout=subprocess.PIPE).communicate()[0]\n return list([line for line in xrandr_output.decode().split(\"\\n\")])\n\n\ndef get_outputs():\n \"\"\"Return list of tuples containing output name and geometry.\"\"\"\n outputs = []\n vid_modes = get_vidmodes()\n if not vid_modes:\n logger.error(\"xrandr didn't return anything\")\n return []\n for line in vid_modes:\n parts = line.split()\n if len(parts) < 2:\n continue\n if parts[1] == 'connected':\n if len(parts) == 2:\n continue\n if parts[2] != 'primary':\n geom = parts[2]\n rotate = parts[3]\n else:\n geom = parts[3]\n rotate = parts[4]\n if geom.startswith('('): # Screen turned off, no geometry\n continue\n if rotate.startswith('('): # Screen not rotated, no need to include\n outputs.append((parts[0], geom, \"normal\"))\n else:\n if rotate in (\"left\", \"right\"):\n geom_parts = geom.split('+')\n x_y = geom_parts[0].split('x')\n geom = \"{}x{}+{}+{}\".format(x_y[1], x_y[0], geom_parts[1], geom_parts[2])\n outputs.append((parts[0], geom, rotate))\n return outputs\n\n\ndef get_output_names():\n \"\"\"Return output names from XrandR\"\"\"\n return [output[0] for output in get_outputs()]\n\n\ndef turn_off_except(display):\n \"\"\"Use XrandR to turn off displays except the one referenced by `display`\"\"\"\n if not display:\n logger.error(\"No active display given, no turning off every display\")\n return\n for output in get_outputs():\n if output[0] != display:\n logger.info(\"Turning off %s\", output[0])\n subprocess.Popen([\"xrandr\", \"--output\", output[0], \"--off\"])\n\n\ndef get_resolutions():\n \"\"\"Return the list of supported screen resolutions.\"\"\"\n resolution_list = []\n for line in get_vidmodes():\n if line.startswith(\" \"):\n resolution_match = re.match(r'.*?(\\d+x\\d+).*', line)\n if resolution_match:\n resolution_list.append(resolution_match.groups()[0])\n return resolution_list\n\n\ndef get_unique_resolutions():\n \"\"\"Return available resolutions, without duplicates and ordered with highest resolution first\"\"\"\n return sorted(set(get_resolutions()), key=lambda x: int(x.split('x')[0]), reverse=True)\n\n\ndef get_current_resolution(monitor=0):\n \"\"\"Return the current resolution for the desktop.\"\"\"\n resolution = list()\n for line in get_vidmodes():\n if line.startswith(\" \") and \"*\" in line:\n resolution_match = re.match(r'.*?(\\d+x\\d+).*', line)\n if resolution_match:\n resolution.append(resolution_match.groups()[0])\n if monitor == 'all':\n return resolution\n return resolution[monitor]\n\n\ndef change_resolution(resolution):\n \"\"\"Change display resolution.\n\n Takes a string for single monitors or a list of displays as returned\n by get_outputs().\n \"\"\"\n if not resolution:\n logger.warning(\"No resolution provided\")\n return\n if isinstance(resolution, str):\n logger.debug(\"Switching resolution to %s\", resolution)\n\n if resolution not in get_resolutions():\n logger.warning(\"Resolution %s doesn't exist.\", resolution)\n else:\n logger.info(\"Changing resolution to %s\", resolution)\n 
subprocess.Popen([\"xrandr\", \"-s\", resolution])\n else:\n for display in resolution:\n display_name = display[0]\n logger.debug(\"Switching to %s on %s\", display[1], display[0])\n display_geom = display[1].split('+')\n display_resolution = display_geom[0]\n position = (display_geom[1], display_geom[2])\n\n if (\n len(display) > 2 and\n display[2] in ('normal', 'left', 'right', 'inverted')\n ):\n rotation = display[2]\n else:\n rotation = \"normal\"\n logger.info(\"Switching resolution of %s to %s\", display_name, display_resolution)\n subprocess.Popen([\n \"xrandr\",\n \"--output\", display_name,\n \"--mode\", display_resolution,\n \"--pos\", \"{}x{}\".format(position[0], position[1]),\n \"--rotate\", rotation\n ]).communicate()\n\n\ndef restore_gamma():\n \"\"\"Restores gamma to a normal level.\"\"\"\n global XGAMMA_FOUND\n if XGAMMA_FOUND is None:\n XGAMMA_FOUND = bool(system.find_executable('xgamma'))\n if XGAMMA_FOUND is True:\n subprocess.Popen([\"xgamma\", \"-gamma\", \"1.0\"])\n else:\n logger.warning('xgamma is not available on your system')\n\n\ndef get_xrandr_version():\n \"\"\"Return the major and minor version of XRandR utility\"\"\"\n pattern = \"version\"\n xrandr_output = subprocess.Popen([\"xrandr\", \"--version\"],\n stdout=subprocess.PIPE).communicate()[0].decode()\n position = xrandr_output.find(pattern) + len(pattern)\n version_str = xrandr_output[position:].strip().split(\".\")\n logger.debug(\"Found XrandR version %s\", version_str)\n try:\n return {\"major\": int(version_str[0]), \"minor\": int(version_str[1])}\n except ValueError:\n logger.error(\"Can't find version in: %s\", xrandr_output)\n return {\"major\": 0, \"minor\": 0}\n\n\ndef get_graphics_adapaters():\n \"\"\"Return the list of graphics cards available on a system\n\n Returns:\n list: list of tuples containing PCI ID and description of the VGA adapter\n \"\"\"\n\n if not system.find_executable('lspci'):\n logger.warning('lspci is not available. 
List of graphics cards not available')\n return []\n return [\n (pci_id, vga_desc.split(': ')[1]) for pci_id, vga_desc in [\n line.split(maxsplit=1)\n for line in system.execute('lspci').split('\\n')\n if 'VGA' in line\n ]\n ]\n\n\ndef get_providers():\n \"\"\"Return the list of available graphic cards\"\"\"\n pattern = \"name:\"\n providers = []\n version = get_xrandr_version()\n\n if version[\"major\"] == 1 and version[\"minor\"] >= 4:\n logger.debug(\"Retrieving providers from XrandR\")\n xrandr_output = subprocess.Popen([\"xrandr\", \"--listproviders\"],\n stdout=subprocess.PIPE).communicate()[0].decode()\n for line in xrandr_output.split(\"\\n\"):\n if line.find(\"Provider \") != 0:\n continue\n position = line.find(pattern) + len(pattern)\n providers.append(line[position:].strip())\n\n return providers\n\n\nclass LegacyDisplayManager:\n @staticmethod\n def get_resolutions():\n return get_resolutions()\n\n @staticmethod\n def get_display_names():\n return get_output_names()\n\n\nclass DisplayManager(object):\n def __init__(self):\n self.screen = Gdk.Screen.get_default()\n self.rr_screen = GnomeDesktop.RRScreen.new(self.screen)\n self.rr_config = GnomeDesktop.RRConfig.new_current(self.rr_screen)\n self.rr_config.load_current()\n\n @property\n def outputs(self):\n return self.rr_screen.list_outputs()\n\n def get_display_names(self):\n return [output_info.get_display_name() for output_info in self.rr_config.get_outputs()]\n\n def get_output_modes(self, output):\n logger.debug(\"Retrieving modes for %s\", output)\n resolutions = []\n for mode in output.list_modes():\n resolution = \"%sx%s\" % (mode.get_width(), mode.get_height())\n if resolution not in resolutions:\n resolutions.append(resolution)\n return resolutions\n\n def get_resolutions(self):\n resolutions = []\n for mode in self.rr_screen.list_modes():\n resolutions.append(\"%sx%s\" % (mode.get_width(), mode.get_height()))\n return sorted(set(resolutions), key=lambda x: int(x.split('x')[0]), reverse=True)\n\n\ntry:\n DISPLAY_MANAGER = DisplayManager()\nexcept GLib.Error:\n DISPLAY_MANAGER = LegacyDisplayManager()\n\nUSE_DRI_PRIME = len(get_graphics_adapaters()) > 1\n\n\ndef get_resolution_choices():\n \"\"\"Return list of available resolutions as label, value tuples\n suitable for inclusion in drop-downs.\n \"\"\"\n resolutions = DISPLAY_MANAGER.get_resolutions()\n resolution_choices = list(zip(resolutions, resolutions))\n resolution_choices.insert(0, (\"Keep current\", 'off'))\n return resolution_choices\n\n\ndef get_output_choices():\n \"\"\"Return list of outputs for drop-downs\"\"\"\n displays = DISPLAY_MANAGER.get_display_names()\n output_choices = list(zip(displays, displays))\n output_choices.insert(0, (\"Off\", 'off'))\n return output_choices\n\n\ndef get_output_list():\n \"\"\"Return a list of output with their index.\n This is used to indicate to SDL 1.2 which monitor to use.\n \"\"\"\n choices = [\n ('Off', 'off'),\n ]\n displays = DISPLAY_MANAGER.get_display_names()\n for index, output in enumerate(displays):\n # Display name can't be used because they might not be in the right order\n # Using DISPLAYS to get the number of connected monitors\n choices.append((output, str(index)))\n return choices\n", "path": "lutris/util/display.py"}]} | 3,468 | 194 |
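
The golden diff above works because `None` is used as a tri-state sentinel: with the original `XGAMMA_FOUND = False`, the `if XGAMMA_FOUND is None` probe could never fire, so `xgamma` was never looked up and the warning branch always ran; the second hunk also coerces the lookup result (a path string or `None`) to `bool` so the `is True` check can succeed. A minimal standalone sketch of that pattern — not Lutris code; `shutil.which` stands in for `system.find_executable` here as an assumption:

```python
from shutil import which

XGAMMA_FOUND = None  # None = "not probed yet"; True/False = cached result


def restore_gamma():
    """Reset gamma via xgamma, probing for the binary only once."""
    global XGAMMA_FOUND
    if XGAMMA_FOUND is None:
        # which() returns a path string or None, so coerce to bool
        # before caching, otherwise the `is True` style check fails.
        XGAMMA_FOUND = bool(which("xgamma"))
    if XGAMMA_FOUND:
        print("would run: xgamma -gamma 1.0")
    else:
        print("xgamma is not available on your system")


restore_gamma()
```
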
gh_patches_debug_26910 | rasdani/github-patches | git_diff | frappe__frappe-24662 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ImportError: cannot import name 'LazyTranslate' from partially initialized module 'frappe.translate' (most likely due to a circular import)
```
> bench get-untranslated --app erpnext de untranslated.csv
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "apps/frappe/frappe/utils/bench_helper.py", line 114, in <module>
main()
File "apps/frappe/frappe/utils/bench_helper.py", line 20, in main
click.Group(commands=commands)(prog_name="bench")
File "env/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "env/lib/python3.11/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "env/lib/python3.11/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "env/lib/python3.11/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "env/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "env/lib/python3.11/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "env/lib/python3.11/site-packages/click/decorators.py", line 33, in new_func
return f(get_current_context(), *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/commands/__init__.py", line 29, in _func
ret = f(frappe._dict(ctx.obj), *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/commands/translate.py", line 59, in get_untranslated
import frappe.translate
File "apps/frappe/frappe/translate.py", line 23, in <module>
from frappe.gettext.extractors.utils import extract_messages_from_code, is_translatable
File "apps/frappe/frappe/gettext/extractors/utils.py", line 4, in <module>
from frappe.model.utils import InvalidIncludePath, render_include
File "apps/frappe/frappe/model/__init__.py", line 137, in <module>
{"fieldname": "name", "fieldtype": "Link", "label": _lt("ID")},
^^^^^^^^^
File "apps/frappe/frappe/__init__.py", line 133, in _lt
from frappe.translate import LazyTranslate
ImportError: cannot import name 'LazyTranslate' from partially initialized module 'frappe.translate' (most likely due to a circular import) (apps/frappe/frappe/translate.py)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `frappe/model/__init__.py`
Content:
```
1 # Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
2 # License: MIT. See LICENSE
3
4 # model __init__.py
5 import frappe
6 from frappe import _, _lt
7
8 data_fieldtypes = (
9 "Currency",
10 "Int",
11 "Long Int",
12 "Float",
13 "Percent",
14 "Check",
15 "Small Text",
16 "Long Text",
17 "Code",
18 "Text Editor",
19 "Markdown Editor",
20 "HTML Editor",
21 "Date",
22 "Datetime",
23 "Time",
24 "Text",
25 "Data",
26 "Link",
27 "Dynamic Link",
28 "Password",
29 "Select",
30 "Rating",
31 "Read Only",
32 "Attach",
33 "Attach Image",
34 "Signature",
35 "Color",
36 "Barcode",
37 "Geolocation",
38 "Duration",
39 "Icon",
40 "Phone",
41 "Autocomplete",
42 "JSON",
43 )
44
45 float_like_fields = {"Float", "Currency", "Percent"}
46 datetime_fields = {"Datetime", "Date", "Time"}
47
48 attachment_fieldtypes = (
49 "Attach",
50 "Attach Image",
51 )
52
53 no_value_fields = (
54 "Section Break",
55 "Column Break",
56 "Tab Break",
57 "HTML",
58 "Table",
59 "Table MultiSelect",
60 "Button",
61 "Image",
62 "Fold",
63 "Heading",
64 )
65
66 display_fieldtypes = (
67 "Section Break",
68 "Column Break",
69 "Tab Break",
70 "HTML",
71 "Button",
72 "Image",
73 "Fold",
74 "Heading",
75 )
76
77 numeric_fieldtypes = ("Currency", "Int", "Long Int", "Float", "Percent", "Check")
78
79 data_field_options = ("Email", "Name", "Phone", "URL", "Barcode")
80
81 default_fields = (
82 "doctype",
83 "name",
84 "owner",
85 "creation",
86 "modified",
87 "modified_by",
88 "docstatus",
89 "idx",
90 )
91
92 child_table_fields = ("parent", "parentfield", "parenttype")
93
94 optional_fields = ("_user_tags", "_comments", "_assign", "_liked_by", "_seen")
95
96 table_fields = ("Table", "Table MultiSelect")
97
98 core_doctypes_list = (
99 "DefaultValue",
100 "DocType",
101 "DocField",
102 "DocPerm",
103 "DocType Action",
104 "DocType Link",
105 "User",
106 "Role",
107 "Has Role",
108 "Page",
109 "Module Def",
110 "Print Format",
111 "Report",
112 "Customize Form",
113 "Customize Form Field",
114 "Property Setter",
115 "Custom Field",
116 "Client Script",
117 )
118
119 log_types = (
120 "Version",
121 "Error Log",
122 "Scheduled Job Log",
123 "Event Sync Log",
124 "Event Update Log",
125 "Access Log",
126 "View Log",
127 "Activity Log",
128 "Energy Point Log",
129 "Notification Log",
130 "Email Queue",
131 "DocShare",
132 "Document Follow",
133 "Console Log",
134 )
135
136 std_fields = [
137 {"fieldname": "name", "fieldtype": "Link", "label": _lt("ID")},
138 {"fieldname": "owner", "fieldtype": "Link", "label": _lt("Created By"), "options": "User"},
139 {"fieldname": "idx", "fieldtype": "Int", "label": _lt("Index")},
140 {"fieldname": "creation", "fieldtype": "Datetime", "label": _lt("Created On")},
141 {"fieldname": "modified", "fieldtype": "Datetime", "label": _lt("Last Updated On")},
142 {
143 "fieldname": "modified_by",
144 "fieldtype": "Link",
145 "label": _lt("Last Updated By"),
146 "options": "User",
147 },
148 {"fieldname": "_user_tags", "fieldtype": "Data", "label": _lt("Tags")},
149 {"fieldname": "_liked_by", "fieldtype": "Data", "label": _lt("Liked By")},
150 {"fieldname": "_comments", "fieldtype": "Text", "label": _lt("Comments")},
151 {"fieldname": "_assign", "fieldtype": "Text", "label": _lt("Assigned To")},
152 {"fieldname": "docstatus", "fieldtype": "Int", "label": _lt("Document Status")},
153 ]
154
155
156 def delete_fields(args_dict, delete=0):
157 """
158 Delete a field.
159 * Deletes record from `tabDocField`
160 * If not single doctype: Drops column from table
161 * If single, deletes record from `tabSingles`
162 args_dict = { dt: [field names] }
163 """
164 import frappe.utils
165
166 for dt in args_dict:
167 fields = args_dict[dt]
168 if not fields:
169 continue
170
171 frappe.db.delete(
172 "DocField",
173 {
174 "parent": dt,
175 "fieldname": ("in", fields),
176 },
177 )
178
179 # Delete the data/column only if delete is specified
180 if not delete:
181 continue
182
183 if frappe.db.get_value("DocType", dt, "issingle"):
184 frappe.db.delete(
185 "Singles",
186 {
187 "doctype": dt,
188 "field": ("in", fields),
189 },
190 )
191 else:
192 existing_fields = frappe.db.describe(dt)
193 existing_fields = existing_fields and [e[0] for e in existing_fields] or []
194 fields_need_to_delete = set(fields) & set(existing_fields)
195 if not fields_need_to_delete:
196 continue
197
198 if frappe.db.db_type == "mariadb":
199 # mariadb implicitly commits before DDL, make it explicit
200 frappe.db.commit()
201
202 query = "ALTER TABLE `tab%s` " % dt + ", ".join(
203 "DROP COLUMN `%s`" % f for f in fields_need_to_delete
204 )
205 frappe.db.sql(query)
206
207 if frappe.db.db_type == "postgres":
208 # commit the results to db
209 frappe.db.commit()
210
211
212 def get_permitted_fields(
213 doctype: str,
214 parenttype: str | None = None,
215 user: str | None = None,
216 permission_type: str | None = None,
217 *,
218 ignore_virtual=False,
219 ) -> list[str]:
220 meta = frappe.get_meta(doctype)
221 valid_columns = meta.get_valid_columns()
222
223 if doctype in core_doctypes_list:
224 return valid_columns
225
226 # DocType has only fields of type Table (Table, Table MultiSelect)
227 if set(valid_columns).issubset(default_fields):
228 return valid_columns
229
230 if permission_type is None:
231 permission_type = "select" if frappe.only_has_select_perm(doctype, user=user) else "read"
232
233 meta_fields = meta.default_fields.copy()
234 optional_meta_fields = [x for x in optional_fields if x in valid_columns]
235
236 if permitted_fields := meta.get_permitted_fieldnames(
237 parenttype=parenttype,
238 user=user,
239 permission_type=permission_type,
240 with_virtual_fields=not ignore_virtual,
241 ):
242 if permission_type == "select":
243 return permitted_fields
244
245 if meta.istable:
246 meta_fields.extend(child_table_fields)
247
248 return meta_fields + permitted_fields + optional_meta_fields
249
250 return meta_fields + optional_meta_fields
251
252
253 def is_default_field(fieldname: str) -> bool:
254 return fieldname in default_fields
255
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/frappe/model/__init__.py b/frappe/model/__init__.py
--- a/frappe/model/__init__.py
+++ b/frappe/model/__init__.py
@@ -134,22 +134,22 @@
)
std_fields = [
- {"fieldname": "name", "fieldtype": "Link", "label": _lt("ID")},
- {"fieldname": "owner", "fieldtype": "Link", "label": _lt("Created By"), "options": "User"},
- {"fieldname": "idx", "fieldtype": "Int", "label": _lt("Index")},
- {"fieldname": "creation", "fieldtype": "Datetime", "label": _lt("Created On")},
- {"fieldname": "modified", "fieldtype": "Datetime", "label": _lt("Last Updated On")},
+ {"fieldname": "name", "fieldtype": "Link", "label": "ID"},
+ {"fieldname": "owner", "fieldtype": "Link", "label": "Created By", "options": "User"},
+ {"fieldname": "idx", "fieldtype": "Int", "label": "Index"},
+ {"fieldname": "creation", "fieldtype": "Datetime", "label": "Created On"},
+ {"fieldname": "modified", "fieldtype": "Datetime", "label": "Last Updated On"},
{
"fieldname": "modified_by",
"fieldtype": "Link",
- "label": _lt("Last Updated By"),
+ "label": "Last Updated By",
"options": "User",
},
- {"fieldname": "_user_tags", "fieldtype": "Data", "label": _lt("Tags")},
- {"fieldname": "_liked_by", "fieldtype": "Data", "label": _lt("Liked By")},
- {"fieldname": "_comments", "fieldtype": "Text", "label": _lt("Comments")},
- {"fieldname": "_assign", "fieldtype": "Text", "label": _lt("Assigned To")},
- {"fieldname": "docstatus", "fieldtype": "Int", "label": _lt("Document Status")},
+ {"fieldname": "_user_tags", "fieldtype": "Data", "label": "Tags"},
+ {"fieldname": "_liked_by", "fieldtype": "Data", "label": "Liked By"},
+ {"fieldname": "_comments", "fieldtype": "Text", "label": "Comments"},
+ {"fieldname": "_assign", "fieldtype": "Text", "label": "Assigned To"},
+ {"fieldname": "docstatus", "fieldtype": "Int", "label": "Document Status"},
]
| {"golden_diff": "diff --git a/frappe/model/__init__.py b/frappe/model/__init__.py\n--- a/frappe/model/__init__.py\n+++ b/frappe/model/__init__.py\n@@ -134,22 +134,22 @@\n )\n \n std_fields = [\n-\t{\"fieldname\": \"name\", \"fieldtype\": \"Link\", \"label\": _lt(\"ID\")},\n-\t{\"fieldname\": \"owner\", \"fieldtype\": \"Link\", \"label\": _lt(\"Created By\"), \"options\": \"User\"},\n-\t{\"fieldname\": \"idx\", \"fieldtype\": \"Int\", \"label\": _lt(\"Index\")},\n-\t{\"fieldname\": \"creation\", \"fieldtype\": \"Datetime\", \"label\": _lt(\"Created On\")},\n-\t{\"fieldname\": \"modified\", \"fieldtype\": \"Datetime\", \"label\": _lt(\"Last Updated On\")},\n+\t{\"fieldname\": \"name\", \"fieldtype\": \"Link\", \"label\": \"ID\"},\n+\t{\"fieldname\": \"owner\", \"fieldtype\": \"Link\", \"label\": \"Created By\", \"options\": \"User\"},\n+\t{\"fieldname\": \"idx\", \"fieldtype\": \"Int\", \"label\": \"Index\"},\n+\t{\"fieldname\": \"creation\", \"fieldtype\": \"Datetime\", \"label\": \"Created On\"},\n+\t{\"fieldname\": \"modified\", \"fieldtype\": \"Datetime\", \"label\": \"Last Updated On\"},\n \t{\n \t\t\"fieldname\": \"modified_by\",\n \t\t\"fieldtype\": \"Link\",\n-\t\t\"label\": _lt(\"Last Updated By\"),\n+\t\t\"label\": \"Last Updated By\",\n \t\t\"options\": \"User\",\n \t},\n-\t{\"fieldname\": \"_user_tags\", \"fieldtype\": \"Data\", \"label\": _lt(\"Tags\")},\n-\t{\"fieldname\": \"_liked_by\", \"fieldtype\": \"Data\", \"label\": _lt(\"Liked By\")},\n-\t{\"fieldname\": \"_comments\", \"fieldtype\": \"Text\", \"label\": _lt(\"Comments\")},\n-\t{\"fieldname\": \"_assign\", \"fieldtype\": \"Text\", \"label\": _lt(\"Assigned To\")},\n-\t{\"fieldname\": \"docstatus\", \"fieldtype\": \"Int\", \"label\": _lt(\"Document Status\")},\n+\t{\"fieldname\": \"_user_tags\", \"fieldtype\": \"Data\", \"label\": \"Tags\"},\n+\t{\"fieldname\": \"_liked_by\", \"fieldtype\": \"Data\", \"label\": \"Liked By\"},\n+\t{\"fieldname\": \"_comments\", \"fieldtype\": \"Text\", \"label\": \"Comments\"},\n+\t{\"fieldname\": \"_assign\", \"fieldtype\": \"Text\", \"label\": \"Assigned To\"},\n+\t{\"fieldname\": \"docstatus\", \"fieldtype\": \"Int\", \"label\": \"Document Status\"},\n ]\n", "issue": "ImportError: cannot import name 'LazyTranslate' from partially initialized module 'frappe.translate' (most likely due to a circular import)\n```\r\n> bench get-untranslated --app erpnext de untranslated.csv\r\n\r\nTraceback (most recent call last):\r\n File \"<frozen runpy>\", line 198, in _run_module_as_main\r\n File \"<frozen runpy>\", line 88, in _run_code\r\n File \"apps/frappe/frappe/utils/bench_helper.py\", line 114, in <module>\r\n main()\r\n File \"apps/frappe/frappe/utils/bench_helper.py\", line 20, in main\r\n click.Group(commands=commands)(prog_name=\"bench\")\r\n File \"env/lib/python3.11/site-packages/click/core.py\", line 1157, in __call__\r\n return self.main(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"env/lib/python3.11/site-packages/click/core.py\", line 1078, in main\r\n rv = self.invoke(ctx)\r\n ^^^^^^^^^^^^^^^^\r\n File \"env/lib/python3.11/site-packages/click/core.py\", line 1688, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"env/lib/python3.11/site-packages/click/core.py\", line 1688, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"env/lib/python3.11/site-packages/click/core.py\", line 1434, in invoke\r\n return ctx.invoke(self.callback, 
**ctx.params)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"env/lib/python3.11/site-packages/click/core.py\", line 783, in invoke\r\n return __callback(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"env/lib/python3.11/site-packages/click/decorators.py\", line 33, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"apps/frappe/frappe/commands/__init__.py\", line 29, in _func\r\n ret = f(frappe._dict(ctx.obj), *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"apps/frappe/frappe/commands/translate.py\", line 59, in get_untranslated\r\n import frappe.translate\r\n File \"apps/frappe/frappe/translate.py\", line 23, in <module>\r\n from frappe.gettext.extractors.utils import extract_messages_from_code, is_translatable\r\n File \"apps/frappe/frappe/gettext/extractors/utils.py\", line 4, in <module>\r\n from frappe.model.utils import InvalidIncludePath, render_include\r\n File \"apps/frappe/frappe/model/__init__.py\", line 137, in <module>\r\n {\"fieldname\": \"name\", \"fieldtype\": \"Link\", \"label\": _lt(\"ID\")},\r\n ^^^^^^^^^\r\n File \"apps/frappe/frappe/__init__.py\", line 133, in _lt\r\n from frappe.translate import LazyTranslate\r\nImportError: cannot import name 'LazyTranslate' from partially initialized module 'frappe.translate' (most likely due to a circular import) (apps/frappe/frappe/translate.py)\r\n```\n", "before_files": [{"content": "# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors\n# License: MIT. See LICENSE\n\n# model __init__.py\nimport frappe\nfrom frappe import _, _lt\n\ndata_fieldtypes = (\n\t\"Currency\",\n\t\"Int\",\n\t\"Long Int\",\n\t\"Float\",\n\t\"Percent\",\n\t\"Check\",\n\t\"Small Text\",\n\t\"Long Text\",\n\t\"Code\",\n\t\"Text Editor\",\n\t\"Markdown Editor\",\n\t\"HTML Editor\",\n\t\"Date\",\n\t\"Datetime\",\n\t\"Time\",\n\t\"Text\",\n\t\"Data\",\n\t\"Link\",\n\t\"Dynamic Link\",\n\t\"Password\",\n\t\"Select\",\n\t\"Rating\",\n\t\"Read Only\",\n\t\"Attach\",\n\t\"Attach Image\",\n\t\"Signature\",\n\t\"Color\",\n\t\"Barcode\",\n\t\"Geolocation\",\n\t\"Duration\",\n\t\"Icon\",\n\t\"Phone\",\n\t\"Autocomplete\",\n\t\"JSON\",\n)\n\nfloat_like_fields = {\"Float\", \"Currency\", \"Percent\"}\ndatetime_fields = {\"Datetime\", \"Date\", \"Time\"}\n\nattachment_fieldtypes = (\n\t\"Attach\",\n\t\"Attach Image\",\n)\n\nno_value_fields = (\n\t\"Section Break\",\n\t\"Column Break\",\n\t\"Tab Break\",\n\t\"HTML\",\n\t\"Table\",\n\t\"Table MultiSelect\",\n\t\"Button\",\n\t\"Image\",\n\t\"Fold\",\n\t\"Heading\",\n)\n\ndisplay_fieldtypes = (\n\t\"Section Break\",\n\t\"Column Break\",\n\t\"Tab Break\",\n\t\"HTML\",\n\t\"Button\",\n\t\"Image\",\n\t\"Fold\",\n\t\"Heading\",\n)\n\nnumeric_fieldtypes = (\"Currency\", \"Int\", \"Long Int\", \"Float\", \"Percent\", \"Check\")\n\ndata_field_options = (\"Email\", \"Name\", \"Phone\", \"URL\", \"Barcode\")\n\ndefault_fields = (\n\t\"doctype\",\n\t\"name\",\n\t\"owner\",\n\t\"creation\",\n\t\"modified\",\n\t\"modified_by\",\n\t\"docstatus\",\n\t\"idx\",\n)\n\nchild_table_fields = (\"parent\", \"parentfield\", \"parenttype\")\n\noptional_fields = (\"_user_tags\", \"_comments\", \"_assign\", \"_liked_by\", \"_seen\")\n\ntable_fields = (\"Table\", \"Table MultiSelect\")\n\ncore_doctypes_list = (\n\t\"DefaultValue\",\n\t\"DocType\",\n\t\"DocField\",\n\t\"DocPerm\",\n\t\"DocType Action\",\n\t\"DocType Link\",\n\t\"User\",\n\t\"Role\",\n\t\"Has Role\",\n\t\"Page\",\n\t\"Module Def\",\n\t\"Print 
Format\",\n\t\"Report\",\n\t\"Customize Form\",\n\t\"Customize Form Field\",\n\t\"Property Setter\",\n\t\"Custom Field\",\n\t\"Client Script\",\n)\n\nlog_types = (\n\t\"Version\",\n\t\"Error Log\",\n\t\"Scheduled Job Log\",\n\t\"Event Sync Log\",\n\t\"Event Update Log\",\n\t\"Access Log\",\n\t\"View Log\",\n\t\"Activity Log\",\n\t\"Energy Point Log\",\n\t\"Notification Log\",\n\t\"Email Queue\",\n\t\"DocShare\",\n\t\"Document Follow\",\n\t\"Console Log\",\n)\n\nstd_fields = [\n\t{\"fieldname\": \"name\", \"fieldtype\": \"Link\", \"label\": _lt(\"ID\")},\n\t{\"fieldname\": \"owner\", \"fieldtype\": \"Link\", \"label\": _lt(\"Created By\"), \"options\": \"User\"},\n\t{\"fieldname\": \"idx\", \"fieldtype\": \"Int\", \"label\": _lt(\"Index\")},\n\t{\"fieldname\": \"creation\", \"fieldtype\": \"Datetime\", \"label\": _lt(\"Created On\")},\n\t{\"fieldname\": \"modified\", \"fieldtype\": \"Datetime\", \"label\": _lt(\"Last Updated On\")},\n\t{\n\t\t\"fieldname\": \"modified_by\",\n\t\t\"fieldtype\": \"Link\",\n\t\t\"label\": _lt(\"Last Updated By\"),\n\t\t\"options\": \"User\",\n\t},\n\t{\"fieldname\": \"_user_tags\", \"fieldtype\": \"Data\", \"label\": _lt(\"Tags\")},\n\t{\"fieldname\": \"_liked_by\", \"fieldtype\": \"Data\", \"label\": _lt(\"Liked By\")},\n\t{\"fieldname\": \"_comments\", \"fieldtype\": \"Text\", \"label\": _lt(\"Comments\")},\n\t{\"fieldname\": \"_assign\", \"fieldtype\": \"Text\", \"label\": _lt(\"Assigned To\")},\n\t{\"fieldname\": \"docstatus\", \"fieldtype\": \"Int\", \"label\": _lt(\"Document Status\")},\n]\n\n\ndef delete_fields(args_dict, delete=0):\n\t\"\"\"\n\tDelete a field.\n\t* Deletes record from `tabDocField`\n\t* If not single doctype: Drops column from table\n\t* If single, deletes record from `tabSingles`\n\targs_dict = { dt: [field names] }\n\t\"\"\"\n\timport frappe.utils\n\n\tfor dt in args_dict:\n\t\tfields = args_dict[dt]\n\t\tif not fields:\n\t\t\tcontinue\n\n\t\tfrappe.db.delete(\n\t\t\t\"DocField\",\n\t\t\t{\n\t\t\t\t\"parent\": dt,\n\t\t\t\t\"fieldname\": (\"in\", fields),\n\t\t\t},\n\t\t)\n\n\t\t# Delete the data/column only if delete is specified\n\t\tif not delete:\n\t\t\tcontinue\n\n\t\tif frappe.db.get_value(\"DocType\", dt, \"issingle\"):\n\t\t\tfrappe.db.delete(\n\t\t\t\t\"Singles\",\n\t\t\t\t{\n\t\t\t\t\t\"doctype\": dt,\n\t\t\t\t\t\"field\": (\"in\", fields),\n\t\t\t\t},\n\t\t\t)\n\t\telse:\n\t\t\texisting_fields = frappe.db.describe(dt)\n\t\t\texisting_fields = existing_fields and [e[0] for e in existing_fields] or []\n\t\t\tfields_need_to_delete = set(fields) & set(existing_fields)\n\t\t\tif not fields_need_to_delete:\n\t\t\t\tcontinue\n\n\t\t\tif frappe.db.db_type == \"mariadb\":\n\t\t\t\t# mariadb implicitly commits before DDL, make it explicit\n\t\t\t\tfrappe.db.commit()\n\n\t\t\tquery = \"ALTER TABLE `tab%s` \" % dt + \", \".join(\n\t\t\t\t\"DROP COLUMN `%s`\" % f for f in fields_need_to_delete\n\t\t\t)\n\t\t\tfrappe.db.sql(query)\n\n\t\tif frappe.db.db_type == \"postgres\":\n\t\t\t# commit the results to db\n\t\t\tfrappe.db.commit()\n\n\ndef get_permitted_fields(\n\tdoctype: str,\n\tparenttype: str | None = None,\n\tuser: str | None = None,\n\tpermission_type: str | None = None,\n\t*,\n\tignore_virtual=False,\n) -> list[str]:\n\tmeta = frappe.get_meta(doctype)\n\tvalid_columns = meta.get_valid_columns()\n\n\tif doctype in core_doctypes_list:\n\t\treturn valid_columns\n\n\t# DocType has only fields of type Table (Table, Table MultiSelect)\n\tif set(valid_columns).issubset(default_fields):\n\t\treturn valid_columns\n\n\tif 
permission_type is None:\n\t\tpermission_type = \"select\" if frappe.only_has_select_perm(doctype, user=user) else \"read\"\n\n\tmeta_fields = meta.default_fields.copy()\n\toptional_meta_fields = [x for x in optional_fields if x in valid_columns]\n\n\tif permitted_fields := meta.get_permitted_fieldnames(\n\t\tparenttype=parenttype,\n\t\tuser=user,\n\t\tpermission_type=permission_type,\n\t\twith_virtual_fields=not ignore_virtual,\n\t):\n\t\tif permission_type == \"select\":\n\t\t\treturn permitted_fields\n\n\t\tif meta.istable:\n\t\t\tmeta_fields.extend(child_table_fields)\n\n\t\treturn meta_fields + permitted_fields + optional_meta_fields\n\n\treturn meta_fields + optional_meta_fields\n\n\ndef is_default_field(fieldname: str) -> bool:\n\treturn fieldname in default_fields\n", "path": "frappe/model/__init__.py"}], "after_files": [{"content": "# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors\n# License: MIT. See LICENSE\n\n# model __init__.py\nimport frappe\nfrom frappe import _, _lt\n\ndata_fieldtypes = (\n\t\"Currency\",\n\t\"Int\",\n\t\"Long Int\",\n\t\"Float\",\n\t\"Percent\",\n\t\"Check\",\n\t\"Small Text\",\n\t\"Long Text\",\n\t\"Code\",\n\t\"Text Editor\",\n\t\"Markdown Editor\",\n\t\"HTML Editor\",\n\t\"Date\",\n\t\"Datetime\",\n\t\"Time\",\n\t\"Text\",\n\t\"Data\",\n\t\"Link\",\n\t\"Dynamic Link\",\n\t\"Password\",\n\t\"Select\",\n\t\"Rating\",\n\t\"Read Only\",\n\t\"Attach\",\n\t\"Attach Image\",\n\t\"Signature\",\n\t\"Color\",\n\t\"Barcode\",\n\t\"Geolocation\",\n\t\"Duration\",\n\t\"Icon\",\n\t\"Phone\",\n\t\"Autocomplete\",\n\t\"JSON\",\n)\n\nfloat_like_fields = {\"Float\", \"Currency\", \"Percent\"}\ndatetime_fields = {\"Datetime\", \"Date\", \"Time\"}\n\nattachment_fieldtypes = (\n\t\"Attach\",\n\t\"Attach Image\",\n)\n\nno_value_fields = (\n\t\"Section Break\",\n\t\"Column Break\",\n\t\"Tab Break\",\n\t\"HTML\",\n\t\"Table\",\n\t\"Table MultiSelect\",\n\t\"Button\",\n\t\"Image\",\n\t\"Fold\",\n\t\"Heading\",\n)\n\ndisplay_fieldtypes = (\n\t\"Section Break\",\n\t\"Column Break\",\n\t\"Tab Break\",\n\t\"HTML\",\n\t\"Button\",\n\t\"Image\",\n\t\"Fold\",\n\t\"Heading\",\n)\n\nnumeric_fieldtypes = (\"Currency\", \"Int\", \"Long Int\", \"Float\", \"Percent\", \"Check\")\n\ndata_field_options = (\"Email\", \"Name\", \"Phone\", \"URL\", \"Barcode\")\n\ndefault_fields = (\n\t\"doctype\",\n\t\"name\",\n\t\"owner\",\n\t\"creation\",\n\t\"modified\",\n\t\"modified_by\",\n\t\"docstatus\",\n\t\"idx\",\n)\n\nchild_table_fields = (\"parent\", \"parentfield\", \"parenttype\")\n\noptional_fields = (\"_user_tags\", \"_comments\", \"_assign\", \"_liked_by\", \"_seen\")\n\ntable_fields = (\"Table\", \"Table MultiSelect\")\n\ncore_doctypes_list = (\n\t\"DefaultValue\",\n\t\"DocType\",\n\t\"DocField\",\n\t\"DocPerm\",\n\t\"DocType Action\",\n\t\"DocType Link\",\n\t\"User\",\n\t\"Role\",\n\t\"Has Role\",\n\t\"Page\",\n\t\"Module Def\",\n\t\"Print Format\",\n\t\"Report\",\n\t\"Customize Form\",\n\t\"Customize Form Field\",\n\t\"Property Setter\",\n\t\"Custom Field\",\n\t\"Client Script\",\n)\n\nlog_types = (\n\t\"Version\",\n\t\"Error Log\",\n\t\"Scheduled Job Log\",\n\t\"Event Sync Log\",\n\t\"Event Update Log\",\n\t\"Access Log\",\n\t\"View Log\",\n\t\"Activity Log\",\n\t\"Energy Point Log\",\n\t\"Notification Log\",\n\t\"Email Queue\",\n\t\"DocShare\",\n\t\"Document Follow\",\n\t\"Console Log\",\n)\n\nstd_fields = [\n\t{\"fieldname\": \"name\", \"fieldtype\": \"Link\", \"label\": \"ID\"},\n\t{\"fieldname\": \"owner\", \"fieldtype\": \"Link\", \"label\": \"Created By\", 
\"options\": \"User\"},\n\t{\"fieldname\": \"idx\", \"fieldtype\": \"Int\", \"label\": \"Index\"},\n\t{\"fieldname\": \"creation\", \"fieldtype\": \"Datetime\", \"label\": \"Created On\"},\n\t{\"fieldname\": \"modified\", \"fieldtype\": \"Datetime\", \"label\": \"Last Updated On\"},\n\t{\n\t\t\"fieldname\": \"modified_by\",\n\t\t\"fieldtype\": \"Link\",\n\t\t\"label\": \"Last Updated By\",\n\t\t\"options\": \"User\",\n\t},\n\t{\"fieldname\": \"_user_tags\", \"fieldtype\": \"Data\", \"label\": \"Tags\"},\n\t{\"fieldname\": \"_liked_by\", \"fieldtype\": \"Data\", \"label\": \"Liked By\"},\n\t{\"fieldname\": \"_comments\", \"fieldtype\": \"Text\", \"label\": \"Comments\"},\n\t{\"fieldname\": \"_assign\", \"fieldtype\": \"Text\", \"label\": \"Assigned To\"},\n\t{\"fieldname\": \"docstatus\", \"fieldtype\": \"Int\", \"label\": \"Document Status\"},\n]\n\n\ndef delete_fields(args_dict, delete=0):\n\t\"\"\"\n\tDelete a field.\n\t* Deletes record from `tabDocField`\n\t* If not single doctype: Drops column from table\n\t* If single, deletes record from `tabSingles`\n\targs_dict = { dt: [field names] }\n\t\"\"\"\n\timport frappe.utils\n\n\tfor dt in args_dict:\n\t\tfields = args_dict[dt]\n\t\tif not fields:\n\t\t\tcontinue\n\n\t\tfrappe.db.delete(\n\t\t\t\"DocField\",\n\t\t\t{\n\t\t\t\t\"parent\": dt,\n\t\t\t\t\"fieldname\": (\"in\", fields),\n\t\t\t},\n\t\t)\n\n\t\t# Delete the data/column only if delete is specified\n\t\tif not delete:\n\t\t\tcontinue\n\n\t\tif frappe.db.get_value(\"DocType\", dt, \"issingle\"):\n\t\t\tfrappe.db.delete(\n\t\t\t\t\"Singles\",\n\t\t\t\t{\n\t\t\t\t\t\"doctype\": dt,\n\t\t\t\t\t\"field\": (\"in\", fields),\n\t\t\t\t},\n\t\t\t)\n\t\telse:\n\t\t\texisting_fields = frappe.db.describe(dt)\n\t\t\texisting_fields = existing_fields and [e[0] for e in existing_fields] or []\n\t\t\tfields_need_to_delete = set(fields) & set(existing_fields)\n\t\t\tif not fields_need_to_delete:\n\t\t\t\tcontinue\n\n\t\t\tif frappe.db.db_type == \"mariadb\":\n\t\t\t\t# mariadb implicitly commits before DDL, make it explicit\n\t\t\t\tfrappe.db.commit()\n\n\t\t\tquery = \"ALTER TABLE `tab%s` \" % dt + \", \".join(\n\t\t\t\t\"DROP COLUMN `%s`\" % f for f in fields_need_to_delete\n\t\t\t)\n\t\t\tfrappe.db.sql(query)\n\n\t\tif frappe.db.db_type == \"postgres\":\n\t\t\t# commit the results to db\n\t\t\tfrappe.db.commit()\n\n\ndef get_permitted_fields(\n\tdoctype: str,\n\tparenttype: str | None = None,\n\tuser: str | None = None,\n\tpermission_type: str | None = None,\n\t*,\n\tignore_virtual=False,\n) -> list[str]:\n\tmeta = frappe.get_meta(doctype)\n\tvalid_columns = meta.get_valid_columns()\n\n\tif doctype in core_doctypes_list:\n\t\treturn valid_columns\n\n\t# DocType has only fields of type Table (Table, Table MultiSelect)\n\tif set(valid_columns).issubset(default_fields):\n\t\treturn valid_columns\n\n\tif permission_type is None:\n\t\tpermission_type = \"select\" if frappe.only_has_select_perm(doctype, user=user) else \"read\"\n\n\tmeta_fields = meta.default_fields.copy()\n\toptional_meta_fields = [x for x in optional_fields if x in valid_columns]\n\n\tif permitted_fields := meta.get_permitted_fieldnames(\n\t\tparenttype=parenttype,\n\t\tuser=user,\n\t\tpermission_type=permission_type,\n\t\twith_virtual_fields=not ignore_virtual,\n\t):\n\t\tif permission_type == \"select\":\n\t\t\treturn permitted_fields\n\n\t\tif meta.istable:\n\t\t\tmeta_fields.extend(child_table_fields)\n\n\t\treturn meta_fields + permitted_fields + optional_meta_fields\n\n\treturn meta_fields + optional_meta_fields\n\n\ndef 
is_default_field(fieldname: str) -> bool:\n\treturn fieldname in default_fields\n", "path": "frappe/model/__init__.py"}]} | 3,443 | 589 |
gh_patches_debug_2715 | rasdani/github-patches | git_diff | dotkom__onlineweb4-810 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Active feedbacks bug
Minor bug where feedback relations in which everyone has answered do not get set to inactive.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/feedback/mommy.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import datetime
3 import socket
4 import locale
5 import logging
6
7 from django.utils import timezone
8 from django.contrib.contenttypes.models import ContentType
9 from django.conf import settings
10 from django.core.mail import EmailMessage
11
12 from apps.events.models import Event, AttendanceEvent, Attendee
13 from apps.feedback.models import FeedbackRelation
14 from apps.marks.models import Mark, UserEntry
15 from apps.mommy import Task, schedule
16
17 class FeedbackMail(Task):
18
19 @staticmethod
20 def run():
21 logger = logging.getLogger("feedback")
22 logger.info("Feedback job started")
23 locale.setlocale(locale.LC_ALL, "nb_NO.UTF-8")
24 active_feedbacks = FeedbackRelation.objects.filter(active=True)
25
26 for feedback in active_feedbacks:
27 message = FeedbackMail.generate_message(feedback, logger)
28
29 if message.send:
30 EmailMessage(message.subject, unicode(message), message.committee_mail, [], message.attended_mails).send()
31 logger.info('Emails sent to: ' + str(message.attended_mails))
32
33 if message.results_message:
34 EmailMessage("Feedback resultat", message.results_message,"[email protected]", [message.committee_mail]).send()
35 logger.info('Results mail sent to :' + message.committee_mail)
36
37 @staticmethod
38 def generate_message(feedback, logger):
39 logger.info('Processing: "' + feedback.get_title() + '"')
40 today = timezone.now().date()
41 yesterday = today + datetime.timedelta(days=-1)
42 not_responded = FeedbackMail.get_users(feedback)
43 logger.info('Not responded: ' + str(not_responded))
44 message = Message()
45
46 #return if everyone has answered
47 if not not_responded:
48 logger.info('Everyone has answered')
49 return message
50
51
52 message.attended_mails = FeedbackMail.get_user_mails(not_responded)
53
54 message.committee_mail = FeedbackMail.get_committee_email(feedback)
55 deadline = feedback.deadline.strftime("%d. %B").encode("utf-8")
56 title = str(FeedbackMail.get_title(feedback)).encode("utf-8")
57 message.link = str(u"\n\n" + FeedbackMail.get_link(feedback)).encode("utf-8")
58 results_link = str(FeedbackMail.get_link(feedback) + "results").encode("utf-8")
59
60 start_date = feedback.get_start_date()
61 deadline_diff = (feedback.deadline - today).days
62
63 message.subject = u"Feedback: %s" % (title)
64 message.intro = u"Hei, vi ønsker tilbakemelding på \"%s\"" % (title)
65 message.mark = FeedbackMail.mark_message(feedback)
66 message.contact = u"\n\nEventuelle spørsmål sendes til %s " % (message.committee_mail)
67 message.start_date = FeedbackMail.start_date_message(start_date)
68
69 if deadline_diff < 0: #Deadline passed
70 feedback.active = False
71 feedback.save()
72 logger.info("Deadline passed feedback set to inactive")
73
74 if feedback.gives_mark:
75 FeedbackMail.set_marks(title, not_responded)
76
77 message.intro = u"Fristen for å svare på \"%s\" har gått ut og du har fått en prikk." % (title)
78 message.mark = ""
79 message.start_date = ""
80 message.link = ""
81 message.send = True
82
83 logger.info("Marks given to: " + str(not_responded))
84
85 elif deadline_diff < 1: #Last warning
86 message.deadline = u"\n\nI dag innen 23:59 er siste frist til å svare på skjemaet."
87
88 message.results_message = u"Hei, siste purremail på feedback skjema har blitt sendt til alle " \
89 u"gjenværende deltagere på \"%s\".\nDere kan se feedback-resultatene på:\n%s\n" % \
90 (title, results_link)
91 message.send = True
92 logger.info("Last warning message generated")
93 elif deadline_diff < 3 and feedback.gives_mark: # 3 days from the deadline
94 message.deadline = u"\n\nFristen for å svare på skjema er %s innen kl 23:59." % (deadline)
95 message.send = True
96 logger.info("Warning message generated")
97 elif FeedbackMail.send_first_notification(feedback): #Day after the event or feedback creation
98 message.deadline = u"\n\nFristen for å svare på skjema er %s innen kl 23:59." % (deadline)
99
100 message.results_message = u"Hei, nå har feedbackmail blitt sendt til alle " \
101 u"deltagere på \"%s\".\nDere kan se feedback-resultatene på:\n%s\n" % \
102 (title, results_link)
103 message.send = True
104 logger.info("First message generated")
105 else:
106 logger.info("No message generated")
107
108 return message
109
110 @staticmethod
111 def send_first_notification(feedback):
112 start_date = FeedbackMail.start_date(feedback)
113
114 #The object that requires feedback doesnt have a start date
115 if not start_date:
116 yesterday = timezone.now().date() - datetime.timedelta(days=1)
117 if feedback.created_date == yesterday.date():
118 #Send the first notification the day after the feedback relation was created
119 return True
120 else:
121 day_after_event = start_date + datetime.timedelta(1)
122 if day_after_event == datetime.datetime.date(timezone.now()):
123 #Send the first notification the day after the event
124 return True
125 return False
126
127 @staticmethod
128 def start_date(feedback):
129 start_date = feedback.get_start_date()
130
131 if start_date:
132 return start_date.date()
133 else:
134 return False
135
136 @staticmethod
137 def start_date_message(start_date):
138 #If the object(event) doesnt have start date it will send
139 #the first notification the day after the feedbackrelation is made
140 if start_date:
141 start_date_string = start_date.strftime("%d. %B").encode("utf-8")
142 message_start_date = u"som du var med på den %s:" % (start_date_string)
143 else:
144 message_start_date = ""
145
146 return message_start_date
147
148 @staticmethod
149 def get_users(feedback):
150 return feedback.get_slackers()
151
152 @staticmethod
153 def get_user_mails(not_responded):
154 return [user.email for user in not_responded]
155
156 @staticmethod
157 def get_link(feedback):
158 return str(settings.BASE_URL + feedback.get_absolute_url())
159
160 @staticmethod
161 def get_title(feedback):
162 return feedback.get_title()
163
164 @staticmethod
165 def get_committee_email(feedback):
166 return feedback.get_email()
167
168 @staticmethod
169 def mark_message(feedback):
170 if feedback.gives_mark:
171 return u"\nVær oppmerksom på at du får prikk dersom du ikke svarer " \
172 u"på disse spørsmålene innen fristen."
173 else:
174 return ""
175
176 @staticmethod
177 def set_marks(title, not_responded):
178 mark = Mark()
179 mark.title = u"Manglende tilbakemelding på %s" % (title)
180 mark.category = 4 #Missed feedback
181 mark.description = u"Du har fått en prikk fordi du ikke har levert tilbakemelding."
182 mark.save()
183
184 for user in not_responded:
185 user_entry = UserEntry()
186 user_entry.user = user
187 user_entry.mark = mark
188 user_entry.save()
189
190 class Message():
191 subject = ""
192 intro = ""
193 start_date = ""
194 deadline = ""
195 mark = ""
196 contact = ""
197 link = ""
198 send = False
199 end = u"\n\nMvh\nLinjeforeningen Online"
200 results_message = False
201
202 committee_mail = ""
203 attended_mails = False
204
205
206 def __unicode__(self):
207 message = "%s %s %s %s %s %s %s" % (
208 self.intro,
209 self.start_date,
210 self.link,
211 self.deadline,
212 self.mark,
213 self.contact,
214 self.end)
215 return message
216
217 schedule.register(FeedbackMail, day_of_week='mon-sun', hour=8, minute=00)
218
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/feedback/mommy.py b/apps/feedback/mommy.py
--- a/apps/feedback/mommy.py
+++ b/apps/feedback/mommy.py
@@ -45,7 +45,10 @@
#return if everyone has answered
if not not_responded:
+ feedback.active = False
+ feedback.save()
logger.info('Everyone has answered')
+ logger.info('Feedback set to innactive')
return message
| {"golden_diff": "diff --git a/apps/feedback/mommy.py b/apps/feedback/mommy.py\n--- a/apps/feedback/mommy.py\n+++ b/apps/feedback/mommy.py\n@@ -45,7 +45,10 @@\n \n #return if everyone has answered\n if not not_responded:\n+ feedback.active = False\n+ feedback.save()\n logger.info('Everyone has answered')\n+ logger.info('Feedback set to innactive')\n return message\n", "issue": "Active feedbacks bug\nMinor bug where feedbacks where everyone answers does not get set to inactive.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport datetime\nimport socket\nimport locale\nimport logging\n\nfrom django.utils import timezone\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.conf import settings\nfrom django.core.mail import EmailMessage\n\nfrom apps.events.models import Event, AttendanceEvent, Attendee\nfrom apps.feedback.models import FeedbackRelation\nfrom apps.marks.models import Mark, UserEntry\nfrom apps.mommy import Task, schedule\n\nclass FeedbackMail(Task):\n\n @staticmethod\n def run():\n logger = logging.getLogger(\"feedback\")\n logger.info(\"Feedback job started\")\n locale.setlocale(locale.LC_ALL, \"nb_NO.UTF-8\")\n active_feedbacks = FeedbackRelation.objects.filter(active=True)\n \n for feedback in active_feedbacks:\n message = FeedbackMail.generate_message(feedback, logger)\n\n if message.send:\n EmailMessage(message.subject, unicode(message), message.committee_mail, [], message.attended_mails).send()\n logger.info('Emails sent to: ' + str(message.attended_mails))\n\n if message.results_message:\n EmailMessage(\"Feedback resultat\", message.results_message,\"[email protected]\", [message.committee_mail]).send() \n logger.info('Results mail sent to :' + message.committee_mail)\n\n @staticmethod\n def generate_message(feedback, logger):\n logger.info('Processing: \"' + feedback.get_title() + '\"')\n today = timezone.now().date()\n yesterday = today + datetime.timedelta(days=-1)\n not_responded = FeedbackMail.get_users(feedback)\n logger.info('Not responded: ' + str(not_responded))\n message = Message()\n\n #return if everyone has answered\n if not not_responded:\n logger.info('Everyone has answered')\n return message\n\n \n message.attended_mails = FeedbackMail.get_user_mails(not_responded)\n\n message.committee_mail = FeedbackMail.get_committee_email(feedback)\n deadline = feedback.deadline.strftime(\"%d. 
%B\").encode(\"utf-8\")\n title = str(FeedbackMail.get_title(feedback)).encode(\"utf-8\")\n message.link = str(u\"\\n\\n\" + FeedbackMail.get_link(feedback)).encode(\"utf-8\")\n results_link = str(FeedbackMail.get_link(feedback) + \"results\").encode(\"utf-8\")\n \n start_date = feedback.get_start_date()\n deadline_diff = (feedback.deadline - today).days\n\n message.subject = u\"Feedback: %s\" % (title)\n message.intro = u\"Hei, vi \u00f8nsker tilbakemelding p\u00e5 \\\"%s\\\"\" % (title)\n message.mark = FeedbackMail.mark_message(feedback)\n message.contact = u\"\\n\\nEventuelle sp\u00f8rsm\u00e5l sendes til %s \" % (message.committee_mail)\n message.start_date = FeedbackMail.start_date_message(start_date)\n\n if deadline_diff < 0: #Deadline passed\n feedback.active = False\n feedback.save()\n logger.info(\"Deadline passed feedback set to inactive\")\n\n if feedback.gives_mark:\n FeedbackMail.set_marks(title, not_responded) \n \n message.intro = u\"Fristen for \u00e5 svare p\u00e5 \\\"%s\\\" har g\u00e5tt ut og du har f\u00e5tt en prikk.\" % (title)\n message.mark = \"\"\n message.start_date = \"\"\n message.link = \"\"\n message.send = True\n \n logger.info(\"Marks given to: \" + str(not_responded))\n\n elif deadline_diff < 1: #Last warning\n message.deadline = u\"\\n\\nI dag innen 23:59 er siste frist til \u00e5 svare p\u00e5 skjemaet.\"\n \n message.results_message = u\"Hei, siste purremail p\u00e5 feedback skjema har blitt sendt til alle \" \\\n u\"gjenv\u00e6rende deltagere p\u00e5 \\\"%s\\\".\\nDere kan se feedback-resultatene p\u00e5:\\n%s\\n\" % \\\n (title, results_link)\n message.send = True\n logger.info(\"Last warning message generated\")\n elif deadline_diff < 3 and feedback.gives_mark: # 3 days from the deadline\n message.deadline = u\"\\n\\nFristen for \u00e5 svare p\u00e5 skjema er %s innen kl 23:59.\" % (deadline)\n message.send = True\n logger.info(\"Warning message generated\")\n elif FeedbackMail.send_first_notification(feedback): #Day after the event or feedback creation \n message.deadline = u\"\\n\\nFristen for \u00e5 svare p\u00e5 skjema er %s innen kl 23:59.\" % (deadline)\n \n message.results_message = u\"Hei, n\u00e5 har feedbackmail blitt sendt til alle \" \\\n u\"deltagere p\u00e5 \\\"%s\\\".\\nDere kan se feedback-resultatene p\u00e5:\\n%s\\n\" % \\\n (title, results_link)\n message.send = True\n logger.info(\"First message generated\")\n else:\n logger.info(\"No message generated\")\n\n return message\n \n @staticmethod\n def send_first_notification(feedback):\n start_date = FeedbackMail.start_date(feedback)\n\n #The object that requires feedback doesnt have a start date\n if not start_date:\n yesterday = timezone.now().date() - datetime.timedelta(days=1)\n if feedback.created_date == yesterday.date():\n #Send the first notification the day after the feedback relation was created\n return True\n else:\n day_after_event = start_date + datetime.timedelta(1)\n if day_after_event == datetime.datetime.date(timezone.now()):\n #Send the first notification the day after the event\n return True\n return False\n\n @staticmethod\n def start_date(feedback):\n start_date = feedback.get_start_date()\n \n if start_date:\n return start_date.date()\n else:\n return False\n\n @staticmethod\n def start_date_message(start_date):\n #If the object(event) doesnt have start date it will send \n #the first notification the day after the feedbackrelation is made\n if start_date:\n start_date_string = start_date.strftime(\"%d. 
%B\").encode(\"utf-8\")\n message_start_date = u\"som du var med p\u00e5 den %s:\" % (start_date_string)\n else:\n message_start_date = \"\"\n \n return message_start_date \n\n @staticmethod\n def get_users(feedback):\n return feedback.get_slackers()\n\n @staticmethod\n def get_user_mails(not_responded):\n return [user.email for user in not_responded]\n\n @staticmethod\n def get_link(feedback):\n return str(settings.BASE_URL + feedback.get_absolute_url())\n\n @staticmethod\n def get_title(feedback):\n return feedback.get_title()\n\n @staticmethod\n def get_committee_email(feedback):\n return feedback.get_email()\n\n @staticmethod\n def mark_message(feedback):\n if feedback.gives_mark:\n return u\"\\nV\u00e6r oppmerksom p\u00e5 at du f\u00e5r prikk dersom du ikke svarer \" \\\n u\"p\u00e5 disse sp\u00f8rsm\u00e5lene innen fristen.\"\n else:\n return \"\"\n\n @staticmethod\n def set_marks(title, not_responded):\n mark = Mark()\n mark.title = u\"Manglende tilbakemelding p\u00e5 %s\" % (title)\n mark.category = 4 #Missed feedback\n mark.description = u\"Du har f\u00e5tt en prikk fordi du ikke har levert tilbakemelding.\"\n mark.save()\n \n for user in not_responded:\n user_entry = UserEntry()\n user_entry.user = user\n user_entry.mark = mark\n user_entry.save()\n \nclass Message():\n subject = \"\"\n intro = \"\"\n start_date = \"\"\n deadline = \"\"\n mark = \"\"\n contact = \"\"\n link = \"\"\n send = False\n end = u\"\\n\\nMvh\\nLinjeforeningen Online\"\n results_message = False\n\n committee_mail = \"\"\n attended_mails = False\n\n\n def __unicode__(self):\n message = \"%s %s %s %s %s %s %s\" % (\n self.intro, \n self.start_date, \n self.link, \n self.deadline, \n self.mark, \n self.contact, \n self.end)\n return message\n\nschedule.register(FeedbackMail, day_of_week='mon-sun', hour=8, minute=00)\n", "path": "apps/feedback/mommy.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport datetime\nimport socket\nimport locale\nimport logging\n\nfrom django.utils import timezone\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.conf import settings\nfrom django.core.mail import EmailMessage\n\nfrom apps.events.models import Event, AttendanceEvent, Attendee\nfrom apps.feedback.models import FeedbackRelation\nfrom apps.marks.models import Mark, UserEntry\nfrom apps.mommy import Task, schedule\n\nclass FeedbackMail(Task):\n\n @staticmethod\n def run():\n logger = logging.getLogger(\"feedback\")\n logger.info(\"Feedback job started\")\n locale.setlocale(locale.LC_ALL, \"nb_NO.UTF-8\")\n active_feedbacks = FeedbackRelation.objects.filter(active=True)\n \n for feedback in active_feedbacks:\n message = FeedbackMail.generate_message(feedback, logger)\n\n if message.send:\n EmailMessage(message.subject, unicode(message), message.committee_mail, [], message.attended_mails).send()\n logger.info('Emails sent to: ' + str(message.attended_mails))\n\n if message.results_message:\n EmailMessage(\"Feedback resultat\", message.results_message,\"[email protected]\", [message.committee_mail]).send() \n logger.info('Results mail sent to :' + message.committee_mail)\n\n @staticmethod\n def generate_message(feedback, logger):\n logger.info('Processing: \"' + feedback.get_title() + '\"')\n today = timezone.now().date()\n yesterday = today + datetime.timedelta(days=-1)\n not_responded = FeedbackMail.get_users(feedback)\n logger.info('Not responded: ' + str(not_responded))\n message = Message()\n\n #return if everyone has answered\n if not not_responded:\n feedback.active = False\n 
feedback.save()\n logger.info('Everyone has answered')\n logger.info('Feedback set to innactive')\n return message\n\n \n message.attended_mails = FeedbackMail.get_user_mails(not_responded)\n\n message.committee_mail = FeedbackMail.get_committee_email(feedback)\n deadline = feedback.deadline.strftime(\"%d. %B\").encode(\"utf-8\")\n title = str(FeedbackMail.get_title(feedback)).encode(\"utf-8\")\n message.link = str(u\"\\n\\n\" + FeedbackMail.get_link(feedback)).encode(\"utf-8\")\n results_link = str(FeedbackMail.get_link(feedback) + \"results\").encode(\"utf-8\")\n \n start_date = feedback.get_start_date()\n deadline_diff = (feedback.deadline - today).days\n\n message.subject = u\"Feedback: %s\" % (title)\n message.intro = u\"Hei, vi \u00f8nsker tilbakemelding p\u00e5 \\\"%s\\\"\" % (title)\n message.mark = FeedbackMail.mark_message(feedback)\n message.contact = u\"\\n\\nEventuelle sp\u00f8rsm\u00e5l sendes til %s \" % (message.committee_mail)\n message.start_date = FeedbackMail.start_date_message(start_date)\n\n if deadline_diff < 0: #Deadline passed\n feedback.active = False\n feedback.save()\n logger.info(\"Deadline passed feedback set to inactive\")\n\n if feedback.gives_mark:\n FeedbackMail.set_marks(title, not_responded) \n \n message.intro = u\"Fristen for \u00e5 svare p\u00e5 \\\"%s\\\" har g\u00e5tt ut og du har f\u00e5tt en prikk.\" % (title)\n message.mark = \"\"\n message.start_date = \"\"\n message.link = \"\"\n message.send = True\n \n logger.info(\"Marks given to: \" + str(not_responded))\n\n elif deadline_diff < 1: #Last warning\n message.deadline = u\"\\n\\nI dag innen 23:59 er siste frist til \u00e5 svare p\u00e5 skjemaet.\"\n \n message.results_message = u\"Hei, siste purremail p\u00e5 feedback skjema har blitt sendt til alle \" \\\n u\"gjenv\u00e6rende deltagere p\u00e5 \\\"%s\\\".\\nDere kan se feedback-resultatene p\u00e5:\\n%s\\n\" % \\\n (title, results_link)\n message.send = True\n logger.info(\"Last warning message generated\")\n elif deadline_diff < 3 and feedback.gives_mark: # 3 days from the deadline\n message.deadline = u\"\\n\\nFristen for \u00e5 svare p\u00e5 skjema er %s innen kl 23:59.\" % (deadline)\n message.send = True\n logger.info(\"Warning message generated\")\n elif FeedbackMail.send_first_notification(feedback): #Day after the event or feedback creation \n message.deadline = u\"\\n\\nFristen for \u00e5 svare p\u00e5 skjema er %s innen kl 23:59.\" % (deadline)\n \n message.results_message = u\"Hei, n\u00e5 har feedbackmail blitt sendt til alle \" \\\n u\"deltagere p\u00e5 \\\"%s\\\".\\nDere kan se feedback-resultatene p\u00e5:\\n%s\\n\" % \\\n (title, results_link)\n message.send = True\n logger.info(\"First message generated\")\n else:\n logger.info(\"No message generated\")\n\n return message\n \n @staticmethod\n def send_first_notification(feedback):\n start_date = FeedbackMail.start_date(feedback)\n\n #The object that requires feedback doesnt have a start date\n if not start_date:\n yesterday = timezone.now().date() - datetime.timedelta(days=1)\n if feedback.created_date == yesterday.date():\n #Send the first notification the day after the feedback relation was created\n return True\n else:\n day_after_event = start_date + datetime.timedelta(1)\n if day_after_event == datetime.datetime.date(timezone.now()):\n #Send the first notification the day after the event\n return True\n return False\n\n @staticmethod\n def start_date(feedback):\n start_date = feedback.get_start_date()\n \n if start_date:\n return start_date.date()\n else:\n return 
False\n\n @staticmethod\n def start_date_message(start_date):\n #If the object(event) doesnt have start date it will send \n #the first notification the day after the feedbackrelation is made\n if start_date:\n start_date_string = start_date.strftime(\"%d. %B\").encode(\"utf-8\")\n message_start_date = u\"som du var med p\u00e5 den %s:\" % (start_date_string)\n else:\n message_start_date = \"\"\n \n return message_start_date \n\n @staticmethod\n def get_users(feedback):\n return feedback.get_slackers()\n\n @staticmethod\n def get_user_mails(not_responded):\n return [user.email for user in not_responded]\n\n @staticmethod\n def get_link(feedback):\n return str(settings.BASE_URL + feedback.get_absolute_url())\n\n @staticmethod\n def get_title(feedback):\n return feedback.get_title()\n\n @staticmethod\n def get_committee_email(feedback):\n return feedback.get_email()\n\n @staticmethod\n def mark_message(feedback):\n if feedback.gives_mark:\n return u\"\\nV\u00e6r oppmerksom p\u00e5 at du f\u00e5r prikk dersom du ikke svarer \" \\\n u\"p\u00e5 disse sp\u00f8rsm\u00e5lene innen fristen.\"\n else:\n return \"\"\n\n @staticmethod\n def set_marks(title, not_responded):\n mark = Mark()\n mark.title = u\"Manglende tilbakemelding p\u00e5 %s\" % (title)\n mark.category = 4 #Missed feedback\n mark.description = u\"Du har f\u00e5tt en prikk fordi du ikke har levert tilbakemelding.\"\n mark.save()\n \n for user in not_responded:\n user_entry = UserEntry()\n user_entry.user = user\n user_entry.mark = mark\n user_entry.save()\n \nclass Message():\n subject = \"\"\n intro = \"\"\n start_date = \"\"\n deadline = \"\"\n mark = \"\"\n contact = \"\"\n link = \"\"\n send = False\n end = u\"\\n\\nMvh\\nLinjeforeningen Online\"\n results_message = False\n\n committee_mail = \"\"\n attended_mails = False\n\n\n def __unicode__(self):\n message = \"%s %s %s %s %s %s %s\" % (\n self.intro, \n self.start_date, \n self.link, \n self.deadline, \n self.mark, \n self.contact, \n self.end)\n return message\n\nschedule.register(FeedbackMail, day_of_week='mon-sun', hour=8, minute=00)\n", "path": "apps/feedback/mommy.py"}]} | 2,659 | 105 |
gh_patches_debug_31710 | rasdani/github-patches | git_diff | pypa__pip-8474 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Way to clear items from pip cache of specified age.
I use pip a lot and had never considered anything about caching, and I find I have a 1.7 GB pip cache.
It would be useful if there was a command that could clear it of items beyond a specified age.
That way I could create a script to run every day to delete anything in pip that is older than a month (and to do the same for unrelated things like yarn etc.).
--- END ISSUE ---
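As a rough illustration of the age-based pruning the reporter is asking for, here is a minimal standalone sketch that could run from cron; the cache location (`~/.cache/pip/wheels`, the typical per-user path on Linux) and the 30-day cutoff are assumptions, and nothing here is part of pip itself:

```python
# Hypothetical pruning script -- not part of pip.
# Assumes the wheel cache lives under ~/.cache/pip/wheels (check `pip cache dir`).
import time
from pathlib import Path

MAX_AGE_DAYS = 30
cache_dir = Path.home() / ".cache" / "pip" / "wheels"
cutoff = time.time() - MAX_AGE_DAYS * 24 * 3600

for path in cache_dir.rglob("*.whl"):
    if path.is_file() and path.stat().st_mtime < cutoff:
        path.unlink()  # remove wheels not modified within the cutoff window
        print(f"removed {path}")
```

The same idea could later be folded into the `remove_cache_items` handler shown below, gated on a file-age check instead of (or in addition to) the filename pattern.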
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pip/_internal/commands/cache.py`
Content:
```
1 from __future__ import absolute_import
2
3 import logging
4 import os
5 import textwrap
6
7 import pip._internal.utils.filesystem as filesystem
8 from pip._internal.cli.base_command import Command
9 from pip._internal.cli.status_codes import ERROR, SUCCESS
10 from pip._internal.exceptions import CommandError, PipError
11 from pip._internal.utils.typing import MYPY_CHECK_RUNNING
12
13 if MYPY_CHECK_RUNNING:
14 from optparse import Values
15 from typing import Any, List
16
17
18 logger = logging.getLogger(__name__)
19
20
21 class CacheCommand(Command):
22 """
23 Inspect and manage pip's wheel cache.
24
25 Subcommands:
26
27 - dir: Show the cache directory.
28 - info: Show information about the cache.
29 - list: List filenames of packages stored in the cache.
30 - remove: Remove one or more package from the cache.
31 - purge: Remove all items from the cache.
32
33 ``<pattern>`` can be a glob expression or a package name.
34 """
35
36 ignore_require_venv = True
37 usage = """
38 %prog dir
39 %prog info
40 %prog list [<pattern>]
41 %prog remove <pattern>
42 %prog purge
43 """
44
45 def run(self, options, args):
46 # type: (Values, List[Any]) -> int
47 handlers = {
48 "dir": self.get_cache_dir,
49 "info": self.get_cache_info,
50 "list": self.list_cache_items,
51 "remove": self.remove_cache_items,
52 "purge": self.purge_cache,
53 }
54
55 if not options.cache_dir:
56 logger.error("pip cache commands can not "
57 "function since cache is disabled.")
58 return ERROR
59
60 # Determine action
61 if not args or args[0] not in handlers:
62 logger.error(
63 "Need an action (%s) to perform.",
64 ", ".join(sorted(handlers)),
65 )
66 return ERROR
67
68 action = args[0]
69
70 # Error handling happens here, not in the action-handlers.
71 try:
72 handlers[action](options, args[1:])
73 except PipError as e:
74 logger.error(e.args[0])
75 return ERROR
76
77 return SUCCESS
78
79 def get_cache_dir(self, options, args):
80 # type: (Values, List[Any]) -> None
81 if args:
82 raise CommandError('Too many arguments')
83
84 logger.info(options.cache_dir)
85
86 def get_cache_info(self, options, args):
87 # type: (Values, List[Any]) -> None
88 if args:
89 raise CommandError('Too many arguments')
90
91 num_packages = len(self._find_wheels(options, '*'))
92
93 cache_location = self._wheels_cache_dir(options)
94 cache_size = filesystem.format_directory_size(cache_location)
95
96 message = textwrap.dedent("""
97 Location: {location}
98 Size: {size}
99 Number of wheels: {package_count}
100 """).format(
101 location=cache_location,
102 package_count=num_packages,
103 size=cache_size,
104 ).strip()
105
106 logger.info(message)
107
108 def list_cache_items(self, options, args):
109 # type: (Values, List[Any]) -> None
110 if len(args) > 1:
111 raise CommandError('Too many arguments')
112
113 if args:
114 pattern = args[0]
115 else:
116 pattern = '*'
117
118 files = self._find_wheels(options, pattern)
119
120 if not files:
121 logger.info('Nothing cached.')
122 return
123
124 results = []
125 for filename in files:
126 wheel = os.path.basename(filename)
127 size = filesystem.format_file_size(filename)
128 results.append(' - {} ({})'.format(wheel, size))
129 logger.info('Cache contents:\n')
130 logger.info('\n'.join(sorted(results)))
131
132 def remove_cache_items(self, options, args):
133 # type: (Values, List[Any]) -> None
134 if len(args) > 1:
135 raise CommandError('Too many arguments')
136
137 if not args:
138 raise CommandError('Please provide a pattern')
139
140 files = self._find_wheels(options, args[0])
141 if not files:
142 raise CommandError('No matching packages')
143
144 for filename in files:
145 os.unlink(filename)
146 logger.debug('Removed %s', filename)
147 logger.info('Files removed: %s', len(files))
148
149 def purge_cache(self, options, args):
150 # type: (Values, List[Any]) -> None
151 if args:
152 raise CommandError('Too many arguments')
153
154 return self.remove_cache_items(options, ['*'])
155
156 def _wheels_cache_dir(self, options):
157 # type: (Values) -> str
158 return os.path.join(options.cache_dir, 'wheels')
159
160 def _find_wheels(self, options, pattern):
161 # type: (Values, str) -> List[str]
162 wheel_dir = self._wheels_cache_dir(options)
163
164 # The wheel filename format, as specified in PEP 427, is:
165 # {distribution}-{version}(-{build})?-{python}-{abi}-{platform}.whl
166 #
167 # Additionally, non-alphanumeric values in the distribution are
168 # normalized to underscores (_), meaning hyphens can never occur
169 # before `-{version}`.
170 #
171 # Given that information:
172 # - If the pattern we're given contains a hyphen (-), the user is
173 # providing at least the version. Thus, we can just append `*.whl`
174 # to match the rest of it.
175 # - If the pattern we're given doesn't contain a hyphen (-), the
176 # user is only providing the name. Thus, we append `-*.whl` to
177 # match the hyphen before the version, followed by anything else.
178 #
179 # PEP 427: https://www.python.org/dev/peps/pep-0427/
180 pattern = pattern + ("*.whl" if "-" in pattern else "-*.whl")
181
182 return filesystem.find_files(wheel_dir, pattern)
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/pip/_internal/commands/cache.py b/src/pip/_internal/commands/cache.py
--- a/src/pip/_internal/commands/cache.py
+++ b/src/pip/_internal/commands/cache.py
@@ -37,11 +37,25 @@
usage = """
%prog dir
%prog info
- %prog list [<pattern>]
+ %prog list [<pattern>] [--format=[human, abspath]]
%prog remove <pattern>
%prog purge
"""
+ def add_options(self):
+ # type: () -> None
+
+ self.cmd_opts.add_option(
+ '--format',
+ action='store',
+ dest='list_format',
+ default="human",
+ choices=('human', 'abspath'),
+ help="Select the output format among: human (default) or abspath"
+ )
+
+ self.parser.insert_option_group(0, self.cmd_opts)
+
def run(self, options, args):
# type: (Values, List[Any]) -> int
handlers = {
@@ -116,7 +130,13 @@
pattern = '*'
files = self._find_wheels(options, pattern)
+ if options.list_format == 'human':
+ self.format_for_human(files)
+ else:
+ self.format_for_abspath(files)
+ def format_for_human(self, files):
+ # type: (List[str]) -> None
if not files:
logger.info('Nothing cached.')
return
@@ -129,6 +149,17 @@
logger.info('Cache contents:\n')
logger.info('\n'.join(sorted(results)))
+ def format_for_abspath(self, files):
+ # type: (List[str]) -> None
+ if not files:
+ return
+
+ results = []
+ for filename in files:
+ results.append(filename)
+
+ logger.info('\n'.join(sorted(results)))
+
def remove_cache_items(self, options, args):
# type: (Values, List[Any]) -> None
if len(args) > 1:
| {"golden_diff": "diff --git a/src/pip/_internal/commands/cache.py b/src/pip/_internal/commands/cache.py\n--- a/src/pip/_internal/commands/cache.py\n+++ b/src/pip/_internal/commands/cache.py\n@@ -37,11 +37,25 @@\n usage = \"\"\"\n %prog dir\n %prog info\n- %prog list [<pattern>]\n+ %prog list [<pattern>] [--format=[human, abspath]]\n %prog remove <pattern>\n %prog purge\n \"\"\"\n \n+ def add_options(self):\n+ # type: () -> None\n+\n+ self.cmd_opts.add_option(\n+ '--format',\n+ action='store',\n+ dest='list_format',\n+ default=\"human\",\n+ choices=('human', 'abspath'),\n+ help=\"Select the output format among: human (default) or abspath\"\n+ )\n+\n+ self.parser.insert_option_group(0, self.cmd_opts)\n+\n def run(self, options, args):\n # type: (Values, List[Any]) -> int\n handlers = {\n@@ -116,7 +130,13 @@\n pattern = '*'\n \n files = self._find_wheels(options, pattern)\n+ if options.list_format == 'human':\n+ self.format_for_human(files)\n+ else:\n+ self.format_for_abspath(files)\n \n+ def format_for_human(self, files):\n+ # type: (List[str]) -> None\n if not files:\n logger.info('Nothing cached.')\n return\n@@ -129,6 +149,17 @@\n logger.info('Cache contents:\\n')\n logger.info('\\n'.join(sorted(results)))\n \n+ def format_for_abspath(self, files):\n+ # type: (List[str]) -> None\n+ if not files:\n+ return\n+\n+ results = []\n+ for filename in files:\n+ results.append(filename)\n+\n+ logger.info('\\n'.join(sorted(results)))\n+\n def remove_cache_items(self, options, args):\n # type: (Values, List[Any]) -> None\n if len(args) > 1:\n", "issue": "Way to clear items from pip cache of specified age.\nI use pip a lot and had never considered anything about caching, and find I have a 1.7gb pip cache.\r\n\r\nIt would be useful if there was a command that could clear it of items beyond a specified age.\r\n\r\nThat way I could could create a script to run every day to delete anything in pip that is older than a month (and to do the same for unrelated things like yarn etc).\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport logging\nimport os\nimport textwrap\n\nimport pip._internal.utils.filesystem as filesystem\nfrom pip._internal.cli.base_command import Command\nfrom pip._internal.cli.status_codes import ERROR, SUCCESS\nfrom pip._internal.exceptions import CommandError, PipError\nfrom pip._internal.utils.typing import MYPY_CHECK_RUNNING\n\nif MYPY_CHECK_RUNNING:\n from optparse import Values\n from typing import Any, List\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CacheCommand(Command):\n \"\"\"\n Inspect and manage pip's wheel cache.\n\n Subcommands:\n\n - dir: Show the cache directory.\n - info: Show information about the cache.\n - list: List filenames of packages stored in the cache.\n - remove: Remove one or more package from the cache.\n - purge: Remove all items from the cache.\n\n ``<pattern>`` can be a glob expression or a package name.\n \"\"\"\n\n ignore_require_venv = True\n usage = \"\"\"\n %prog dir\n %prog info\n %prog list [<pattern>]\n %prog remove <pattern>\n %prog purge\n \"\"\"\n\n def run(self, options, args):\n # type: (Values, List[Any]) -> int\n handlers = {\n \"dir\": self.get_cache_dir,\n \"info\": self.get_cache_info,\n \"list\": self.list_cache_items,\n \"remove\": self.remove_cache_items,\n \"purge\": self.purge_cache,\n }\n\n if not options.cache_dir:\n logger.error(\"pip cache commands can not \"\n \"function since cache is disabled.\")\n return ERROR\n\n # Determine action\n if not args or args[0] not in handlers:\n 
logger.error(\n \"Need an action (%s) to perform.\",\n \", \".join(sorted(handlers)),\n )\n return ERROR\n\n action = args[0]\n\n # Error handling happens here, not in the action-handlers.\n try:\n handlers[action](options, args[1:])\n except PipError as e:\n logger.error(e.args[0])\n return ERROR\n\n return SUCCESS\n\n def get_cache_dir(self, options, args):\n # type: (Values, List[Any]) -> None\n if args:\n raise CommandError('Too many arguments')\n\n logger.info(options.cache_dir)\n\n def get_cache_info(self, options, args):\n # type: (Values, List[Any]) -> None\n if args:\n raise CommandError('Too many arguments')\n\n num_packages = len(self._find_wheels(options, '*'))\n\n cache_location = self._wheels_cache_dir(options)\n cache_size = filesystem.format_directory_size(cache_location)\n\n message = textwrap.dedent(\"\"\"\n Location: {location}\n Size: {size}\n Number of wheels: {package_count}\n \"\"\").format(\n location=cache_location,\n package_count=num_packages,\n size=cache_size,\n ).strip()\n\n logger.info(message)\n\n def list_cache_items(self, options, args):\n # type: (Values, List[Any]) -> None\n if len(args) > 1:\n raise CommandError('Too many arguments')\n\n if args:\n pattern = args[0]\n else:\n pattern = '*'\n\n files = self._find_wheels(options, pattern)\n\n if not files:\n logger.info('Nothing cached.')\n return\n\n results = []\n for filename in files:\n wheel = os.path.basename(filename)\n size = filesystem.format_file_size(filename)\n results.append(' - {} ({})'.format(wheel, size))\n logger.info('Cache contents:\\n')\n logger.info('\\n'.join(sorted(results)))\n\n def remove_cache_items(self, options, args):\n # type: (Values, List[Any]) -> None\n if len(args) > 1:\n raise CommandError('Too many arguments')\n\n if not args:\n raise CommandError('Please provide a pattern')\n\n files = self._find_wheels(options, args[0])\n if not files:\n raise CommandError('No matching packages')\n\n for filename in files:\n os.unlink(filename)\n logger.debug('Removed %s', filename)\n logger.info('Files removed: %s', len(files))\n\n def purge_cache(self, options, args):\n # type: (Values, List[Any]) -> None\n if args:\n raise CommandError('Too many arguments')\n\n return self.remove_cache_items(options, ['*'])\n\n def _wheels_cache_dir(self, options):\n # type: (Values) -> str\n return os.path.join(options.cache_dir, 'wheels')\n\n def _find_wheels(self, options, pattern):\n # type: (Values, str) -> List[str]\n wheel_dir = self._wheels_cache_dir(options)\n\n # The wheel filename format, as specified in PEP 427, is:\n # {distribution}-{version}(-{build})?-{python}-{abi}-{platform}.whl\n #\n # Additionally, non-alphanumeric values in the distribution are\n # normalized to underscores (_), meaning hyphens can never occur\n # before `-{version}`.\n #\n # Given that information:\n # - If the pattern we're given contains a hyphen (-), the user is\n # providing at least the version. Thus, we can just append `*.whl`\n # to match the rest of it.\n # - If the pattern we're given doesn't contain a hyphen (-), the\n # user is only providing the name. 
Thus, we append `-*.whl` to\n # match the hyphen before the version, followed by anything else.\n #\n # PEP 427: https://www.python.org/dev/peps/pep-0427/\n pattern = pattern + (\"*.whl\" if \"-\" in pattern else \"-*.whl\")\n\n return filesystem.find_files(wheel_dir, pattern)\n", "path": "src/pip/_internal/commands/cache.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nimport logging\nimport os\nimport textwrap\n\nimport pip._internal.utils.filesystem as filesystem\nfrom pip._internal.cli.base_command import Command\nfrom pip._internal.cli.status_codes import ERROR, SUCCESS\nfrom pip._internal.exceptions import CommandError, PipError\nfrom pip._internal.utils.typing import MYPY_CHECK_RUNNING\n\nif MYPY_CHECK_RUNNING:\n from optparse import Values\n from typing import Any, List\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CacheCommand(Command):\n \"\"\"\n Inspect and manage pip's wheel cache.\n\n Subcommands:\n\n - dir: Show the cache directory.\n - info: Show information about the cache.\n - list: List filenames of packages stored in the cache.\n - remove: Remove one or more package from the cache.\n - purge: Remove all items from the cache.\n\n ``<pattern>`` can be a glob expression or a package name.\n \"\"\"\n\n ignore_require_venv = True\n usage = \"\"\"\n %prog dir\n %prog info\n %prog list [<pattern>] [--format=[human, abspath]]\n %prog remove <pattern>\n %prog purge\n \"\"\"\n\n def add_options(self):\n # type: () -> None\n\n self.cmd_opts.add_option(\n '--format',\n action='store',\n dest='list_format',\n default=\"human\",\n choices=('human', 'abspath'),\n help=\"Select the output format among: human (default) or abspath\"\n )\n\n self.parser.insert_option_group(0, self.cmd_opts)\n\n def run(self, options, args):\n # type: (Values, List[Any]) -> int\n handlers = {\n \"dir\": self.get_cache_dir,\n \"info\": self.get_cache_info,\n \"list\": self.list_cache_items,\n \"remove\": self.remove_cache_items,\n \"purge\": self.purge_cache,\n }\n\n if not options.cache_dir:\n logger.error(\"pip cache commands can not \"\n \"function since cache is disabled.\")\n return ERROR\n\n # Determine action\n if not args or args[0] not in handlers:\n logger.error(\n \"Need an action (%s) to perform.\",\n \", \".join(sorted(handlers)),\n )\n return ERROR\n\n action = args[0]\n\n # Error handling happens here, not in the action-handlers.\n try:\n handlers[action](options, args[1:])\n except PipError as e:\n logger.error(e.args[0])\n return ERROR\n\n return SUCCESS\n\n def get_cache_dir(self, options, args):\n # type: (Values, List[Any]) -> None\n if args:\n raise CommandError('Too many arguments')\n\n logger.info(options.cache_dir)\n\n def get_cache_info(self, options, args):\n # type: (Values, List[Any]) -> None\n if args:\n raise CommandError('Too many arguments')\n\n num_packages = len(self._find_wheels(options, '*'))\n\n cache_location = self._wheels_cache_dir(options)\n cache_size = filesystem.format_directory_size(cache_location)\n\n message = textwrap.dedent(\"\"\"\n Location: {location}\n Size: {size}\n Number of wheels: {package_count}\n \"\"\").format(\n location=cache_location,\n package_count=num_packages,\n size=cache_size,\n ).strip()\n\n logger.info(message)\n\n def list_cache_items(self, options, args):\n # type: (Values, List[Any]) -> None\n if len(args) > 1:\n raise CommandError('Too many arguments')\n\n if args:\n pattern = args[0]\n else:\n pattern = '*'\n\n files = self._find_wheels(options, pattern)\n if options.list_format == 'human':\n 
self.format_for_human(files)\n else:\n self.format_for_abspath(files)\n\n def format_for_human(self, files):\n # type: (List[str]) -> None\n if not files:\n logger.info('Nothing cached.')\n return\n\n results = []\n for filename in files:\n wheel = os.path.basename(filename)\n size = filesystem.format_file_size(filename)\n results.append(' - {} ({})'.format(wheel, size))\n logger.info('Cache contents:\\n')\n logger.info('\\n'.join(sorted(results)))\n\n def format_for_abspath(self, files):\n # type: (List[str]) -> None\n if not files:\n return\n\n results = []\n for filename in files:\n results.append(filename)\n\n logger.info('\\n'.join(sorted(results)))\n\n def remove_cache_items(self, options, args):\n # type: (Values, List[Any]) -> None\n if len(args) > 1:\n raise CommandError('Too many arguments')\n\n if not args:\n raise CommandError('Please provide a pattern')\n\n files = self._find_wheels(options, args[0])\n if not files:\n raise CommandError('No matching packages')\n\n for filename in files:\n os.unlink(filename)\n logger.debug('Removed %s', filename)\n logger.info('Files removed: %s', len(files))\n\n def purge_cache(self, options, args):\n # type: (Values, List[Any]) -> None\n if args:\n raise CommandError('Too many arguments')\n\n return self.remove_cache_items(options, ['*'])\n\n def _wheels_cache_dir(self, options):\n # type: (Values) -> str\n return os.path.join(options.cache_dir, 'wheels')\n\n def _find_wheels(self, options, pattern):\n # type: (Values, str) -> List[str]\n wheel_dir = self._wheels_cache_dir(options)\n\n # The wheel filename format, as specified in PEP 427, is:\n # {distribution}-{version}(-{build})?-{python}-{abi}-{platform}.whl\n #\n # Additionally, non-alphanumeric values in the distribution are\n # normalized to underscores (_), meaning hyphens can never occur\n # before `-{version}`.\n #\n # Given that information:\n # - If the pattern we're given contains a hyphen (-), the user is\n # providing at least the version. Thus, we can just append `*.whl`\n # to match the rest of it.\n # - If the pattern we're given doesn't contain a hyphen (-), the\n # user is only providing the name. Thus, we append `-*.whl` to\n # match the hyphen before the version, followed by anything else.\n #\n # PEP 427: https://www.python.org/dev/peps/pep-0427/\n pattern = pattern + (\"*.whl\" if \"-\" in pattern else \"-*.whl\")\n\n return filesystem.find_files(wheel_dir, pattern)\n", "path": "src/pip/_internal/commands/cache.py"}]} | 2,107 | 473 |
gh_patches_debug_416 | rasdani/github-patches | git_diff | automl__auto-sklearn-1361 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Check if test requirement `flaky` can be removed
We currently have a test dependency [flaky](https://pypi.org/project/flaky/) used to annotate a test `KernelPCAComponentTest::test_default_configuration_classify()`. This is the only place it's used.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- encoding: utf-8 -*-
2 import os
3 import sys
4 from setuptools import setup, find_packages
5
6
7 # Check if Auto-sklearn *could* run on the given system
8 if os.name != 'posix':
9 raise ValueError(
10 'Detected unsupported operating system: %s. Please check '
11 'the compability information of auto-sklearn: https://automl.github.io'
12 '/auto-sklearn/master/installation.html#windows-osx-compatibility' %
13 sys.platform
14 )
15
16 if sys.version_info < (3, 7):
17 raise ValueError(
18 'Unsupported Python version %d.%d.%d found. Auto-sklearn requires Python '
19 '3.7 or higher.' % (sys.version_info.major, sys.version_info.minor, sys.version_info.micro)
20 )
21
22 HERE = os.path.abspath(os.path.dirname(__file__))
23 with open(os.path.join(HERE, 'requirements.txt')) as fp:
24 install_reqs = [r.rstrip() for r in fp.readlines()
25 if not r.startswith('#') and not r.startswith('git+')]
26
27 extras_reqs={
28 "test": [
29 "pytest>=4.6",
30 "mypy",
31 "pytest-xdist",
32 "pytest-timeout",
33 "flaky",
34 "openml",
35 "pre-commit",
36 "pytest-cov",
37 ],
38 "examples": [
39 "matplotlib",
40 "jupyter",
41 "notebook",
42 "seaborn",
43 ],
44 "docs": [
45 "sphinx<4.3",
46 "sphinx-gallery",
47 "sphinx_bootstrap_theme",
48 "numpydoc",
49 "sphinx_toolbox",
50 "docutils==0.16"
51 ],
52 }
53
54 with open(os.path.join(HERE, 'autosklearn', '__version__.py')) as fh:
55 version = fh.readlines()[-1].split()[-1].strip("\"'")
56
57
58 with open(os.path.join(HERE, 'README.md')) as fh:
59 long_description = fh.read()
60
61
62 setup(
63 name='auto-sklearn',
64 author='Matthias Feurer',
65 author_email='[email protected]',
66 description='Automated machine learning.',
67 long_description=long_description,
68 long_description_content_type='text/markdown',
69 version=version,
70 packages=find_packages(exclude=['test', 'scripts', 'examples']),
71 extras_require=extras_reqs,
72 install_requires=install_reqs,
73 include_package_data=True,
74 license='BSD3',
75 platforms=['Linux'],
76 classifiers=[
77 "Environment :: Console",
78 "Intended Audience :: Developers",
79 "Intended Audience :: Education",
80 "Intended Audience :: Science/Research",
81 "Intended Audience :: Information Technology",
82 "License :: OSI Approved :: BSD License",
83 "Natural Language :: English",
84 "Operating System :: OS Independent",
85 "Topic :: Scientific/Engineering :: Artificial Intelligence",
86 "Topic :: Scientific/Engineering :: Information Analysis",
87 'Programming Language :: Python :: 3.7',
88 'Programming Language :: Python :: 3.8',
89 'Programming Language :: Python :: 3.9',
90 ],
91 python_requires='>=3.7',
92 url='https://automl.github.io/auto-sklearn',
93 )
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -30,7 +30,6 @@
"mypy",
"pytest-xdist",
"pytest-timeout",
- "flaky",
"openml",
"pre-commit",
"pytest-cov",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -30,7 +30,6 @@\n \"mypy\",\n \"pytest-xdist\",\n \"pytest-timeout\",\n- \"flaky\",\n \"openml\",\n \"pre-commit\",\n \"pytest-cov\",\n", "issue": "Check if test requirement `flaky` can be removed\nWe currently have a test dependancy [flaky](https://pypi.org/project/flaky/) used to annotate a test `KernelPCAComponentTest::test_default_configuration_classify()`. This is the only place it's used.\n", "before_files": [{"content": "# -*- encoding: utf-8 -*-\nimport os\nimport sys\nfrom setuptools import setup, find_packages\n\n\n# Check if Auto-sklearn *could* run on the given system\nif os.name != 'posix':\n raise ValueError(\n 'Detected unsupported operating system: %s. Please check '\n 'the compability information of auto-sklearn: https://automl.github.io'\n '/auto-sklearn/master/installation.html#windows-osx-compatibility' %\n sys.platform\n )\n\nif sys.version_info < (3, 7):\n raise ValueError(\n 'Unsupported Python version %d.%d.%d found. Auto-sklearn requires Python '\n '3.7 or higher.' % (sys.version_info.major, sys.version_info.minor, sys.version_info.micro)\n )\n\nHERE = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(HERE, 'requirements.txt')) as fp:\n install_reqs = [r.rstrip() for r in fp.readlines()\n if not r.startswith('#') and not r.startswith('git+')]\n\nextras_reqs={\n \"test\": [\n \"pytest>=4.6\",\n \"mypy\",\n \"pytest-xdist\",\n \"pytest-timeout\",\n \"flaky\",\n \"openml\",\n \"pre-commit\",\n \"pytest-cov\",\n ],\n \"examples\": [\n \"matplotlib\",\n \"jupyter\",\n \"notebook\",\n \"seaborn\",\n ],\n \"docs\": [\n \"sphinx<4.3\",\n \"sphinx-gallery\",\n \"sphinx_bootstrap_theme\",\n \"numpydoc\",\n \"sphinx_toolbox\",\n \"docutils==0.16\"\n ],\n}\n\nwith open(os.path.join(HERE, 'autosklearn', '__version__.py')) as fh:\n version = fh.readlines()[-1].split()[-1].strip(\"\\\"'\")\n\n\nwith open(os.path.join(HERE, 'README.md')) as fh:\n long_description = fh.read()\n\n\nsetup(\n name='auto-sklearn',\n author='Matthias Feurer',\n author_email='[email protected]',\n description='Automated machine learning.',\n long_description=long_description,\n long_description_content_type='text/markdown',\n version=version,\n packages=find_packages(exclude=['test', 'scripts', 'examples']),\n extras_require=extras_reqs,\n install_requires=install_reqs,\n include_package_data=True,\n license='BSD3',\n platforms=['Linux'],\n classifiers=[\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Information Technology\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n python_requires='>=3.7',\n url='https://automl.github.io/auto-sklearn',\n)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- encoding: utf-8 -*-\nimport os\nimport sys\nfrom setuptools import setup, find_packages\n\n\n# Check if Auto-sklearn *could* run on the given system\nif os.name != 'posix':\n raise ValueError(\n 'Detected unsupported operating system: %s. 
Please check '\n 'the compability information of auto-sklearn: https://automl.github.io'\n '/auto-sklearn/master/installation.html#windows-osx-compatibility' %\n sys.platform\n )\n\nif sys.version_info < (3, 7):\n raise ValueError(\n 'Unsupported Python version %d.%d.%d found. Auto-sklearn requires Python '\n '3.7 or higher.' % (sys.version_info.major, sys.version_info.minor, sys.version_info.micro)\n )\n\nHERE = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(HERE, 'requirements.txt')) as fp:\n install_reqs = [r.rstrip() for r in fp.readlines()\n if not r.startswith('#') and not r.startswith('git+')]\n\nextras_reqs={\n \"test\": [\n \"pytest>=4.6\",\n \"mypy\",\n \"pytest-xdist\",\n \"pytest-timeout\",\n \"openml\",\n \"pre-commit\",\n \"pytest-cov\",\n ],\n \"examples\": [\n \"matplotlib\",\n \"jupyter\",\n \"notebook\",\n \"seaborn\",\n ],\n \"docs\": [\n \"sphinx<4.3\",\n \"sphinx-gallery\",\n \"sphinx_bootstrap_theme\",\n \"numpydoc\",\n \"sphinx_toolbox\",\n \"docutils==0.16\"\n ],\n}\n\nwith open(os.path.join(HERE, 'autosklearn', '__version__.py')) as fh:\n version = fh.readlines()[-1].split()[-1].strip(\"\\\"'\")\n\n\nwith open(os.path.join(HERE, 'README.md')) as fh:\n long_description = fh.read()\n\n\nsetup(\n name='auto-sklearn',\n author='Matthias Feurer',\n author_email='[email protected]',\n description='Automated machine learning.',\n long_description=long_description,\n long_description_content_type='text/markdown',\n version=version,\n packages=find_packages(exclude=['test', 'scripts', 'examples']),\n extras_require=extras_reqs,\n install_requires=install_reqs,\n include_package_data=True,\n license='BSD3',\n platforms=['Linux'],\n classifiers=[\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Information Technology\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n python_requires='>=3.7',\n url='https://automl.github.io/auto-sklearn',\n)\n", "path": "setup.py"}]} | 1,186 | 71 |
gh_patches_debug_57395 | rasdani/github-patches | git_diff | translate__pootle-3380 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Core: drop MySQL dependence on MyISAM
Core depends on MyISAM at the moment because of low level features used for changeid tracking. We need to migrate that to a more general approach that works on InnoDB and other supported DB engines.
- [x] Make resources list work in all DB backends (#3539)
- [x] Switch revision counter to Redis (#3364)
- [x] Ensure tests run on InnoDB (#3777)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright 2008-2013 Zuza Software Foundation
5 # Copyright 2014 Evernote Corporation
6 #
7 # This file is part of Pootle.
8 #
9 # This program is free software; you can redistribute it and/or modify
10 # it under the terms of the GNU General Public License as published by
11 # the Free Software Foundation; either version 2 of the License, or
12 # (at your option) any later version.
13 #
14 # This program is distributed in the hope that it will be useful,
15 # but WITHOUT ANY WARRANTY; without even the implied warranty of
16 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
17 # GNU General Public License for more details.
18 #
19 # You should have received a copy of the GNU General Public License
20 # along with this program; if not, see <http://www.gnu.org/licenses/>.
21
22 import glob
23 import os
24 import re
25 import sys
26
27 from distutils import log
28 from distutils.command.build import build as DistutilsBuild
29 from distutils.errors import DistutilsOptionError
30
31 from setuptools import find_packages, setup
32 from setuptools.command.test import test as TestCommand
33
34 from pootle.__version__ import sver as pootle_version
35
36
37 def parse_requirements(file_name):
38 """Parses a pip requirements file and returns a list of packages.
39
40 Use the result of this function in the ``install_requires`` field.
41 Copied from cburgmer/pdfserver.
42 """
43 requirements = []
44 for line in open(file_name, 'r').read().split('\n'):
45 # Ignore comments, blank lines and included requirements files
46 if re.match(r'(\s*#)|(\s*$)|(-r .*$)', line):
47 continue
48
49 if re.match(r'\s*-e\s+', line):
50 requirements.append(re.sub(r'\s*-e\s+.*#egg=(.*)$', r'\1', line))
51 elif re.match(r'\s*-f\s+', line):
52 pass
53 else:
54 requirements.append(line)
55
56 return requirements
57
58
59 class PyTest(TestCommand):
60
61 def finalize_options(self):
62 TestCommand.finalize_options(self)
63 self.test_args = ['--tb=short', 'tests/']
64 self.test_suite = True
65
66 def run_tests(self):
67 #import here, cause outside the eggs aren't loaded
68 import pytest
69 errno = pytest.main(self.test_args)
70 sys.exit(errno)
71
72
73 class PootleBuildMo(DistutilsBuild):
74
75 description = "compile Gettext PO files into MO"
76 user_options = [
77 ('all', None,
78 "compile all language (don't use LINGUAS file)"),
79 ('lang=', 'l',
80 "specify a language to compile"),
81 ]
82 boolean_options = ['all']
83
84 po_path_base = os.path.join('pootle', 'locale')
85 _langs = []
86
87 def initialize_options(self):
88 self.all = False
89 self.lang = None
90
91 def finalize_options(self):
92 if self.all and self.lang is not None:
93 raise DistutilsOptionError(
94 "Can't use --all and --lang together"
95 )
96 if self.lang is not None:
97 self._langs = [self.lang]
98 elif self.all:
99 for lang in os.listdir(self.po_path_base):
100 if (os.path.isdir(os.path.join(self.po_path_base, lang)) and
101 lang != "templates"):
102 self._langs.append(lang)
103 else:
104 for lang in open(os.path.join('pootle', 'locale', 'LINGUAS')):
105 self._langs.append(lang.rstrip())
106
107 def build_mo(self):
108 """Compile .mo files from available .po files"""
109 import subprocess
110 import gettext
111 from translate.storage import factory
112
113 for lang in self._langs:
114 lang = lang.rstrip()
115
116 po_path = os.path.join('pootle', 'locale', lang)
117 mo_path = os.path.join('pootle', 'locale', lang, 'LC_MESSAGES')
118
119 if not os.path.exists(mo_path):
120 os.makedirs(mo_path)
121
122 for po, mo in (('pootle.po', 'django.mo'),
123 ('pootle_js.po', 'djangojs.mo')):
124 po_filename = os.path.join(po_path, po)
125 mo_filename = os.path.join(mo_path, mo)
126
127 if not os.path.exists(po_filename):
128 log.warn("%s: missing file %s", lang, po_filename)
129 continue
130
131 if not os.path.exists(mo_path):
132 os.makedirs(mo_path)
133
134 log.info("compiling %s", lang)
135 try:
136 subprocess.call([
137 'msgfmt', '--strict', '-o', mo_filename, po_filename],
138 stderr=subprocess.STDOUT)
139 except Exception as e:
140 log.warn("%s: skipping, running msgfmt failed: %s",
141 lang, e)
142
143 try:
144 store = factory.getobject(po_filename)
145 gettext.c2py(store.getheaderplural()[1])
146 except Exception:
147 log.warn("%s: invalid plural header in %s",
148 lang, po_filename)
149
150 def run(self):
151 self.build_mo()
152
153
154 setup(
155 name="Pootle",
156 version=pootle_version,
157
158 description="An online collaborative localization tool.",
159 long_description=open(
160 os.path.join(os.path.dirname(__file__), 'README.rst')
161 ).read(),
162
163 author="Translate",
164 author_email="[email protected]",
165 license="GNU General Public License (GPL)",
166 url="http://pootle.translatehouse.org",
167 download_url="http://sourceforge.net/projects/translate/files/Pootle/" + pootle_version,
168
169 install_requires=parse_requirements('requirements/base.txt'),
170 tests_require=parse_requirements('requirements/tests.txt'),
171
172 platforms=["any"],
173 classifiers=[
174 "Development Status :: 5 - Production/Stable",
175 "Environment :: Web Environment",
176 "Framework :: Django",
177 "Intended Audience :: Developers",
178 "Intended Audience :: End Users/Desktop",
179 "Intended Audience :: Information Technology",
180 "License :: OSI Approved :: GNU General Public License (GPL)",
181 "Operating System :: OS Independent",
182 "Operating System :: Microsoft :: Windows",
183 "Operating System :: Unix",
184 "Programming Language :: JavaScript",
185 "Programming Language :: Python",
186 "Topic :: Software Development :: Localization",
187 "Topic :: Text Processing :: Linguistic"
188 ],
189 zip_safe=False,
190 packages=find_packages(exclude=['deploy*']),
191 include_package_data=True,
192
193 entry_points={
194 'console_scripts': [
195 'pootle = pootle.runner:main',
196 ],
197 },
198 cmdclass={
199 'build_mo': PootleBuildMo,
200 'test': PyTest,
201 },
202 )
203
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -43,7 +43,7 @@
requirements = []
for line in open(file_name, 'r').read().split('\n'):
# Ignore comments, blank lines and included requirements files
- if re.match(r'(\s*#)|(\s*$)|(-r .*$)', line):
+ if re.match(r'(\s*#)|(\s*$)|((-r|--allow-external|--allow-unverified) .*$)', line):
continue
if re.match(r'\s*-e\s+', line):
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -43,7 +43,7 @@\n requirements = []\n for line in open(file_name, 'r').read().split('\\n'):\n # Ignore comments, blank lines and included requirements files\n- if re.match(r'(\\s*#)|(\\s*$)|(-r .*$)', line):\n+ if re.match(r'(\\s*#)|(\\s*$)|((-r|--allow-external|--allow-unverified) .*$)', line):\n continue\n \n if re.match(r'\\s*-e\\s+', line):\n", "issue": "Core: drop MySQL dependence on MyISAM\nCore depends on MyISAM at the moment because of low level features used for changeid tracking. We need to migrate that to a more general approach that works on InnoDB and other supported DB engines.\n- [x] Make resources list work in all DB backends (#3539)\n- [x] Switch revision counter to Redis (#3364)\n- [x] Ensure tests run on InnoDB (#3777)\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright 2008-2013 Zuza Software Foundation\n# Copyright 2014 Evernote Corporation\n#\n# This file is part of Pootle.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\nimport glob\nimport os\nimport re\nimport sys\n\nfrom distutils import log\nfrom distutils.command.build import build as DistutilsBuild\nfrom distutils.errors import DistutilsOptionError\n\nfrom setuptools import find_packages, setup\nfrom setuptools.command.test import test as TestCommand\n\nfrom pootle.__version__ import sver as pootle_version\n\n\ndef parse_requirements(file_name):\n \"\"\"Parses a pip requirements file and returns a list of packages.\n\n Use the result of this function in the ``install_requires`` field.\n Copied from cburgmer/pdfserver.\n \"\"\"\n requirements = []\n for line in open(file_name, 'r').read().split('\\n'):\n # Ignore comments, blank lines and included requirements files\n if re.match(r'(\\s*#)|(\\s*$)|(-r .*$)', line):\n continue\n\n if re.match(r'\\s*-e\\s+', line):\n requirements.append(re.sub(r'\\s*-e\\s+.*#egg=(.*)$', r'\\1', line))\n elif re.match(r'\\s*-f\\s+', line):\n pass\n else:\n requirements.append(line)\n\n return requirements\n\n\nclass PyTest(TestCommand):\n\n def finalize_options(self):\n TestCommand.finalize_options(self)\n self.test_args = ['--tb=short', 'tests/']\n self.test_suite = True\n\n def run_tests(self):\n #import here, cause outside the eggs aren't loaded\n import pytest\n errno = pytest.main(self.test_args)\n sys.exit(errno)\n\n\nclass PootleBuildMo(DistutilsBuild):\n\n description = \"compile Gettext PO files into MO\"\n user_options = [\n ('all', None,\n \"compile all language (don't use LINGUAS file)\"),\n ('lang=', 'l',\n \"specify a language to compile\"),\n ]\n boolean_options = ['all']\n\n po_path_base = os.path.join('pootle', 'locale')\n _langs = []\n\n def initialize_options(self):\n self.all = False\n self.lang = None\n\n def finalize_options(self):\n if self.all and self.lang is not None:\n raise DistutilsOptionError(\n \"Can't use --all and --lang 
together\"\n )\n if self.lang is not None:\n self._langs = [self.lang]\n elif self.all:\n for lang in os.listdir(self.po_path_base):\n if (os.path.isdir(os.path.join(self.po_path_base, lang)) and\n lang != \"templates\"):\n self._langs.append(lang)\n else:\n for lang in open(os.path.join('pootle', 'locale', 'LINGUAS')):\n self._langs.append(lang.rstrip())\n\n def build_mo(self):\n \"\"\"Compile .mo files from available .po files\"\"\"\n import subprocess\n import gettext\n from translate.storage import factory\n\n for lang in self._langs:\n lang = lang.rstrip()\n\n po_path = os.path.join('pootle', 'locale', lang)\n mo_path = os.path.join('pootle', 'locale', lang, 'LC_MESSAGES')\n\n if not os.path.exists(mo_path):\n os.makedirs(mo_path)\n\n for po, mo in (('pootle.po', 'django.mo'),\n ('pootle_js.po', 'djangojs.mo')):\n po_filename = os.path.join(po_path, po)\n mo_filename = os.path.join(mo_path, mo)\n\n if not os.path.exists(po_filename):\n log.warn(\"%s: missing file %s\", lang, po_filename)\n continue\n\n if not os.path.exists(mo_path):\n os.makedirs(mo_path)\n\n log.info(\"compiling %s\", lang)\n try:\n subprocess.call([\n 'msgfmt', '--strict', '-o', mo_filename, po_filename],\n stderr=subprocess.STDOUT)\n except Exception as e:\n log.warn(\"%s: skipping, running msgfmt failed: %s\",\n lang, e)\n\n try:\n store = factory.getobject(po_filename)\n gettext.c2py(store.getheaderplural()[1])\n except Exception:\n log.warn(\"%s: invalid plural header in %s\",\n lang, po_filename)\n\n def run(self):\n self.build_mo()\n\n\nsetup(\n name=\"Pootle\",\n version=pootle_version,\n\n description=\"An online collaborative localization tool.\",\n long_description=open(\n os.path.join(os.path.dirname(__file__), 'README.rst')\n ).read(),\n\n author=\"Translate\",\n author_email=\"[email protected]\",\n license=\"GNU General Public License (GPL)\",\n url=\"http://pootle.translatehouse.org\",\n download_url=\"http://sourceforge.net/projects/translate/files/Pootle/\" + pootle_version,\n\n install_requires=parse_requirements('requirements/base.txt'),\n tests_require=parse_requirements('requirements/tests.txt'),\n\n platforms=[\"any\"],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Framework :: Django\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: End Users/Desktop\",\n \"Intended Audience :: Information Technology\",\n \"License :: OSI Approved :: GNU General Public License (GPL)\",\n \"Operating System :: OS Independent\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: Unix\",\n \"Programming Language :: JavaScript\",\n \"Programming Language :: Python\",\n \"Topic :: Software Development :: Localization\",\n \"Topic :: Text Processing :: Linguistic\"\n ],\n zip_safe=False,\n packages=find_packages(exclude=['deploy*']),\n include_package_data=True,\n\n entry_points={\n 'console_scripts': [\n 'pootle = pootle.runner:main',\n ],\n },\n cmdclass={\n 'build_mo': PootleBuildMo,\n 'test': PyTest,\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright 2008-2013 Zuza Software Foundation\n# Copyright 2014 Evernote Corporation\n#\n# This file is part of Pootle.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is 
distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\nimport glob\nimport os\nimport re\nimport sys\n\nfrom distutils import log\nfrom distutils.command.build import build as DistutilsBuild\nfrom distutils.errors import DistutilsOptionError\n\nfrom setuptools import find_packages, setup\nfrom setuptools.command.test import test as TestCommand\n\nfrom pootle.__version__ import sver as pootle_version\n\n\ndef parse_requirements(file_name):\n \"\"\"Parses a pip requirements file and returns a list of packages.\n\n Use the result of this function in the ``install_requires`` field.\n Copied from cburgmer/pdfserver.\n \"\"\"\n requirements = []\n for line in open(file_name, 'r').read().split('\\n'):\n # Ignore comments, blank lines and included requirements files\n if re.match(r'(\\s*#)|(\\s*$)|((-r|--allow-external|--allow-unverified) .*$)', line):\n continue\n\n if re.match(r'\\s*-e\\s+', line):\n requirements.append(re.sub(r'\\s*-e\\s+.*#egg=(.*)$', r'\\1', line))\n elif re.match(r'\\s*-f\\s+', line):\n pass\n else:\n requirements.append(line)\n\n return requirements\n\n\nclass PyTest(TestCommand):\n\n def finalize_options(self):\n TestCommand.finalize_options(self)\n self.test_args = ['--tb=short', 'tests/']\n self.test_suite = True\n\n def run_tests(self):\n #import here, cause outside the eggs aren't loaded\n import pytest\n errno = pytest.main(self.test_args)\n sys.exit(errno)\n\n\nclass PootleBuildMo(DistutilsBuild):\n\n description = \"compile Gettext PO files into MO\"\n user_options = [\n ('all', None,\n \"compile all language (don't use LINGUAS file)\"),\n ('lang=', 'l',\n \"specify a language to compile\"),\n ]\n boolean_options = ['all']\n\n po_path_base = os.path.join('pootle', 'locale')\n _langs = []\n\n def initialize_options(self):\n self.all = False\n self.lang = None\n\n def finalize_options(self):\n if self.all and self.lang is not None:\n raise DistutilsOptionError(\n \"Can't use --all and --lang together\"\n )\n if self.lang is not None:\n self._langs = [self.lang]\n elif self.all:\n for lang in os.listdir(self.po_path_base):\n if (os.path.isdir(os.path.join(self.po_path_base, lang)) and\n lang != \"templates\"):\n self._langs.append(lang)\n else:\n for lang in open(os.path.join('pootle', 'locale', 'LINGUAS')):\n self._langs.append(lang.rstrip())\n\n def build_mo(self):\n \"\"\"Compile .mo files from available .po files\"\"\"\n import subprocess\n import gettext\n from translate.storage import factory\n\n for lang in self._langs:\n lang = lang.rstrip()\n\n po_path = os.path.join('pootle', 'locale', lang)\n mo_path = os.path.join('pootle', 'locale', lang, 'LC_MESSAGES')\n\n if not os.path.exists(mo_path):\n os.makedirs(mo_path)\n\n for po, mo in (('pootle.po', 'django.mo'),\n ('pootle_js.po', 'djangojs.mo')):\n po_filename = os.path.join(po_path, po)\n mo_filename = os.path.join(mo_path, mo)\n\n if not os.path.exists(po_filename):\n log.warn(\"%s: missing file %s\", lang, po_filename)\n continue\n\n if not os.path.exists(mo_path):\n os.makedirs(mo_path)\n\n log.info(\"compiling %s\", lang)\n try:\n subprocess.call([\n 'msgfmt', '--strict', '-o', mo_filename, po_filename],\n stderr=subprocess.STDOUT)\n except Exception as e:\n log.warn(\"%s: skipping, running 
msgfmt failed: %s\",\n lang, e)\n\n try:\n store = factory.getobject(po_filename)\n gettext.c2py(store.getheaderplural()[1])\n except Exception:\n log.warn(\"%s: invalid plural header in %s\",\n lang, po_filename)\n\n def run(self):\n self.build_mo()\n\n\nsetup(\n name=\"Pootle\",\n version=pootle_version,\n\n description=\"An online collaborative localization tool.\",\n long_description=open(\n os.path.join(os.path.dirname(__file__), 'README.rst')\n ).read(),\n\n author=\"Translate\",\n author_email=\"[email protected]\",\n license=\"GNU General Public License (GPL)\",\n url=\"http://pootle.translatehouse.org\",\n download_url=\"http://sourceforge.net/projects/translate/files/Pootle/\" + pootle_version,\n\n install_requires=parse_requirements('requirements/base.txt'),\n tests_require=parse_requirements('requirements/tests.txt'),\n\n platforms=[\"any\"],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Framework :: Django\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: End Users/Desktop\",\n \"Intended Audience :: Information Technology\",\n \"License :: OSI Approved :: GNU General Public License (GPL)\",\n \"Operating System :: OS Independent\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: Unix\",\n \"Programming Language :: JavaScript\",\n \"Programming Language :: Python\",\n \"Topic :: Software Development :: Localization\",\n \"Topic :: Text Processing :: Linguistic\"\n ],\n zip_safe=False,\n packages=find_packages(exclude=['deploy*']),\n include_package_data=True,\n\n entry_points={\n 'console_scripts': [\n 'pootle = pootle.runner:main',\n ],\n },\n cmdclass={\n 'build_mo': PootleBuildMo,\n 'test': PyTest,\n },\n)\n", "path": "setup.py"}]} | 2,327 | 136 |
gh_patches_debug_5141 | rasdani/github-patches | git_diff | scrapy__scrapy-2503 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
scrapy view <url> raise exc in v1.3.0
````
(py35) wingyiu@mbp101:~$scrapy view http://www.scrapy.org
2017-01-19 22:13:54 [scrapy.utils.log] INFO: Scrapy 1.3.0 started (bot: scrapybot)
2017-01-19 22:13:54 [scrapy.utils.log] INFO: Overridden settings: {}
Traceback (most recent call last):
File "/Users/user/venv/py35/bin/scrapy", line 11, in <module>
sys.exit(execute())
File "/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/cmdline.py", line 142, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/cmdline.py", line 88, in _run_print_help
func(*a, **kw)
File "/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/cmdline.py", line 149, in _run_command
cmd.run(args, opts)
File "/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/commands/fetch.py", line 58, in run
if not opts.no_redirect:
AttributeError: 'Values' object has no attribute 'no_redirect'
````
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/commands/view.py`
Content:
```
1 from scrapy.commands import fetch, ScrapyCommand
2 from scrapy.utils.response import open_in_browser
3
4 class Command(fetch.Command):
5
6 def short_desc(self):
7 return "Open URL in browser, as seen by Scrapy"
8
9 def long_desc(self):
10 return "Fetch a URL using the Scrapy downloader and show its " \
11 "contents in a browser"
12
13 def add_options(self, parser):
14 ScrapyCommand.add_options(self, parser)
15 parser.add_option("--spider", dest="spider",
16 help="use this spider")
17
18 def _print_response(self, response, opts):
19 open_in_browser(response)
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/commands/view.py b/scrapy/commands/view.py
--- a/scrapy/commands/view.py
+++ b/scrapy/commands/view.py
@@ -11,9 +11,8 @@
"contents in a browser"
def add_options(self, parser):
- ScrapyCommand.add_options(self, parser)
- parser.add_option("--spider", dest="spider",
- help="use this spider")
+ super(Command, self).add_options(parser)
+ parser.remove_option("--headers")
def _print_response(self, response, opts):
open_in_browser(response)
| {"golden_diff": "diff --git a/scrapy/commands/view.py b/scrapy/commands/view.py\n--- a/scrapy/commands/view.py\n+++ b/scrapy/commands/view.py\n@@ -11,9 +11,8 @@\n \"contents in a browser\"\n \n def add_options(self, parser):\n- ScrapyCommand.add_options(self, parser)\n- parser.add_option(\"--spider\", dest=\"spider\",\n- help=\"use this spider\")\n+ super(Command, self).add_options(parser)\n+ parser.remove_option(\"--headers\")\n \n def _print_response(self, response, opts):\n open_in_browser(response)\n", "issue": "scrapy view <url> raise exc in v1.3.0\n````\r\n(py35) wingyiu@mbp101:~$scrapy view http://www.scrapy.org\r\n2017-01-19 22:13:54 [scrapy.utils.log] INFO: Scrapy 1.3.0 started (bot: scrapybot)\r\n2017-01-19 22:13:54 [scrapy.utils.log] INFO: Overridden settings: {}\r\nTraceback (most recent call last):\r\n File \"/Users/user/venv/py35/bin/scrapy\", line 11, in <module>\r\n sys.exit(execute())\r\n File \"/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/cmdline.py\", line 142, in execute\r\n _run_print_help(parser, _run_command, cmd, args, opts)\r\n File \"/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/cmdline.py\", line 88, in _run_print_help\r\n func(*a, **kw)\r\n File \"/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/cmdline.py\", line 149, in _run_command\r\n cmd.run(args, opts)\r\n File \"/Users/user/venv/py35/lib/python3.5/site-packages/scrapy/commands/fetch.py\", line 58, in run\r\n if not opts.no_redirect:\r\nAttributeError: 'Values' object has no attribute 'no_redirect'\r\n````\r\n\n", "before_files": [{"content": "from scrapy.commands import fetch, ScrapyCommand\nfrom scrapy.utils.response import open_in_browser\n\nclass Command(fetch.Command):\n\n def short_desc(self):\n return \"Open URL in browser, as seen by Scrapy\"\n\n def long_desc(self):\n return \"Fetch a URL using the Scrapy downloader and show its \" \\\n \"contents in a browser\"\n\n def add_options(self, parser):\n ScrapyCommand.add_options(self, parser)\n parser.add_option(\"--spider\", dest=\"spider\",\n help=\"use this spider\")\n\n def _print_response(self, response, opts):\n open_in_browser(response)\n", "path": "scrapy/commands/view.py"}], "after_files": [{"content": "from scrapy.commands import fetch, ScrapyCommand\nfrom scrapy.utils.response import open_in_browser\n\nclass Command(fetch.Command):\n\n def short_desc(self):\n return \"Open URL in browser, as seen by Scrapy\"\n\n def long_desc(self):\n return \"Fetch a URL using the Scrapy downloader and show its \" \\\n \"contents in a browser\"\n\n def add_options(self, parser):\n super(Command, self).add_options(parser)\n parser.remove_option(\"--headers\")\n\n def _print_response(self, response, opts):\n open_in_browser(response)\n", "path": "scrapy/commands/view.py"}]} | 775 | 134 |
gh_patches_debug_15924 | rasdani/github-patches | git_diff | Kinto__kinto-119 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Using the _since querystring filter has no effect
I've tried using the `_since` querystring filter as explained in the tutorial, but it seems to have no effect.
`GET`ing any of those urls returns the exact same list (the full list of records)
```
http GET http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=1436094288171 -v --auth 'user:password'
http GET http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=foobar -v --auth 'user:password'
http GET http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=`date +%s` -v --auth 'user:password'
```
The last one uses the current timestamp as the value, which means it should return an empty list.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/views/buckets.py`
Content:
```
1 from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed
2 from pyramid.security import NO_PERMISSION_REQUIRED
3 from pyramid.view import view_config
4
5 from cliquet import resource
6 from cliquet.utils import hmac_digest, build_request
7
8 from kinto.views import NameGenerator
9
10
11 def create_bucket(request, bucket_id):
12 """Create a bucket if it doesn't exists."""
13 bucket_put = (request.method.lower() == 'put' and
14 request.path.endswith('buckets/default'))
15
16 if not bucket_put:
17 subrequest = build_request(request, {
18 'method': 'PUT',
19 'path': '/buckets/%s' % bucket_id,
20 'body': {"data": {}},
21 'headers': {'If-None-Match': '*'.encode('utf-8')}
22 })
23
24 try:
25 request.invoke_subrequest(subrequest)
26 except HTTPPreconditionFailed:
27 # The bucket already exists
28 pass
29
30
31 def create_collection(request, bucket_id):
32 subpath = request.matchdict['subpath']
33 if subpath.startswith('/collections/'):
34 collection_id = subpath.split('/')[2]
35 collection_put = (request.method.lower() == 'put' and
36 request.path.endswith(collection_id))
37 if not collection_put:
38 subrequest = build_request(request, {
39 'method': 'PUT',
40 'path': '/buckets/%s/collections/%s' % (
41 bucket_id, collection_id),
42 'body': {"data": {}},
43 'headers': {'If-None-Match': '*'.encode('utf-8')}
44 })
45 try:
46 request.invoke_subrequest(subrequest)
47 except HTTPPreconditionFailed:
48 # The collection already exists
49 pass
50
51
52 @view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)
53 def default_bucket(request):
54 if getattr(request, 'prefixed_userid', None) is None:
55 raise HTTPForbidden # Pass through the forbidden_view_config
56
57 settings = request.registry.settings
58 hmac_secret = settings['cliquet.userid_hmac_secret']
59 # Build the user unguessable bucket_id UUID from its user_id
60 bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]
61 path = request.path.replace('default', bucket_id)
62
63 # Make sure bucket exists
64 create_bucket(request, bucket_id)
65
66 # Make sure the collection exists
67 create_collection(request, bucket_id)
68
69 subrequest = build_request(request, {
70 'method': request.method,
71 'path': path,
72 'body': request.body
73 })
74
75 return request.invoke_subrequest(subrequest)
76
77
78 @resource.register(name='bucket',
79 collection_methods=('GET',),
80 collection_path='/buckets',
81 record_path='/buckets/{{id}}')
82 class Bucket(resource.ProtectedResource):
83 permissions = ('read', 'write', 'collection:create', 'group:create')
84
85 def __init__(self, *args, **kwargs):
86 super(Bucket, self).__init__(*args, **kwargs)
87 self.collection.id_generator = NameGenerator()
88
89 def get_parent_id(self, request):
90 # Buckets are not isolated by user, unlike Cliquet resources.
91 return ''
92
93 def delete(self):
94 result = super(Bucket, self).delete()
95
96 # Delete groups.
97 storage = self.collection.storage
98 parent_id = '/buckets/%s' % self.record_id
99 storage.delete_all(collection_id='group', parent_id=parent_id)
100
101 # Delete collections.
102 deleted = storage.delete_all(collection_id='collection',
103 parent_id=parent_id)
104
105 # Delete records.
106 id_field = self.collection.id_field
107 for collection in deleted:
108 parent_id = '/buckets/%s/collections/%s' % (self.record_id,
109 collection[id_field])
110 storage.delete_all(collection_id='record', parent_id=parent_id)
111
112 return result
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py
--- a/kinto/views/buckets.py
+++ b/kinto/views/buckets.py
@@ -59,6 +59,8 @@
# Build the user unguessable bucket_id UUID from its user_id
bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]
path = request.path.replace('default', bucket_id)
+ querystring = request.url[(request.url.index(request.path) +
+ len(request.path)):]
# Make sure bucket exists
create_bucket(request, bucket_id)
@@ -68,7 +70,7 @@
subrequest = build_request(request, {
'method': request.method,
- 'path': path,
+ 'path': path + querystring,
'body': request.body
})
| {"golden_diff": "diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py\n--- a/kinto/views/buckets.py\n+++ b/kinto/views/buckets.py\n@@ -59,6 +59,8 @@\n # Build the user unguessable bucket_id UUID from its user_id\n bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]\n path = request.path.replace('default', bucket_id)\n+ querystring = request.url[(request.url.index(request.path) +\n+ len(request.path)):]\n \n # Make sure bucket exists\n create_bucket(request, bucket_id)\n@@ -68,7 +70,7 @@\n \n subrequest = build_request(request, {\n 'method': request.method,\n- 'path': path,\n+ 'path': path + querystring,\n 'body': request.body\n })\n", "issue": "Using the _since querystring filter has no effect\nI've tried using the `_since` querystring filter as explained in the tutorial, but it seems to have no effect.\n\n`GET`ing any of those urls returns the exact same list (the full list of records)\n\n```\nhttp GET http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=1436094288171 -v --auth 'user:password'\nhttp GET http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=foobar -v --auth 'user:password'\nhttp GET http://0.0.0.0:8888/v1/buckets/default/collections/tasks/records?_since=`date +%s` -v --auth 'user:password'\n```\n\nThe last one uses the current timestamp as the value, which means it should return an empty list.\n\n", "before_files": [{"content": "from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom pyramid.view import view_config\n\nfrom cliquet import resource\nfrom cliquet.utils import hmac_digest, build_request\n\nfrom kinto.views import NameGenerator\n\n\ndef create_bucket(request, bucket_id):\n \"\"\"Create a bucket if it doesn't exists.\"\"\"\n bucket_put = (request.method.lower() == 'put' and\n request.path.endswith('buckets/default'))\n\n if not bucket_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s' % bucket_id,\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The bucket already exists\n pass\n\n\ndef create_collection(request, bucket_id):\n subpath = request.matchdict['subpath']\n if subpath.startswith('/collections/'):\n collection_id = subpath.split('/')[2]\n collection_put = (request.method.lower() == 'put' and\n request.path.endswith(collection_id))\n if not collection_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s/collections/%s' % (\n bucket_id, collection_id),\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The collection already exists\n pass\n\n\n@view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)\ndef default_bucket(request):\n if getattr(request, 'prefixed_userid', None) is None:\n raise HTTPForbidden # Pass through the forbidden_view_config\n\n settings = request.registry.settings\n hmac_secret = settings['cliquet.userid_hmac_secret']\n # Build the user unguessable bucket_id UUID from its user_id\n bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]\n path = request.path.replace('default', bucket_id)\n\n # Make sure bucket exists\n create_bucket(request, bucket_id)\n\n # Make sure the collection exists\n create_collection(request, bucket_id)\n\n subrequest = 
build_request(request, {\n 'method': request.method,\n 'path': path,\n 'body': request.body\n })\n\n return request.invoke_subrequest(subrequest)\n\n\[email protected](name='bucket',\n collection_methods=('GET',),\n collection_path='/buckets',\n record_path='/buckets/{{id}}')\nclass Bucket(resource.ProtectedResource):\n permissions = ('read', 'write', 'collection:create', 'group:create')\n\n def __init__(self, *args, **kwargs):\n super(Bucket, self).__init__(*args, **kwargs)\n self.collection.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n # Buckets are not isolated by user, unlike Cliquet resources.\n return ''\n\n def delete(self):\n result = super(Bucket, self).delete()\n\n # Delete groups.\n storage = self.collection.storage\n parent_id = '/buckets/%s' % self.record_id\n storage.delete_all(collection_id='group', parent_id=parent_id)\n\n # Delete collections.\n deleted = storage.delete_all(collection_id='collection',\n parent_id=parent_id)\n\n # Delete records.\n id_field = self.collection.id_field\n for collection in deleted:\n parent_id = '/buckets/%s/collections/%s' % (self.record_id,\n collection[id_field])\n storage.delete_all(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/buckets.py"}], "after_files": [{"content": "from pyramid.httpexceptions import HTTPForbidden, HTTPPreconditionFailed\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom pyramid.view import view_config\n\nfrom cliquet import resource\nfrom cliquet.utils import hmac_digest, build_request\n\nfrom kinto.views import NameGenerator\n\n\ndef create_bucket(request, bucket_id):\n \"\"\"Create a bucket if it doesn't exists.\"\"\"\n bucket_put = (request.method.lower() == 'put' and\n request.path.endswith('buckets/default'))\n\n if not bucket_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s' % bucket_id,\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The bucket already exists\n pass\n\n\ndef create_collection(request, bucket_id):\n subpath = request.matchdict['subpath']\n if subpath.startswith('/collections/'):\n collection_id = subpath.split('/')[2]\n collection_put = (request.method.lower() == 'put' and\n request.path.endswith(collection_id))\n if not collection_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s/collections/%s' % (\n bucket_id, collection_id),\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The collection already exists\n pass\n\n\n@view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)\ndef default_bucket(request):\n if getattr(request, 'prefixed_userid', None) is None:\n raise HTTPForbidden # Pass through the forbidden_view_config\n\n settings = request.registry.settings\n hmac_secret = settings['cliquet.userid_hmac_secret']\n # Build the user unguessable bucket_id UUID from its user_id\n bucket_id = hmac_digest(hmac_secret, request.prefixed_userid)[:32]\n path = request.path.replace('default', bucket_id)\n querystring = request.url[(request.url.index(request.path) +\n len(request.path)):]\n\n # Make sure bucket exists\n create_bucket(request, bucket_id)\n\n # Make sure the collection exists\n create_collection(request, bucket_id)\n\n subrequest = build_request(request, {\n 'method': request.method,\n 
'path': path + querystring,\n 'body': request.body\n })\n\n return request.invoke_subrequest(subrequest)\n\n\[email protected](name='bucket',\n collection_methods=('GET',),\n collection_path='/buckets',\n record_path='/buckets/{{id}}')\nclass Bucket(resource.ProtectedResource):\n permissions = ('read', 'write', 'collection:create', 'group:create')\n\n def __init__(self, *args, **kwargs):\n super(Bucket, self).__init__(*args, **kwargs)\n self.collection.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n # Buckets are not isolated by user, unlike Cliquet resources.\n return ''\n\n def delete(self):\n result = super(Bucket, self).delete()\n\n # Delete groups.\n storage = self.collection.storage\n parent_id = '/buckets/%s' % self.record_id\n storage.delete_all(collection_id='group', parent_id=parent_id)\n\n # Delete collections.\n deleted = storage.delete_all(collection_id='collection',\n parent_id=parent_id)\n\n # Delete records.\n id_field = self.collection.id_field\n for collection in deleted:\n parent_id = '/buckets/%s/collections/%s' % (self.record_id,\n collection[id_field])\n storage.delete_all(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/buckets.py"}]} | 1,528 | 187 |
gh_patches_debug_33036 | rasdani/github-patches | git_diff | jazzband__pip-tools-493 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pip-sync appears to get confused about editable packages
When a requirements.txt file includes editable local packages, pip-sync seems to want to uninstall them and reinstall them.
Also, the pip install command that pip-sync attempts to perform is missing the `-e` editable flags, resulting in an error.
##### Steps to replicate
- Create a new virtualenv for testing
``` bash
virtualenv --no-site-packages testvenv
source ./testvenv/bin/activate
pip list
```
returns:
```
pip (8.1.2)
setuptools (28.3.0)
wheel (0.30.0a0)
```
- Generate a requirements.txt file (obfuscated from our actual names):
```
-e file:///vagrant/projects/someproject
```
- Install the editable project into the virtualenv
``` bash
pip install -r requirements.txt --no-deps
```
- Install pip-tools into the venv (because the globally installed pip-tools doesn't seem to deal with virtualenvironents well (another bug?) )
``` bash
pip install pip-tools
deactivate
source ./testvenv/bin/activate
```
- See what pip-sync thinks
``` bash
pip-sync -n requirements.txt
```
returns _unexpected results_:
```
Would uninstall:
our.package.name
Would install:
file:///vagrant/projects/packagename
```
- Try to do the sync
```
pip-sync requirements.txt
```
returns
```
Uninstalling our.package.name-1.2.3+dirty:
Successfully uninstalled our.package.name-1.2.3+dirty
Processing /vagrant/projects/packagename
Complete output from command python setup.py egg_info:
*** Bunch of errors we generate related to our package's assumptions about being a git repo ***
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-ls2yuR-build/
Traceback (most recent call last):
File "/home/vagrant/testvenv/bin/pip-sync", line 11, in <module>
sys.exit(cli())
File "/home/vagrant/testvenv/local/lib/python2.7/site-packages/click/core.py", line 716, in __call__
return self.main(*args, **kwargs)
File "/home/vagrant/testvenv/local/lib/python2.7/site-packages/click/core.py", line 696, in main
rv = self.invoke(ctx)
File "/home/vagrant/testvenv/local/lib/python2.7/site-packages/click/core.py", line 889, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/vagrant/testvenv/local/lib/python2.7/site-packages/click/core.py", line 534, in invoke
return callback(*args, **kwargs)
File "/home/vagrant/testvenv/local/lib/python2.7/site-packages/piptools/scripts/sync.py", line 72, in cli
install_flags=install_flags))
File "/home/vagrant/testvenv/local/lib/python2.7/site-packages/piptools/sync.py", line 157, in sync
check_call([pip, 'install'] + pip_flags + install_flags + sorted(to_install))
File "/usr/lib/python2.7/subprocess.py", line 511, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['pip', 'install', 'file:///vagrant/projects/packagename']' returned non-zero exit status 1
```
##### Expected result
A) The tool shouldn't be attempting to uninstall the editable package and then reinstall it
B) Installing editable packages should have a `-e` option
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `piptools/sync.py`
Content:
```
1 import collections
2 import os
3 import sys
4 from subprocess import check_call
5
6 from . import click
7 from .exceptions import IncompatibleRequirements, UnsupportedConstraint
8 from .utils import flat_map, key_from_req
9
10 PACKAGES_TO_IGNORE = [
11 'pip',
12 'pip-tools',
13 'pip-review',
14 'setuptools',
15 'wheel',
16 ]
17
18
19 def dependency_tree(installed_keys, root_key):
20 """
21 Calculate the dependency tree for the package `root_key` and return
22 a collection of all its dependencies. Uses a DFS traversal algorithm.
23
24 `installed_keys` should be a {key: requirement} mapping, e.g.
25 {'django': from_line('django==1.8')}
26 `root_key` should be the key to return the dependency tree for.
27 """
28 dependencies = set()
29 queue = collections.deque()
30
31 if root_key in installed_keys:
32 dep = installed_keys[root_key]
33 queue.append(dep)
34
35 while queue:
36 v = queue.popleft()
37 key = key_from_req(v)
38 if key in dependencies:
39 continue
40
41 dependencies.add(key)
42
43 for dep_specifier in v.requires():
44 dep_name = key_from_req(dep_specifier)
45 if dep_name in installed_keys:
46 dep = installed_keys[dep_name]
47
48 if dep_specifier.specifier.contains(dep.version):
49 queue.append(dep)
50
51 return dependencies
52
53
54 def get_dists_to_ignore(installed):
55 """
56 Returns a collection of package names to ignore when performing pip-sync,
57 based on the currently installed environment. For example, when pip-tools
58 is installed in the local environment, it should be ignored, including all
59 of its dependencies (e.g. click). When pip-tools is not installed
60 locally, click should also be installed/uninstalled depending on the given
61 requirements.
62 """
63 installed_keys = {key_from_req(r): r for r in installed}
64 return list(flat_map(lambda req: dependency_tree(installed_keys, req), PACKAGES_TO_IGNORE))
65
66
67 def merge(requirements, ignore_conflicts):
68 by_key = {}
69
70 for ireq in requirements:
71 if ireq.link is not None and not ireq.editable:
72 msg = ('pip-compile does not support URLs as packages, unless they are editable. '
73 'Perhaps add -e option?')
74 raise UnsupportedConstraint(msg, ireq)
75
76 key = ireq.link or key_from_req(ireq.req)
77
78 if not ignore_conflicts:
79 existing_ireq = by_key.get(key)
80 if existing_ireq:
81 # NOTE: We check equality here since we can assume that the
82 # requirements are all pinned
83 if ireq.specifier != existing_ireq.specifier:
84 raise IncompatibleRequirements(ireq, existing_ireq)
85
86 # TODO: Always pick the largest specifier in case of a conflict
87 by_key[key] = ireq
88
89 return by_key.values()
90
91
92 def diff(compiled_requirements, installed_dists):
93 """
94 Calculate which packages should be installed or uninstalled, given a set
95 of compiled requirements and a list of currently installed modules.
96 """
97 requirements_lut = {r.link or key_from_req(r.req): r for r in compiled_requirements}
98
99 satisfied = set() # holds keys
100 to_install = set() # holds keys-and-versions
101 to_uninstall = set() # holds keys
102
103 pkgs_to_ignore = get_dists_to_ignore(installed_dists)
104 for dist in installed_dists:
105 key = key_from_req(dist)
106 if key not in requirements_lut:
107 to_uninstall.add(key)
108 elif requirements_lut[key].specifier.contains(dist.version):
109 satisfied.add(key)
110
111 for key, requirement in requirements_lut.items():
112 if key not in satisfied:
113 to_install.add(str(requirement.link or requirement.req))
114
115 # Make sure to not uninstall any packages that should be ignored
116 to_uninstall -= set(pkgs_to_ignore)
117
118 return (to_install, to_uninstall)
119
120
121 def sync(to_install, to_uninstall, verbose=False, dry_run=False, pip_flags=None, install_flags=None):
122 """
123 Install and uninstalls the given sets of modules.
124 """
125 if not to_uninstall and not to_install:
126 click.echo("Everything up-to-date")
127
128 if pip_flags is None:
129 pip_flags = []
130
131 if not verbose:
132 pip_flags += ['-q']
133
134 if os.environ.get('VIRTUAL_ENV'):
135 # find pip via PATH
136 pip = 'pip'
137 else:
138 # find pip in same directory as pip-sync entry-point script
139 pip = os.path.join(os.path.dirname(os.path.abspath(sys.argv[0])), 'pip')
140
141 if to_uninstall:
142 if dry_run:
143 click.echo("Would uninstall:")
144 for pkg in to_uninstall:
145 click.echo(" {}".format(pkg))
146 else:
147 check_call([pip, 'uninstall', '-y'] + pip_flags + sorted(to_uninstall))
148
149 if to_install:
150 if install_flags is None:
151 install_flags = []
152 if dry_run:
153 click.echo("Would install:")
154 for pkg in to_install:
155 click.echo(" {}".format(pkg))
156 else:
157 check_call([pip, 'install'] + pip_flags + install_flags + sorted(to_install))
158 return 0
159
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/piptools/sync.py b/piptools/sync.py
--- a/piptools/sync.py
+++ b/piptools/sync.py
@@ -5,7 +5,7 @@
from . import click
from .exceptions import IncompatibleRequirements, UnsupportedConstraint
-from .utils import flat_map, key_from_req
+from .utils import flat_map, format_requirement, key_from_req
PACKAGES_TO_IGNORE = [
'pip',
@@ -97,7 +97,7 @@
requirements_lut = {r.link or key_from_req(r.req): r for r in compiled_requirements}
satisfied = set() # holds keys
- to_install = set() # holds keys-and-versions
+ to_install = set() # holds InstallRequirement objects
to_uninstall = set() # holds keys
pkgs_to_ignore = get_dists_to_ignore(installed_dists)
@@ -110,7 +110,7 @@
for key, requirement in requirements_lut.items():
if key not in satisfied:
- to_install.add(str(requirement.link or requirement.req))
+ to_install.add(requirement)
# Make sure to not uninstall any packages that should be ignored
to_uninstall -= set(pkgs_to_ignore)
@@ -151,8 +151,14 @@
install_flags = []
if dry_run:
click.echo("Would install:")
- for pkg in to_install:
- click.echo(" {}".format(pkg))
+ for ireq in to_install:
+ click.echo(" {}".format(format_requirement(ireq)))
else:
- check_call([pip, 'install'] + pip_flags + install_flags + sorted(to_install))
+ package_args = []
+ for ireq in sorted(to_install):
+ if ireq.editable:
+ package_args.extend(['-e', str(ireq.link or ireq.req)])
+ else:
+ package_args.append(str(ireq.req))
+ check_call([pip, 'install'] + pip_flags + install_flags + package_args)
return 0
| {"golden_diff": "diff --git a/piptools/sync.py b/piptools/sync.py\n--- a/piptools/sync.py\n+++ b/piptools/sync.py\n@@ -5,7 +5,7 @@\n \n from . import click\n from .exceptions import IncompatibleRequirements, UnsupportedConstraint\n-from .utils import flat_map, key_from_req\n+from .utils import flat_map, format_requirement, key_from_req\n \n PACKAGES_TO_IGNORE = [\n 'pip',\n@@ -97,7 +97,7 @@\n requirements_lut = {r.link or key_from_req(r.req): r for r in compiled_requirements}\n \n satisfied = set() # holds keys\n- to_install = set() # holds keys-and-versions\n+ to_install = set() # holds InstallRequirement objects\n to_uninstall = set() # holds keys\n \n pkgs_to_ignore = get_dists_to_ignore(installed_dists)\n@@ -110,7 +110,7 @@\n \n for key, requirement in requirements_lut.items():\n if key not in satisfied:\n- to_install.add(str(requirement.link or requirement.req))\n+ to_install.add(requirement)\n \n # Make sure to not uninstall any packages that should be ignored\n to_uninstall -= set(pkgs_to_ignore)\n@@ -151,8 +151,14 @@\n install_flags = []\n if dry_run:\n click.echo(\"Would install:\")\n- for pkg in to_install:\n- click.echo(\" {}\".format(pkg))\n+ for ireq in to_install:\n+ click.echo(\" {}\".format(format_requirement(ireq)))\n else:\n- check_call([pip, 'install'] + pip_flags + install_flags + sorted(to_install))\n+ package_args = []\n+ for ireq in sorted(to_install):\n+ if ireq.editable:\n+ package_args.extend(['-e', str(ireq.link or ireq.req)])\n+ else:\n+ package_args.append(str(ireq.req))\n+ check_call([pip, 'install'] + pip_flags + install_flags + package_args)\n return 0\n", "issue": "pip-sync appears to get confused about editable packages\nWhen a requirements.txt file includes editable local packages, pip-sync seems to want to uninstall them and reinstall them.\n\nAlso, the pip install command that pip-sync attempts to perform is missing the `-e` editable flags, resulting in an error.\n##### Steps to replicate\n- Create a new virtualenv for testing\n\n``` bash\nvirtualenv --no-site-packages testvenv\nsource ./testvenv/bin/activate\npip list\n```\n\nreturns:\n\n```\npip (8.1.2)\nsetuptools (28.3.0)\nwheel (0.30.0a0)\n```\n- Generate a requirements.txt file (obfuscated from our actual names):\n\n```\n-e file:///vagrant/projects/someproject\n```\n- Install the editable project into the virtualenv\n\n``` bash\npip install -r requirements.txt --no-deps\n```\n- Install pip-tools into the venv (because the globally installed pip-tools doesn't seem to deal with virtualenvironents well (another bug?) 
)\n\n``` bash\npip install pip-tools\ndeactivate\nsource ./testvenv/bin/activate\n```\n- See what pip-sync thinks\n\n``` bash\npip-sync -n requirements.txt\n```\n\nreturns _unexpected results_:\n\n```\nWould uninstall:\n our.package.name\nWould install:\n file:///vagrant/projects/packagename\n```\n- Try to do the sync\n\n```\npip-sync requirements.txt\n```\n\nreturns\n\n```\nUninstalling our.package.name-1.2.3+dirty:\n Successfully uninstalled our.package.name-1.2.3+dirty\nProcessing /vagrant/projects/packagename\n Complete output from command python setup.py egg_info:\n\n*** Bunch of errors we generate related to our package's assumptions about being a git repo ***\n\n ----------------------------------------\nCommand \"python setup.py egg_info\" failed with error code 1 in /tmp/pip-ls2yuR-build/\nTraceback (most recent call last):\n File \"/home/vagrant/testvenv/bin/pip-sync\", line 11, in <module>\n sys.exit(cli())\n File \"/home/vagrant/testvenv/local/lib/python2.7/site-packages/click/core.py\", line 716, in __call__\n return self.main(*args, **kwargs)\n File \"/home/vagrant/testvenv/local/lib/python2.7/site-packages/click/core.py\", line 696, in main\n rv = self.invoke(ctx)\n File \"/home/vagrant/testvenv/local/lib/python2.7/site-packages/click/core.py\", line 889, in invoke\n return ctx.invoke(self.callback, **ctx.params)\n File \"/home/vagrant/testvenv/local/lib/python2.7/site-packages/click/core.py\", line 534, in invoke\n return callback(*args, **kwargs)\n File \"/home/vagrant/testvenv/local/lib/python2.7/site-packages/piptools/scripts/sync.py\", line 72, in cli\n install_flags=install_flags))\n File \"/home/vagrant/testvenv/local/lib/python2.7/site-packages/piptools/sync.py\", line 157, in sync\n check_call([pip, 'install'] + pip_flags + install_flags + sorted(to_install))\n File \"/usr/lib/python2.7/subprocess.py\", line 511, in check_call\n raise CalledProcessError(retcode, cmd)\nsubprocess.CalledProcessError: Command '['pip', 'install', 'file:///vagrant/projects/packagename']' returned non-zero exit status 1\n```\n##### Expected result\n\nA) The tool shouldn't be attempting to uninstall the editable package and then reinstall it\nB) Installing editable packages should have a `-e` option\n\n", "before_files": [{"content": "import collections\nimport os\nimport sys\nfrom subprocess import check_call\n\nfrom . import click\nfrom .exceptions import IncompatibleRequirements, UnsupportedConstraint\nfrom .utils import flat_map, key_from_req\n\nPACKAGES_TO_IGNORE = [\n 'pip',\n 'pip-tools',\n 'pip-review',\n 'setuptools',\n 'wheel',\n]\n\n\ndef dependency_tree(installed_keys, root_key):\n \"\"\"\n Calculate the dependency tree for the package `root_key` and return\n a collection of all its dependencies. 
Uses a DFS traversal algorithm.\n\n `installed_keys` should be a {key: requirement} mapping, e.g.\n {'django': from_line('django==1.8')}\n `root_key` should be the key to return the dependency tree for.\n \"\"\"\n dependencies = set()\n queue = collections.deque()\n\n if root_key in installed_keys:\n dep = installed_keys[root_key]\n queue.append(dep)\n\n while queue:\n v = queue.popleft()\n key = key_from_req(v)\n if key in dependencies:\n continue\n\n dependencies.add(key)\n\n for dep_specifier in v.requires():\n dep_name = key_from_req(dep_specifier)\n if dep_name in installed_keys:\n dep = installed_keys[dep_name]\n\n if dep_specifier.specifier.contains(dep.version):\n queue.append(dep)\n\n return dependencies\n\n\ndef get_dists_to_ignore(installed):\n \"\"\"\n Returns a collection of package names to ignore when performing pip-sync,\n based on the currently installed environment. For example, when pip-tools\n is installed in the local environment, it should be ignored, including all\n of its dependencies (e.g. click). When pip-tools is not installed\n locally, click should also be installed/uninstalled depending on the given\n requirements.\n \"\"\"\n installed_keys = {key_from_req(r): r for r in installed}\n return list(flat_map(lambda req: dependency_tree(installed_keys, req), PACKAGES_TO_IGNORE))\n\n\ndef merge(requirements, ignore_conflicts):\n by_key = {}\n\n for ireq in requirements:\n if ireq.link is not None and not ireq.editable:\n msg = ('pip-compile does not support URLs as packages, unless they are editable. '\n 'Perhaps add -e option?')\n raise UnsupportedConstraint(msg, ireq)\n\n key = ireq.link or key_from_req(ireq.req)\n\n if not ignore_conflicts:\n existing_ireq = by_key.get(key)\n if existing_ireq:\n # NOTE: We check equality here since we can assume that the\n # requirements are all pinned\n if ireq.specifier != existing_ireq.specifier:\n raise IncompatibleRequirements(ireq, existing_ireq)\n\n # TODO: Always pick the largest specifier in case of a conflict\n by_key[key] = ireq\n\n return by_key.values()\n\n\ndef diff(compiled_requirements, installed_dists):\n \"\"\"\n Calculate which packages should be installed or uninstalled, given a set\n of compiled requirements and a list of currently installed modules.\n \"\"\"\n requirements_lut = {r.link or key_from_req(r.req): r for r in compiled_requirements}\n\n satisfied = set() # holds keys\n to_install = set() # holds keys-and-versions\n to_uninstall = set() # holds keys\n\n pkgs_to_ignore = get_dists_to_ignore(installed_dists)\n for dist in installed_dists:\n key = key_from_req(dist)\n if key not in requirements_lut:\n to_uninstall.add(key)\n elif requirements_lut[key].specifier.contains(dist.version):\n satisfied.add(key)\n\n for key, requirement in requirements_lut.items():\n if key not in satisfied:\n to_install.add(str(requirement.link or requirement.req))\n\n # Make sure to not uninstall any packages that should be ignored\n to_uninstall -= set(pkgs_to_ignore)\n\n return (to_install, to_uninstall)\n\n\ndef sync(to_install, to_uninstall, verbose=False, dry_run=False, pip_flags=None, install_flags=None):\n \"\"\"\n Install and uninstalls the given sets of modules.\n \"\"\"\n if not to_uninstall and not to_install:\n click.echo(\"Everything up-to-date\")\n\n if pip_flags is None:\n pip_flags = []\n\n if not verbose:\n pip_flags += ['-q']\n\n if os.environ.get('VIRTUAL_ENV'):\n # find pip via PATH\n pip = 'pip'\n else:\n # find pip in same directory as pip-sync entry-point script\n pip = 
os.path.join(os.path.dirname(os.path.abspath(sys.argv[0])), 'pip')\n\n if to_uninstall:\n if dry_run:\n click.echo(\"Would uninstall:\")\n for pkg in to_uninstall:\n click.echo(\" {}\".format(pkg))\n else:\n check_call([pip, 'uninstall', '-y'] + pip_flags + sorted(to_uninstall))\n\n if to_install:\n if install_flags is None:\n install_flags = []\n if dry_run:\n click.echo(\"Would install:\")\n for pkg in to_install:\n click.echo(\" {}\".format(pkg))\n else:\n check_call([pip, 'install'] + pip_flags + install_flags + sorted(to_install))\n return 0\n", "path": "piptools/sync.py"}], "after_files": [{"content": "import collections\nimport os\nimport sys\nfrom subprocess import check_call\n\nfrom . import click\nfrom .exceptions import IncompatibleRequirements, UnsupportedConstraint\nfrom .utils import flat_map, format_requirement, key_from_req\n\nPACKAGES_TO_IGNORE = [\n 'pip',\n 'pip-tools',\n 'pip-review',\n 'setuptools',\n 'wheel',\n]\n\n\ndef dependency_tree(installed_keys, root_key):\n \"\"\"\n Calculate the dependency tree for the package `root_key` and return\n a collection of all its dependencies. Uses a DFS traversal algorithm.\n\n `installed_keys` should be a {key: requirement} mapping, e.g.\n {'django': from_line('django==1.8')}\n `root_key` should be the key to return the dependency tree for.\n \"\"\"\n dependencies = set()\n queue = collections.deque()\n\n if root_key in installed_keys:\n dep = installed_keys[root_key]\n queue.append(dep)\n\n while queue:\n v = queue.popleft()\n key = key_from_req(v)\n if key in dependencies:\n continue\n\n dependencies.add(key)\n\n for dep_specifier in v.requires():\n dep_name = key_from_req(dep_specifier)\n if dep_name in installed_keys:\n dep = installed_keys[dep_name]\n\n if dep_specifier.specifier.contains(dep.version):\n queue.append(dep)\n\n return dependencies\n\n\ndef get_dists_to_ignore(installed):\n \"\"\"\n Returns a collection of package names to ignore when performing pip-sync,\n based on the currently installed environment. For example, when pip-tools\n is installed in the local environment, it should be ignored, including all\n of its dependencies (e.g. click). When pip-tools is not installed\n locally, click should also be installed/uninstalled depending on the given\n requirements.\n \"\"\"\n installed_keys = {key_from_req(r): r for r in installed}\n return list(flat_map(lambda req: dependency_tree(installed_keys, req), PACKAGES_TO_IGNORE))\n\n\ndef merge(requirements, ignore_conflicts):\n by_key = {}\n\n for ireq in requirements:\n if ireq.link is not None and not ireq.editable:\n msg = ('pip-compile does not support URLs as packages, unless they are editable. 
'\n 'Perhaps add -e option?')\n raise UnsupportedConstraint(msg, ireq)\n\n key = ireq.link or key_from_req(ireq.req)\n\n if not ignore_conflicts:\n existing_ireq = by_key.get(key)\n if existing_ireq:\n # NOTE: We check equality here since we can assume that the\n # requirements are all pinned\n if ireq.specifier != existing_ireq.specifier:\n raise IncompatibleRequirements(ireq, existing_ireq)\n\n # TODO: Always pick the largest specifier in case of a conflict\n by_key[key] = ireq\n\n return by_key.values()\n\n\ndef diff(compiled_requirements, installed_dists):\n \"\"\"\n Calculate which packages should be installed or uninstalled, given a set\n of compiled requirements and a list of currently installed modules.\n \"\"\"\n requirements_lut = {r.link or key_from_req(r.req): r for r in compiled_requirements}\n\n satisfied = set() # holds keys\n to_install = set() # holds InstallRequirement objects\n to_uninstall = set() # holds keys\n\n pkgs_to_ignore = get_dists_to_ignore(installed_dists)\n for dist in installed_dists:\n key = key_from_req(dist)\n if key not in requirements_lut:\n to_uninstall.add(key)\n elif requirements_lut[key].specifier.contains(dist.version):\n satisfied.add(key)\n\n for key, requirement in requirements_lut.items():\n if key not in satisfied:\n to_install.add(requirement)\n\n # Make sure to not uninstall any packages that should be ignored\n to_uninstall -= set(pkgs_to_ignore)\n\n return (to_install, to_uninstall)\n\n\ndef sync(to_install, to_uninstall, verbose=False, dry_run=False, pip_flags=None, install_flags=None):\n \"\"\"\n Install and uninstalls the given sets of modules.\n \"\"\"\n if not to_uninstall and not to_install:\n click.echo(\"Everything up-to-date\")\n\n if pip_flags is None:\n pip_flags = []\n\n if not verbose:\n pip_flags += ['-q']\n\n if os.environ.get('VIRTUAL_ENV'):\n # find pip via PATH\n pip = 'pip'\n else:\n # find pip in same directory as pip-sync entry-point script\n pip = os.path.join(os.path.dirname(os.path.abspath(sys.argv[0])), 'pip')\n\n if to_uninstall:\n if dry_run:\n click.echo(\"Would uninstall:\")\n for pkg in to_uninstall:\n click.echo(\" {}\".format(pkg))\n else:\n check_call([pip, 'uninstall', '-y'] + pip_flags + sorted(to_uninstall))\n\n if to_install:\n if install_flags is None:\n install_flags = []\n if dry_run:\n click.echo(\"Would install:\")\n for ireq in to_install:\n click.echo(\" {}\".format(format_requirement(ireq)))\n else:\n package_args = []\n for ireq in sorted(to_install):\n if ireq.editable:\n package_args.extend(['-e', str(ireq.link or ireq.req)])\n else:\n package_args.append(str(ireq.req))\n check_call([pip, 'install'] + pip_flags + install_flags + package_args)\n return 0\n", "path": "piptools/sync.py"}]} | 2,589 | 463 |
gh_patches_debug_11110 | rasdani/github-patches | git_diff | apluslms__a-plus-1301 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The Cancel button in Add Deadline Deviations behaves surprisingly
Just a detail:
If I go to a student’s page, there are now links there to add deadline deviations for that particular student. (This is super-useful.) 
Now say I click one of those buttons but then wish to cancel. There’s a cancel button on the add deviations page, but it doesn’t actually cancel my action in the sense of taking me back to where I was. It instead sends me to the list of all deviations (a page which, in the case of the O1 course, either crashes with a Bad Gateway or takes approximately forever to load). 
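One way this could work, sketched here with hypothetical names (a `previous` query parameter carried by the link, and a `cancel_action` URL handed to the template); this is only an illustration of the idea, not necessarily how the project structures its views:

``` python
from typing import Any

from django.views.generic.edit import FormView


class AddDeviationsPageView(FormView):  # hypothetical view name, for illustration
    def get_context_data(self, **kwargs: Any) -> dict:
        context = super().get_context_data(**kwargs)
        # If the linking page said where we came from, make Cancel go back there;
        # otherwise fall back to the generic deviation list.
        context['cancel_action'] = (
            self.request.GET.get('previous') or self.fallback_cancel_url()
        )
        return context

    def fallback_cancel_url(self) -> str:
        # Placeholder: a real view would return the course's deviation list URL.
        return '/deviations/'
```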
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `deviations/viewbase.py`
Content:
```
1 from itertools import groupby
2 from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Type
3
4 from django.db import models
5 from django.http import HttpRequest, HttpResponse
6 from django.contrib import messages
7 from django import forms
8 from django.utils.text import format_lazy
9 from django.utils.translation import gettext_lazy as _, ngettext
10
11 from course.models import CourseModule, UserTag
12 from course.viewbase import CourseInstanceMixin, CourseInstanceBaseView
13 from deviations.models import SubmissionRuleDeviation
14 from lib.helpers import is_ajax
15 from lib.viewbase import BaseFormView, BaseRedirectView
16 from authorization.permissions import ACCESS
17 from exercise.models import BaseExercise
18 from userprofile.models import UserProfile
19
20
21 class ListDeviationsView(CourseInstanceBaseView):
22 access_mode = ACCESS.TEACHER
23 deviation_model: Type[SubmissionRuleDeviation]
24
25 def get_common_objects(self) -> None:
26 super().get_common_objects()
27 all_deviations = self.deviation_model.objects.filter(
28 exercise__course_module__course_instance=self.instance
29 )
30 self.deviation_groups = get_deviation_groups(all_deviations)
31 self.note("deviation_groups")
32
33
34 class AddDeviationsView(CourseInstanceMixin, BaseFormView):
35 access_mode = ACCESS.TEACHER
36 deviation_model: Type[SubmissionRuleDeviation]
37 session_key: str
38
39 def get_form_kwargs(self) -> Dict[str, Any]:
40 kwargs = super().get_form_kwargs()
41 kwargs["instance"] = self.instance
42 return kwargs
43
44 def get_initial_get_param_spec(self) -> Dict[str, Optional[Callable[[str], Any]]]:
45 def list_arg(arg):
46 return arg.split(",")
47
48 spec = super().get_initial_get_param_spec()
49 spec.update({
50 "module": list_arg,
51 "exercise": list_arg,
52 "submitter": list_arg,
53 "submitter_tag": list_arg,
54 })
55 return spec
56
57 def form_valid(self, form: forms.BaseForm) -> HttpResponse:
58 exercises = get_exercises(form.cleaned_data)
59 submitters = get_submitters(form.cleaned_data)
60 existing_deviations = self.deviation_model.objects.filter(
61 exercise__in=exercises,
62 submitter__in=submitters,
63 )
64
65 if existing_deviations:
66 # Some deviations already existed. Use OverrideDeviationsView to
67 # confirm which ones the user wants to override. Store the form
68 # values in the current session, so they can be used afterwards.
69 self.success_url = self.deviation_model.get_override_url(self.instance)
70 self.request.session[self.session_key] = self.serialize_session_data(form.cleaned_data)
71 else:
72 self.success_url = self.deviation_model.get_list_url(self.instance)
73 for exercise in exercises:
74 for submitter in submitters:
75 new_deviation = self.deviation_model(
76 exercise=exercise,
77 submitter=submitter,
78 granter=self.request.user.userprofile,
79 )
80 new_deviation.update_by_form(form.cleaned_data)
81 new_deviation.save()
82
83 return super().form_valid(form)
84
85 def serialize_session_data(self, form_data: Dict[str, Any]) -> Dict[str, Any]:
86 """
87 Convert input form data into serializable values that can be stored in
88 the session cache.
89 """
90 result = {}
91 for key in ('exercise', 'module', 'submitter', 'submitter_tag'):
92 result[key] = [i.id for i in form_data.get(key, [])]
93 return result
94
95
96 class OverrideDeviationsView(CourseInstanceMixin, BaseFormView):
97 access_mode = ACCESS.TEACHER
98 # form_class is not really used, but it is required by the FormView.
99 # The form contains only checkboxes and the user input is validated in
100 # the form_valid method. The form HTML is manually written in the template.
101 form_class = forms.Form
102 deviation_model: Type[SubmissionRuleDeviation]
103 session_key: str
104
105 def get_success_url(self) -> str:
106 return self.deviation_model.get_list_url(self.instance)
107
108 def get_common_objects(self) -> None:
109 super().get_common_objects()
110 self.session_data = self.deserialize_session_data(self.request.session[self.session_key])
111 self.exercises = get_exercises(self.session_data)
112 self.submitters = get_submitters(self.session_data)
113 self.existing_deviations = self.deviation_model.objects.filter(
114 exercise__in=self.exercises,
115 submitter__in=self.submitters,
116 )
117 self.deviation_groups = get_deviation_groups(self.existing_deviations)
118 self.note("session_data", "exercises", "submitters", "existing_deviations", "deviation_groups")
119
120 def form_valid(self, form: forms.BaseForm) -> HttpResponse:
121 override_deviations = set()
122 deviation_list = self.request.POST.getlist('override')
123 for id_pair in deviation_list:
124 try:
125 submitter_id, exercise_id = id_pair.split('.')
126 submitter_id, exercise_id = int(submitter_id), int(exercise_id)
127 override_deviations.add((submitter_id, exercise_id))
128 except ValueError:
129 messages.error(self.request,
130 format_lazy(
131 _("INVALID_EXERCISE_OR_SUBMITTER_ID -- {id}"),
132 id=id_pair,
133 )
134 )
135 continue
136
137 existing_deviations = {(d.submitter_id, d.exercise_id): d for d in self.existing_deviations}
138
139 for exercise in self.exercises:
140 for submitter in self.submitters:
141 existing_deviation = existing_deviations.get((submitter.id, exercise.id))
142 if existing_deviation is not None:
143 if (submitter.id, exercise.id) in override_deviations:
144 existing_deviation.granter = self.request.user.userprofile
145 existing_deviation.update_by_form(self.session_data)
146 existing_deviation.save()
147 else:
148 new_deviation = self.deviation_model(
149 exercise=exercise,
150 submitter=submitter,
151 granter=self.request.user.userprofile,
152 )
153 new_deviation.update_by_form(self.session_data)
154 new_deviation.save()
155
156 del self.request.session[self.session_key]
157 return super().form_valid(form)
158
159 def deserialize_session_data(self, session_data: Dict[str, Any]) -> Dict[str, Any]:
160 """
161 Convert serialized session data back into its original representation.
162 """
163 result = {
164 'exercise': BaseExercise.objects.filter(id__in=session_data.get('exercise', [])),
165 'module': CourseModule.objects.filter(id__in=session_data.get('module', [])),
166 'submitter': UserProfile.objects.filter(id__in=session_data.get('submitter', [])),
167 'submitter_tag': UserTag.objects.filter(id__in=session_data.get('submitter_tag', [])),
168 }
169 return result
170
171
172 class RemoveDeviationsByIDView(CourseInstanceMixin, BaseRedirectView):
173 access_mode = ACCESS.TEACHER
174 deviation_model: Type[SubmissionRuleDeviation]
175
176 def post(self, request: HttpRequest, *args: Any, **kwargs: Any) -> HttpResponse:
177 deviations = self.deviation_model.objects.filter(
178 id__in=request.POST.getlist("id"),
179 exercise__course_module__course_instance=self.instance,
180 )
181 for deviation in deviations:
182 deviation.delete()
183 if is_ajax(request):
184 return HttpResponse(status=204)
185 return self.redirect(self.deviation_model.get_list_url(self.instance))
186
187
188 class RemoveDeviationsView(CourseInstanceMixin, BaseFormView):
189 access_mode = ACCESS.TEACHER
190 deviation_model: Type[SubmissionRuleDeviation]
191
192 def get_form_kwargs(self) -> Dict[str, Any]:
193 kwargs = super().get_form_kwargs()
194 kwargs["instance"] = self.instance
195 return kwargs
196
197 def get_success_url(self) -> str:
198 return self.deviation_model.get_list_url(self.instance)
199
200 def form_valid(self, form: forms.BaseForm) -> HttpResponse:
201 number_of_removed = 0
202 deviations = self.deviation_model.objects.filter(
203 exercise__in=get_exercises(form.cleaned_data),
204 submitter__in=get_submitters(form.cleaned_data),
205 )
206 for deviation in deviations:
207 deviation.delete()
208 number_of_removed += 1
209 if number_of_removed == 0:
210 messages.warning(self.request, _("NOTHING_REMOVED"))
211 else:
212 message = ngettext(
213 'REMOVED_DEVIATION -- {count}',
214 'REMOVED_DEVIATIONS -- {count}',
215 number_of_removed,
216 ).format(count=number_of_removed)
217 messages.info(self.request, message)
218 return super().form_valid(form)
219
220
221 # pylint: disable-next=too-many-locals
222 def get_deviation_groups(
223 all_deviations: models.QuerySet[SubmissionRuleDeviation],
224 ) -> Iterable[Tuple[List[SubmissionRuleDeviation], bool, Optional[str]]]:
225 """
226 Group the deviations by user and module.
227
228 Grouping condition: deviations can be grouped if the user has been
229 granted the same deviation (based on the `is_equal` method) for all
230 exercises in the module.
231
232 The returned tuples contain the following values:
233 1. List of deviations with the same user and module.
234 2. Boolean representing whether the deviations in the list can be
235 displayed as a group (i.e. the grouping condition is satisfied).
236 3. An id that uniquely identifies the group of deviations.
237 """
238 # Find the number of exercises in each module.
239 course_instances = (
240 all_deviations
241 .values_list('exercise__course_module__course_instance', flat=True)
242 .distinct()
243 )
244 exercise_counts = (
245 BaseExercise.objects.filter(
246 course_module__course_instance__in=course_instances
247 )
248 .order_by()
249 .values('course_module_id')
250 .annotate(count=models.Count('*'))
251 )
252 exercise_count_by_module = {row['course_module_id']: row['count'] for row in exercise_counts}
253
254 ordered_deviations = (
255 all_deviations
256 .select_related(
257 'submitter', 'submitter__user',
258 'granter', 'granter__user',
259 'exercise', 'exercise__course_module',
260 'exercise__course_module__course_instance',
261 )
262 .defer(
263 'exercise__exercise_info',
264 'exercise__description',
265 'exercise__course_module__course_instance__description',
266 )
267 # parent is prefetched because there may be multiple ancestors, and
268 # they are needed for building the deviation's URL.
269 .prefetch_related('exercise__parent')
270 .order_by('submitter', 'exercise__course_module')
271 )
272
273 deviation_groups = groupby(
274 ordered_deviations,
275 lambda obj: (obj.submitter, obj.exercise.course_module),
276 )
277 for (_submitter, module), deviations_iter in deviation_groups:
278 deviations = list(deviations_iter)
279 can_group = True
280 show_granter = True
281 if len(deviations) < 2:
282 # Group must have at least 2 deviations.
283 can_group = False
284 else:
285 group_exercises = set()
286 # Check that the same deviation has been granted for all exercises.
287 first_granter = deviations[0].granter.id
288 for deviation in deviations:
289 if not deviation.is_groupable(deviations[0]):
290 can_group = False
291 if not show_granter:
292 break
293 if deviation.granter.id != first_granter:
294 show_granter = False
295 if not can_group:
296 break
297 group_exercises.add(deviation.exercise.id)
298 else:
299 if len(group_exercises) != exercise_count_by_module[module.id]:
300 # The number of exercises that have deviations doesn't
301 # match the number of exercises in the module, so there
302 # are some exercises that don't have a deviation.
303 can_group = False
304 group_id = f"{deviations[0].submitter.id}.{module.id}" if can_group else None
305 yield (deviations, can_group, group_id, show_granter)
306
307
308 def get_exercises(form_data: Dict[str, Any]) -> models.QuerySet[BaseExercise]:
309 """
310 Get the exercises that match the input form's `exercise` and `module`
311 fields.
312 """
313 return BaseExercise.objects.filter(
314 models.Q(id__in=form_data.get('exercise', []))
315 | models.Q(course_module__in=form_data.get('module', []))
316 )
317
318
319 def get_submitters(form_data: Dict[str, Any]) -> models.QuerySet[UserProfile]:
320 """
321 Get the submitters that match the input form's `submitter` and
322 `submitter_tag` fields.
323 """
324 return UserProfile.objects.filter(
325 models.Q(id__in=form_data.get('submitter', []))
326 | models.Q(taggings__tag__in=form_data.get('submitter_tag', []))
327 ).distinct()
328
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/deviations/viewbase.py b/deviations/viewbase.py
--- a/deviations/viewbase.py
+++ b/deviations/viewbase.py
@@ -36,6 +36,14 @@
deviation_model: Type[SubmissionRuleDeviation]
session_key: str
+ def get_context_data(self, **kwargs: Any) -> dict:
+ context = super().get_context_data(**kwargs)
+ if self.request.GET.get('previous'):
+ context.update({'cancel_action': self.request.GET.get('previous')})
+ else:
+ context.update({'cancel_action': self.instance.get_url('deviations-list-dl')})
+ return context
+
def get_form_kwargs(self) -> Dict[str, Any]:
kwargs = super().get_form_kwargs()
kwargs["instance"] = self.instance
| {"golden_diff": "diff --git a/deviations/viewbase.py b/deviations/viewbase.py\n--- a/deviations/viewbase.py\n+++ b/deviations/viewbase.py\n@@ -36,6 +36,14 @@\n deviation_model: Type[SubmissionRuleDeviation]\n session_key: str\n \n+ def get_context_data(self, **kwargs: Any) -> dict:\n+ context = super().get_context_data(**kwargs)\n+ if self.request.GET.get('previous'):\n+ context.update({'cancel_action': self.request.GET.get('previous')})\n+ else:\n+ context.update({'cancel_action': self.instance.get_url('deviations-list-dl')})\n+ return context\n+\n def get_form_kwargs(self) -> Dict[str, Any]:\n kwargs = super().get_form_kwargs()\n kwargs[\"instance\"] = self.instance\n", "issue": "The Cancel button in Add Deadline Deviations behaves surprisingly\nJust a detail:\r\n\r\nIf I go to a student\u2019s page, there are now links ther to add deadline deviations for that particular student. (This is super-useful.) \r\n\r\nNow say I click one of those buttons but then wish to cancel. There\u2019s the cancel button there on the add deviations page, but that doesn\u2019t actually cancel my action in the sense of taking me back where I was. It instead sends me to the list of all deviations (which page, in the case of the O1 course, either crashes with a Bad Gateway or takes approximately forever to load). \n", "before_files": [{"content": "from itertools import groupby\nfrom typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Type\n\nfrom django.db import models\nfrom django.http import HttpRequest, HttpResponse\nfrom django.contrib import messages\nfrom django import forms\nfrom django.utils.text import format_lazy\nfrom django.utils.translation import gettext_lazy as _, ngettext\n\nfrom course.models import CourseModule, UserTag\nfrom course.viewbase import CourseInstanceMixin, CourseInstanceBaseView\nfrom deviations.models import SubmissionRuleDeviation\nfrom lib.helpers import is_ajax\nfrom lib.viewbase import BaseFormView, BaseRedirectView\nfrom authorization.permissions import ACCESS\nfrom exercise.models import BaseExercise\nfrom userprofile.models import UserProfile\n\n\nclass ListDeviationsView(CourseInstanceBaseView):\n access_mode = ACCESS.TEACHER\n deviation_model: Type[SubmissionRuleDeviation]\n\n def get_common_objects(self) -> None:\n super().get_common_objects()\n all_deviations = self.deviation_model.objects.filter(\n exercise__course_module__course_instance=self.instance\n )\n self.deviation_groups = get_deviation_groups(all_deviations)\n self.note(\"deviation_groups\")\n\n\nclass AddDeviationsView(CourseInstanceMixin, BaseFormView):\n access_mode = ACCESS.TEACHER\n deviation_model: Type[SubmissionRuleDeviation]\n session_key: str\n\n def get_form_kwargs(self) -> Dict[str, Any]:\n kwargs = super().get_form_kwargs()\n kwargs[\"instance\"] = self.instance\n return kwargs\n\n def get_initial_get_param_spec(self) -> Dict[str, Optional[Callable[[str], Any]]]:\n def list_arg(arg):\n return arg.split(\",\")\n\n spec = super().get_initial_get_param_spec()\n spec.update({\n \"module\": list_arg,\n \"exercise\": list_arg,\n \"submitter\": list_arg,\n \"submitter_tag\": list_arg,\n })\n return spec\n\n def form_valid(self, form: forms.BaseForm) -> HttpResponse:\n exercises = get_exercises(form.cleaned_data)\n submitters = get_submitters(form.cleaned_data)\n existing_deviations = self.deviation_model.objects.filter(\n exercise__in=exercises,\n submitter__in=submitters,\n )\n\n if existing_deviations:\n # Some deviations already existed. 
Use OverrideDeviationsView to\n # confirm which ones the user wants to override. Store the form\n # values in the current session, so they can be used afterwards.\n self.success_url = self.deviation_model.get_override_url(self.instance)\n self.request.session[self.session_key] = self.serialize_session_data(form.cleaned_data)\n else:\n self.success_url = self.deviation_model.get_list_url(self.instance)\n for exercise in exercises:\n for submitter in submitters:\n new_deviation = self.deviation_model(\n exercise=exercise,\n submitter=submitter,\n granter=self.request.user.userprofile,\n )\n new_deviation.update_by_form(form.cleaned_data)\n new_deviation.save()\n\n return super().form_valid(form)\n\n def serialize_session_data(self, form_data: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"\n Convert input form data into serializable values that can be stored in\n the session cache.\n \"\"\"\n result = {}\n for key in ('exercise', 'module', 'submitter', 'submitter_tag'):\n result[key] = [i.id for i in form_data.get(key, [])]\n return result\n\n\nclass OverrideDeviationsView(CourseInstanceMixin, BaseFormView):\n access_mode = ACCESS.TEACHER\n # form_class is not really used, but it is required by the FormView.\n # The form contains only checkboxes and the user input is validated in\n # the form_valid method. The form HTML is manually written in the template.\n form_class = forms.Form\n deviation_model: Type[SubmissionRuleDeviation]\n session_key: str\n\n def get_success_url(self) -> str:\n return self.deviation_model.get_list_url(self.instance)\n\n def get_common_objects(self) -> None:\n super().get_common_objects()\n self.session_data = self.deserialize_session_data(self.request.session[self.session_key])\n self.exercises = get_exercises(self.session_data)\n self.submitters = get_submitters(self.session_data)\n self.existing_deviations = self.deviation_model.objects.filter(\n exercise__in=self.exercises,\n submitter__in=self.submitters,\n )\n self.deviation_groups = get_deviation_groups(self.existing_deviations)\n self.note(\"session_data\", \"exercises\", \"submitters\", \"existing_deviations\", \"deviation_groups\")\n\n def form_valid(self, form: forms.BaseForm) -> HttpResponse:\n override_deviations = set()\n deviation_list = self.request.POST.getlist('override')\n for id_pair in deviation_list:\n try:\n submitter_id, exercise_id = id_pair.split('.')\n submitter_id, exercise_id = int(submitter_id), int(exercise_id)\n override_deviations.add((submitter_id, exercise_id))\n except ValueError:\n messages.error(self.request,\n format_lazy(\n _(\"INVALID_EXERCISE_OR_SUBMITTER_ID -- {id}\"),\n id=id_pair,\n )\n )\n continue\n\n existing_deviations = {(d.submitter_id, d.exercise_id): d for d in self.existing_deviations}\n\n for exercise in self.exercises:\n for submitter in self.submitters:\n existing_deviation = existing_deviations.get((submitter.id, exercise.id))\n if existing_deviation is not None:\n if (submitter.id, exercise.id) in override_deviations:\n existing_deviation.granter = self.request.user.userprofile\n existing_deviation.update_by_form(self.session_data)\n existing_deviation.save()\n else:\n new_deviation = self.deviation_model(\n exercise=exercise,\n submitter=submitter,\n granter=self.request.user.userprofile,\n )\n new_deviation.update_by_form(self.session_data)\n new_deviation.save()\n\n del self.request.session[self.session_key]\n return super().form_valid(form)\n\n def deserialize_session_data(self, session_data: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"\n Convert serialized 
session data back into its original representation.\n \"\"\"\n result = {\n 'exercise': BaseExercise.objects.filter(id__in=session_data.get('exercise', [])),\n 'module': CourseModule.objects.filter(id__in=session_data.get('module', [])),\n 'submitter': UserProfile.objects.filter(id__in=session_data.get('submitter', [])),\n 'submitter_tag': UserTag.objects.filter(id__in=session_data.get('submitter_tag', [])),\n }\n return result\n\n\nclass RemoveDeviationsByIDView(CourseInstanceMixin, BaseRedirectView):\n access_mode = ACCESS.TEACHER\n deviation_model: Type[SubmissionRuleDeviation]\n\n def post(self, request: HttpRequest, *args: Any, **kwargs: Any) -> HttpResponse:\n deviations = self.deviation_model.objects.filter(\n id__in=request.POST.getlist(\"id\"),\n exercise__course_module__course_instance=self.instance,\n )\n for deviation in deviations:\n deviation.delete()\n if is_ajax(request):\n return HttpResponse(status=204)\n return self.redirect(self.deviation_model.get_list_url(self.instance))\n\n\nclass RemoveDeviationsView(CourseInstanceMixin, BaseFormView):\n access_mode = ACCESS.TEACHER\n deviation_model: Type[SubmissionRuleDeviation]\n\n def get_form_kwargs(self) -> Dict[str, Any]:\n kwargs = super().get_form_kwargs()\n kwargs[\"instance\"] = self.instance\n return kwargs\n\n def get_success_url(self) -> str:\n return self.deviation_model.get_list_url(self.instance)\n\n def form_valid(self, form: forms.BaseForm) -> HttpResponse:\n number_of_removed = 0\n deviations = self.deviation_model.objects.filter(\n exercise__in=get_exercises(form.cleaned_data),\n submitter__in=get_submitters(form.cleaned_data),\n )\n for deviation in deviations:\n deviation.delete()\n number_of_removed += 1\n if number_of_removed == 0:\n messages.warning(self.request, _(\"NOTHING_REMOVED\"))\n else:\n message = ngettext(\n 'REMOVED_DEVIATION -- {count}',\n 'REMOVED_DEVIATIONS -- {count}',\n number_of_removed,\n ).format(count=number_of_removed)\n messages.info(self.request, message)\n return super().form_valid(form)\n\n\n# pylint: disable-next=too-many-locals\ndef get_deviation_groups(\n all_deviations: models.QuerySet[SubmissionRuleDeviation],\n ) -> Iterable[Tuple[List[SubmissionRuleDeviation], bool, Optional[str]]]:\n \"\"\"\n Group the deviations by user and module.\n\n Grouping condition: deviations can be grouped if the user has been\n granted the same deviation (based on the `is_equal` method) for all\n exercises in the module.\n\n The returned tuples contain the following values:\n 1. List of deviations with the same user and module.\n 2. Boolean representing whether the deviations in the list can be\n displayed as a group (i.e. the grouping condition is satisfied).\n 3. 
An id that uniquely identifies the group of deviations.\n \"\"\"\n # Find the number of exercises in each module.\n course_instances = (\n all_deviations\n .values_list('exercise__course_module__course_instance', flat=True)\n .distinct()\n )\n exercise_counts = (\n BaseExercise.objects.filter(\n course_module__course_instance__in=course_instances\n )\n .order_by()\n .values('course_module_id')\n .annotate(count=models.Count('*'))\n )\n exercise_count_by_module = {row['course_module_id']: row['count'] for row in exercise_counts}\n\n ordered_deviations = (\n all_deviations\n .select_related(\n 'submitter', 'submitter__user',\n 'granter', 'granter__user',\n 'exercise', 'exercise__course_module',\n 'exercise__course_module__course_instance',\n )\n .defer(\n 'exercise__exercise_info',\n 'exercise__description',\n 'exercise__course_module__course_instance__description',\n )\n # parent is prefetched because there may be multiple ancestors, and\n # they are needed for building the deviation's URL.\n .prefetch_related('exercise__parent')\n .order_by('submitter', 'exercise__course_module')\n )\n\n deviation_groups = groupby(\n ordered_deviations,\n lambda obj: (obj.submitter, obj.exercise.course_module),\n )\n for (_submitter, module), deviations_iter in deviation_groups:\n deviations = list(deviations_iter)\n can_group = True\n show_granter = True\n if len(deviations) < 2:\n # Group must have at least 2 deviations.\n can_group = False\n else:\n group_exercises = set()\n # Check that the same deviation has been granted for all exercises.\n first_granter = deviations[0].granter.id\n for deviation in deviations:\n if not deviation.is_groupable(deviations[0]):\n can_group = False\n if not show_granter:\n break\n if deviation.granter.id != first_granter:\n show_granter = False\n if not can_group:\n break\n group_exercises.add(deviation.exercise.id)\n else:\n if len(group_exercises) != exercise_count_by_module[module.id]:\n # The number of exercises that have deviations doesn't\n # match the number of exercises in the module, so there\n # are some exercises that don't have a deviation.\n can_group = False\n group_id = f\"{deviations[0].submitter.id}.{module.id}\" if can_group else None\n yield (deviations, can_group, group_id, show_granter)\n\n\ndef get_exercises(form_data: Dict[str, Any]) -> models.QuerySet[BaseExercise]:\n \"\"\"\n Get the exercises that match the input form's `exercise` and `module`\n fields.\n \"\"\"\n return BaseExercise.objects.filter(\n models.Q(id__in=form_data.get('exercise', []))\n | models.Q(course_module__in=form_data.get('module', []))\n )\n\n\ndef get_submitters(form_data: Dict[str, Any]) -> models.QuerySet[UserProfile]:\n \"\"\"\n Get the submitters that match the input form's `submitter` and\n `submitter_tag` fields.\n \"\"\"\n return UserProfile.objects.filter(\n models.Q(id__in=form_data.get('submitter', []))\n | models.Q(taggings__tag__in=form_data.get('submitter_tag', []))\n ).distinct()\n", "path": "deviations/viewbase.py"}], "after_files": [{"content": "from itertools import groupby\nfrom typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Type\n\nfrom django.db import models\nfrom django.http import HttpRequest, HttpResponse\nfrom django.contrib import messages\nfrom django import forms\nfrom django.utils.text import format_lazy\nfrom django.utils.translation import gettext_lazy as _, ngettext\n\nfrom course.models import CourseModule, UserTag\nfrom course.viewbase import CourseInstanceMixin, CourseInstanceBaseView\nfrom deviations.models import 
SubmissionRuleDeviation\nfrom lib.helpers import is_ajax\nfrom lib.viewbase import BaseFormView, BaseRedirectView\nfrom authorization.permissions import ACCESS\nfrom exercise.models import BaseExercise\nfrom userprofile.models import UserProfile\n\n\nclass ListDeviationsView(CourseInstanceBaseView):\n access_mode = ACCESS.TEACHER\n deviation_model: Type[SubmissionRuleDeviation]\n\n def get_common_objects(self) -> None:\n super().get_common_objects()\n all_deviations = self.deviation_model.objects.filter(\n exercise__course_module__course_instance=self.instance\n )\n self.deviation_groups = get_deviation_groups(all_deviations)\n self.note(\"deviation_groups\")\n\n\nclass AddDeviationsView(CourseInstanceMixin, BaseFormView):\n access_mode = ACCESS.TEACHER\n deviation_model: Type[SubmissionRuleDeviation]\n session_key: str\n\n def get_context_data(self, **kwargs: Any) -> dict:\n context = super().get_context_data(**kwargs)\n if self.request.GET.get('previous'):\n context.update({'cancel_action': self.request.GET.get('previous')})\n else:\n context.update({'cancel_action': self.instance.get_url('deviations-list-dl')})\n return context\n\n def get_form_kwargs(self) -> Dict[str, Any]:\n kwargs = super().get_form_kwargs()\n kwargs[\"instance\"] = self.instance\n return kwargs\n\n def get_initial_get_param_spec(self) -> Dict[str, Optional[Callable[[str], Any]]]:\n def list_arg(arg):\n return arg.split(\",\")\n\n spec = super().get_initial_get_param_spec()\n spec.update({\n \"module\": list_arg,\n \"exercise\": list_arg,\n \"submitter\": list_arg,\n \"submitter_tag\": list_arg,\n })\n return spec\n\n def form_valid(self, form: forms.BaseForm) -> HttpResponse:\n exercises = get_exercises(form.cleaned_data)\n submitters = get_submitters(form.cleaned_data)\n existing_deviations = self.deviation_model.objects.filter(\n exercise__in=exercises,\n submitter__in=submitters,\n )\n\n if existing_deviations:\n # Some deviations already existed. Use OverrideDeviationsView to\n # confirm which ones the user wants to override. Store the form\n # values in the current session, so they can be used afterwards.\n self.success_url = self.deviation_model.get_override_url(self.instance)\n self.request.session[self.session_key] = self.serialize_session_data(form.cleaned_data)\n else:\n self.success_url = self.deviation_model.get_list_url(self.instance)\n for exercise in exercises:\n for submitter in submitters:\n new_deviation = self.deviation_model(\n exercise=exercise,\n submitter=submitter,\n granter=self.request.user.userprofile,\n )\n new_deviation.update_by_form(form.cleaned_data)\n new_deviation.save()\n\n return super().form_valid(form)\n\n def serialize_session_data(self, form_data: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"\n Convert input form data into serializable values that can be stored in\n the session cache.\n \"\"\"\n result = {}\n for key in ('exercise', 'module', 'submitter', 'submitter_tag'):\n result[key] = [i.id for i in form_data.get(key, [])]\n return result\n\n\nclass OverrideDeviationsView(CourseInstanceMixin, BaseFormView):\n access_mode = ACCESS.TEACHER\n # form_class is not really used, but it is required by the FormView.\n # The form contains only checkboxes and the user input is validated in\n # the form_valid method. 
The form HTML is manually written in the template.\n form_class = forms.Form\n deviation_model: Type[SubmissionRuleDeviation]\n session_key: str\n\n def get_success_url(self) -> str:\n return self.deviation_model.get_list_url(self.instance)\n\n def get_common_objects(self) -> None:\n super().get_common_objects()\n self.session_data = self.deserialize_session_data(self.request.session[self.session_key])\n self.exercises = get_exercises(self.session_data)\n self.submitters = get_submitters(self.session_data)\n self.existing_deviations = self.deviation_model.objects.filter(\n exercise__in=self.exercises,\n submitter__in=self.submitters,\n )\n self.deviation_groups = get_deviation_groups(self.existing_deviations)\n self.note(\"session_data\", \"exercises\", \"submitters\", \"existing_deviations\", \"deviation_groups\")\n\n def form_valid(self, form: forms.BaseForm) -> HttpResponse:\n override_deviations = set()\n deviation_list = self.request.POST.getlist('override')\n for id_pair in deviation_list:\n try:\n submitter_id, exercise_id = id_pair.split('.')\n submitter_id, exercise_id = int(submitter_id), int(exercise_id)\n override_deviations.add((submitter_id, exercise_id))\n except ValueError:\n messages.error(self.request,\n format_lazy(\n _(\"INVALID_EXERCISE_OR_SUBMITTER_ID -- {id}\"),\n id=id_pair,\n )\n )\n continue\n\n existing_deviations = {(d.submitter_id, d.exercise_id): d for d in self.existing_deviations}\n\n for exercise in self.exercises:\n for submitter in self.submitters:\n existing_deviation = existing_deviations.get((submitter.id, exercise.id))\n if existing_deviation is not None:\n if (submitter.id, exercise.id) in override_deviations:\n existing_deviation.granter = self.request.user.userprofile\n existing_deviation.update_by_form(self.session_data)\n existing_deviation.save()\n else:\n new_deviation = self.deviation_model(\n exercise=exercise,\n submitter=submitter,\n granter=self.request.user.userprofile,\n )\n new_deviation.update_by_form(self.session_data)\n new_deviation.save()\n\n del self.request.session[self.session_key]\n return super().form_valid(form)\n\n def deserialize_session_data(self, session_data: Dict[str, Any]) -> Dict[str, Any]:\n \"\"\"\n Convert serialized session data back into its original representation.\n \"\"\"\n result = {\n 'exercise': BaseExercise.objects.filter(id__in=session_data.get('exercise', [])),\n 'module': CourseModule.objects.filter(id__in=session_data.get('module', [])),\n 'submitter': UserProfile.objects.filter(id__in=session_data.get('submitter', [])),\n 'submitter_tag': UserTag.objects.filter(id__in=session_data.get('submitter_tag', [])),\n }\n return result\n\n\nclass RemoveDeviationsByIDView(CourseInstanceMixin, BaseRedirectView):\n access_mode = ACCESS.TEACHER\n deviation_model: Type[SubmissionRuleDeviation]\n\n def post(self, request: HttpRequest, *args: Any, **kwargs: Any) -> HttpResponse:\n deviations = self.deviation_model.objects.filter(\n id__in=request.POST.getlist(\"id\"),\n exercise__course_module__course_instance=self.instance,\n )\n for deviation in deviations:\n deviation.delete()\n if is_ajax(request):\n return HttpResponse(status=204)\n return self.redirect(self.deviation_model.get_list_url(self.instance))\n\n\nclass RemoveDeviationsView(CourseInstanceMixin, BaseFormView):\n access_mode = ACCESS.TEACHER\n deviation_model: Type[SubmissionRuleDeviation]\n\n def get_form_kwargs(self) -> Dict[str, Any]:\n kwargs = super().get_form_kwargs()\n kwargs[\"instance\"] = self.instance\n return kwargs\n\n def 
get_success_url(self) -> str:\n return self.deviation_model.get_list_url(self.instance)\n\n def form_valid(self, form: forms.BaseForm) -> HttpResponse:\n number_of_removed = 0\n deviations = self.deviation_model.objects.filter(\n exercise__in=get_exercises(form.cleaned_data),\n submitter__in=get_submitters(form.cleaned_data),\n )\n for deviation in deviations:\n deviation.delete()\n number_of_removed += 1\n if number_of_removed == 0:\n messages.warning(self.request, _(\"NOTHING_REMOVED\"))\n else:\n message = ngettext(\n 'REMOVED_DEVIATION -- {count}',\n 'REMOVED_DEVIATIONS -- {count}',\n number_of_removed,\n ).format(count=number_of_removed)\n messages.info(self.request, message)\n return super().form_valid(form)\n\n\n# pylint: disable-next=too-many-locals\ndef get_deviation_groups(\n all_deviations: models.QuerySet[SubmissionRuleDeviation],\n ) -> Iterable[Tuple[List[SubmissionRuleDeviation], bool, Optional[str]]]:\n \"\"\"\n Group the deviations by user and module.\n\n Grouping condition: deviations can be grouped if the user has been\n granted the same deviation (based on the `is_equal` method) for all\n exercises in the module.\n\n The returned tuples contain the following values:\n 1. List of deviations with the same user and module.\n 2. Boolean representing whether the deviations in the list can be\n displayed as a group (i.e. the grouping condition is satisfied).\n 3. An id that uniquely identifies the group of deviations.\n \"\"\"\n # Find the number of exercises in each module.\n course_instances = (\n all_deviations\n .values_list('exercise__course_module__course_instance', flat=True)\n .distinct()\n )\n exercise_counts = (\n BaseExercise.objects.filter(\n course_module__course_instance__in=course_instances\n )\n .order_by()\n .values('course_module_id')\n .annotate(count=models.Count('*'))\n )\n exercise_count_by_module = {row['course_module_id']: row['count'] for row in exercise_counts}\n\n ordered_deviations = (\n all_deviations\n .select_related(\n 'submitter', 'submitter__user',\n 'granter', 'granter__user',\n 'exercise', 'exercise__course_module',\n )\n # parent is prefetched because there may be multiple ancestors, and\n # they are needed for building the deviation's URL.\n .prefetch_related('exercise__parent')\n .order_by('submitter', 'exercise__course_module')\n )\n\n deviation_groups = groupby(\n ordered_deviations,\n lambda obj: (obj.submitter, obj.exercise.course_module),\n )\n for (_submitter, module), deviations_iter in deviation_groups:\n deviations = list(deviations_iter)\n can_group = True\n show_granter = True\n if len(deviations) < 2:\n # Group must have at least 2 deviations.\n can_group = False\n else:\n group_exercises = set()\n # Check that the same deviation has been granted for all exercises.\n first_granter = deviations[0].granter.id\n for deviation in deviations:\n if not deviation.is_groupable(deviations[0]):\n can_group = False\n if not show_granter:\n break\n if deviation.granter.id != first_granter:\n show_granter = False\n if not can_group:\n break\n group_exercises.add(deviation.exercise.id)\n else:\n if len(group_exercises) != exercise_count_by_module[module.id]:\n # The number of exercises that have deviations doesn't\n # match the number of exercises in the module, so there\n # are some exercises that don't have a deviation.\n can_group = False\n group_id = f\"{deviations[0].submitter.id}.{module.id}\" if can_group else None\n yield (deviations, can_group, group_id, show_granter)\n\n\ndef get_exercises(form_data: Dict[str, Any]) -> 
models.QuerySet[BaseExercise]:\n \"\"\"\n Get the exercises that match the input form's `exercise` and `module`\n fields.\n \"\"\"\n return BaseExercise.objects.filter(\n models.Q(id__in=form_data.get('exercise', []))\n | models.Q(course_module__in=form_data.get('module', []))\n )\n\n\ndef get_submitters(form_data: Dict[str, Any]) -> models.QuerySet[UserProfile]:\n \"\"\"\n Get the submitters that match the input form's `submitter` and\n `submitter_tag` fields.\n \"\"\"\n return UserProfile.objects.filter(\n models.Q(id__in=form_data.get('submitter', []))\n | models.Q(taggings__tag__in=form_data.get('submitter_tag', []))\n ).distinct()\n", "path": "deviations/viewbase.py"}]} | 4,031 | 177 |
gh_patches_debug_49039 | rasdani/github-patches | git_diff | facebookresearch__hydra-279 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] Documentation inconsistency for `utils.get_original_cwd`
# 🐛 Bug
The tutorial for working directories has a few functions for working with the working directory [see here](https://cli.dev/docs/tutorial/working_directory), but the version of hydra on pip does not have these functions. Additionally, the install instructions do not include instructions on how to install from source (even if that's fairly trivial). The simple solution is to update the wheels on pip. Another alternative would be to note on the installation page that hydra is rapidly developing and suggest that one can install from source directly.
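For context, this is roughly how the mismatch shows up from a user's point of view; a sketch, assuming the tutorial function in question is `utils.get_original_cwd` (the one named in the title):

``` python
import hydra
from hydra import utils

print(hydra.__version__)  # prints "0.10.0" for the PyPI release

# Documented in the working-directory tutorial, but not shipped in the PyPI
# 0.10.0 release, so this line fails (AttributeError) unless hydra is
# installed from current git master.
utils.get_original_cwd()
```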
## System information
- 0.10.0 from pip
- python 3.7
- arch linux
## One more thing...
This is very minor, but the pip version is `0.10.0` and the GitHub master version is also `0.10.0`, yet they are not the same, as evidenced by this issue. You should probably bump the version of git master. Keep up the good work; I think this is a great idea.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hydra/__init__.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 from . import utils
3 from .errors import MissingConfigException
4 from .main import main
5
6 # Source of truth for Hydra's version
7 __version__ = "0.10.0"
8
9 __all__ = ["__version__", "MissingConfigException", "main", "utils"]
10
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hydra/__init__.py b/hydra/__init__.py
--- a/hydra/__init__.py
+++ b/hydra/__init__.py
@@ -4,6 +4,6 @@
from .main import main
# Source of truth for Hydra's version
-__version__ = "0.10.0"
+__version__ = "0.11.0-pre1"
__all__ = ["__version__", "MissingConfigException", "main", "utils"]
| {"golden_diff": "diff --git a/hydra/__init__.py b/hydra/__init__.py\n--- a/hydra/__init__.py\n+++ b/hydra/__init__.py\n@@ -4,6 +4,6 @@\n from .main import main\n \n # Source of truth for Hydra's version\n-__version__ = \"0.10.0\"\n+__version__ = \"0.11.0-pre1\"\n \n __all__ = [\"__version__\", \"MissingConfigException\", \"main\", \"utils\"]\n", "issue": "[Bug] Documentation inconsistency for `utils.get_original_cwd`\n# \ud83d\udc1b Bug\r\n\r\nThe tutorial for working directories has a few commands for setting the working directory [see here](https://cli.dev/docs/tutorial/working_directory), but the version of hydra on pip does not have these functions. Additionally, the install instructions do not include instructions on how to install from source (even if that's fairly trivial). The simple solution is to update the wheels on pip. Another alternative would be to put on the installation page that hydra is rapidly developing and suggest that one can install from source directly.\r\n\r\n## System information\r\n- 0.10.0 from pip\r\n- python 3.7\r\n- arch linux\r\n\r\n## One more thing...\r\nThis is very minor but the pip version is `0.10.0` and the github master version is also `0.10.0`, but they not the same as evidenced by this issue. You should probably bump the version of git master. Keep up the good work, I think this is a great idea.\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nfrom . import utils\nfrom .errors import MissingConfigException\nfrom .main import main\n\n# Source of truth for Hydra's version\n__version__ = \"0.10.0\"\n\n__all__ = [\"__version__\", \"MissingConfigException\", \"main\", \"utils\"]\n", "path": "hydra/__init__.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nfrom . import utils\nfrom .errors import MissingConfigException\nfrom .main import main\n\n# Source of truth for Hydra's version\n__version__ = \"0.11.0-pre1\"\n\n__all__ = [\"__version__\", \"MissingConfigException\", \"main\", \"utils\"]\n", "path": "hydra/__init__.py"}]} | 574 | 114 |
gh_patches_debug_37809 | rasdani/github-patches | git_diff | Pylons__pyramid-2760 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
make pyramid.paster.bootstrap into a context manager
This would just improve the API such that the closer is called automatically and reliably on the user's behalf.
``` python
from pyramid.paster import bootstrap
with bootstrap('development.ini') as env:
req = env['request']
```
This change would also affect `pyramid.scripting.prepare` which is what `bootstrap` uses under the hood to construct the `env`.
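
Until the API itself changes, here is a minimal user-side sketch of the behaviour, wrapping the current dict-returning `bootstrap` and assuming nothing beyond its documented `closer` and `request` keys:

``` python
import contextlib

from pyramid.paster import bootstrap


@contextlib.contextmanager
def bootstrapped(config_uri, request=None, options=None):
    """Yield the bootstrap environment and always invoke its closer."""
    env = bootstrap(config_uri, request=request, options=options)
    try:
        yield env
    finally:
        env['closer']()


# usage
with bootstrapped('development.ini') as env:
    req = env['request']
```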
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyramid/paster.py`
Content:
```
1 import os
2
3 from paste.deploy import (
4 loadapp,
5 appconfig,
6 )
7
8 from pyramid.compat import configparser
9 from logging.config import fileConfig
10 from pyramid.scripting import prepare
11
12 def get_app(config_uri, name=None, options=None, loadapp=loadapp):
13 """ Return the WSGI application named ``name`` in the PasteDeploy
14 config file specified by ``config_uri``.
15
16 ``options``, if passed, should be a dictionary used as variable assignments
17 like ``{'http_port': 8080}``. This is useful if e.g. ``%(http_port)s`` is
18 used in the config file.
19
20 If the ``name`` is None, this will attempt to parse the name from
21 the ``config_uri`` string expecting the format ``inifile#name``.
22 If no name is found, the name will default to "main"."""
23 path, section = _getpathsec(config_uri, name)
24 config_name = 'config:%s' % path
25 here_dir = os.getcwd()
26
27 app = loadapp(
28 config_name,
29 name=section,
30 relative_to=here_dir,
31 global_conf=options)
32
33 return app
34
35 def get_appsettings(config_uri, name=None, options=None, appconfig=appconfig):
36 """ Return a dictionary representing the key/value pairs in an ``app``
37 section within the file represented by ``config_uri``.
38
39 ``options``, if passed, should be a dictionary used as variable assignments
40 like ``{'http_port': 8080}``. This is useful if e.g. ``%(http_port)s`` is
41 used in the config file.
42
43 If the ``name`` is None, this will attempt to parse the name from
44 the ``config_uri`` string expecting the format ``inifile#name``.
45 If no name is found, the name will default to "main"."""
46 path, section = _getpathsec(config_uri, name)
47 config_name = 'config:%s' % path
48 here_dir = os.getcwd()
49 return appconfig(
50 config_name,
51 name=section,
52 relative_to=here_dir,
53 global_conf=options)
54
55 def setup_logging(config_uri, global_conf=None,
56 fileConfig=fileConfig,
57 configparser=configparser):
58 """
59 Set up logging via :func:`logging.config.fileConfig` with the filename
60 specified via ``config_uri`` (a string in the form
61 ``filename#sectionname``).
62
63 ConfigParser defaults are specified for the special ``__file__``
64 and ``here`` variables, similar to PasteDeploy config loading.
65 Extra defaults can optionally be specified as a dict in ``global_conf``.
66 """
67 path, _ = _getpathsec(config_uri, None)
68 parser = configparser.ConfigParser()
69 parser.read([path])
70 if parser.has_section('loggers'):
71 config_file = os.path.abspath(path)
72 full_global_conf = dict(
73 __file__=config_file,
74 here=os.path.dirname(config_file))
75 if global_conf:
76 full_global_conf.update(global_conf)
77 return fileConfig(config_file, full_global_conf)
78
79 def _getpathsec(config_uri, name):
80 if '#' in config_uri:
81 path, section = config_uri.split('#', 1)
82 else:
83 path, section = config_uri, 'main'
84 if name:
85 section = name
86 return path, section
87
88 def bootstrap(config_uri, request=None, options=None):
89 """ Load a WSGI application from the PasteDeploy config file specified
90 by ``config_uri``. The environment will be configured as if it is
91 currently serving ``request``, leaving a natural environment in place
92 to write scripts that can generate URLs and utilize renderers.
93
94 This function returns a dictionary with ``app``, ``root``, ``closer``,
95 ``request``, and ``registry`` keys. ``app`` is the WSGI app loaded
96 (based on the ``config_uri``), ``root`` is the traversal root resource
97 of the Pyramid application, and ``closer`` is a parameterless callback
98 that may be called when your script is complete (it pops a threadlocal
99 stack).
100
101 .. note::
102
103 Most operations within :app:`Pyramid` expect to be invoked within the
104 context of a WSGI request, thus it's important when loading your
105 application to anchor it when executing scripts and other code that is
106 not normally invoked during active WSGI requests.
107
108 .. note::
109
110 For a complex config file containing multiple :app:`Pyramid`
111 applications, this function will setup the environment under the context
112 of the last-loaded :app:`Pyramid` application. You may load a specific
113 application yourself by using the lower-level functions
114 :meth:`pyramid.paster.get_app` and :meth:`pyramid.scripting.prepare` in
115 conjunction with :attr:`pyramid.config.global_registries`.
116
117 ``config_uri`` -- specifies the PasteDeploy config file to use for the
118 interactive shell. The format is ``inifile#name``. If the name is left
119 off, ``main`` will be assumed.
120
121 ``request`` -- specified to anchor the script to a given set of WSGI
122 parameters. For example, most people would want to specify the host,
123 scheme and port such that their script will generate URLs in relation
124 to those parameters. A request with default parameters is constructed
125 for you if none is provided. You can mutate the request's ``environ``
126 later to setup a specific host/port/scheme/etc.
127
128 ``options`` Is passed to get_app for use as variable assignments like
129 {'http_port': 8080} and then use %(http_port)s in the
130 config file.
131
132 See :ref:`writing_a_script` for more information about how to use this
133 function.
134 """
135 app = get_app(config_uri, options=options)
136 env = prepare(request)
137 env['app'] = app
138 return env
139
140
```
Path: `pyramid/scripting.py`
Content:
```
1 from pyramid.config import global_registries
2 from pyramid.exceptions import ConfigurationError
3
4 from pyramid.interfaces import (
5 IRequestFactory,
6 IRootFactory,
7 )
8 from pyramid.request import Request
9 from pyramid.request import apply_request_extensions
10
11 from pyramid.threadlocal import manager as threadlocal_manager
12 from pyramid.traversal import DefaultRootFactory
13
14 def get_root(app, request=None):
15 """ Return a tuple composed of ``(root, closer)`` when provided a
16 :term:`router` instance as the ``app`` argument. The ``root``
17 returned is the application root object. The ``closer`` returned
18 is a callable (accepting no arguments) that should be called when
19 your scripting application is finished using the root.
20
21 ``request`` is passed to the :app:`Pyramid` application root
22 factory to compute the root. If ``request`` is None, a default
23 will be constructed using the registry's :term:`Request Factory`
24 via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.
25 """
26 registry = app.registry
27 if request is None:
28 request = _make_request('/', registry)
29 threadlocals = {'registry':registry, 'request':request}
30 app.threadlocal_manager.push(threadlocals)
31 def closer(request=request): # keep request alive via this function default
32 app.threadlocal_manager.pop()
33 root = app.root_factory(request)
34 return root, closer
35
36 def prepare(request=None, registry=None):
37 """ This function pushes data onto the Pyramid threadlocal stack
38 (request and registry), making those objects 'current'. It
39 returns a dictionary useful for bootstrapping a Pyramid
40 application in a scripting environment.
41
42 ``request`` is passed to the :app:`Pyramid` application root
43 factory to compute the root. If ``request`` is None, a default
44 will be constructed using the registry's :term:`Request Factory`
45 via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.
46
47 If ``registry`` is not supplied, the last registry loaded from
48 :attr:`pyramid.config.global_registries` will be used. If you
49 have loaded more than one :app:`Pyramid` application in the
50 current process, you may not want to use the last registry
51 loaded, thus you can search the ``global_registries`` and supply
52 the appropriate one based on your own criteria.
53
54 The function returns a dictionary composed of ``root``,
55 ``closer``, ``registry``, ``request`` and ``root_factory``. The
56 ``root`` returned is the application's root resource object. The
57 ``closer`` returned is a callable (accepting no arguments) that
58 should be called when your scripting application is finished
59 using the root. ``registry`` is the registry object passed or
60 the last registry loaded into
61 :attr:`pyramid.config.global_registries` if no registry is passed.
62 ``request`` is the request object passed or the constructed request
63 if no request is passed. ``root_factory`` is the root factory used
64 to construct the root.
65 """
66 if registry is None:
67 registry = getattr(request, 'registry', global_registries.last)
68 if registry is None:
69 raise ConfigurationError('No valid Pyramid applications could be '
70 'found, make sure one has been created '
71 'before trying to activate it.')
72 if request is None:
73 request = _make_request('/', registry)
74 # NB: even though _make_request might have already set registry on
75 # request, we reset it in case someone has passed in their own
76 # request.
77 request.registry = registry
78 threadlocals = {'registry':registry, 'request':request}
79 threadlocal_manager.push(threadlocals)
80 apply_request_extensions(request)
81 def closer():
82 threadlocal_manager.pop()
83 root_factory = registry.queryUtility(IRootFactory,
84 default=DefaultRootFactory)
85 root = root_factory(request)
86 if getattr(request, 'context', None) is None:
87 request.context = root
88 return {'root':root, 'closer':closer, 'registry':registry,
89 'request':request, 'root_factory':root_factory}
90
91 def _make_request(path, registry=None):
92 """ Return a :meth:`pyramid.request.Request` object anchored at a
93 given path. The object returned will be generated from the supplied
94 registry's :term:`Request Factory` using the
95 :meth:`pyramid.interfaces.IRequestFactory.blank` method.
96
97 This request object can be passed to :meth:`pyramid.scripting.get_root`
98 or :meth:`pyramid.scripting.prepare` to initialize an application in
99 preparation for executing a script with a proper environment setup.
100 URLs can then be generated with the object, as well as rendering
101 templates.
102
103 If ``registry`` is not supplied, the last registry loaded from
104 :attr:`pyramid.config.global_registries` will be used. If you have
105 loaded more than one :app:`Pyramid` application in the current
106 process, you may not want to use the last registry loaded, thus
107 you can search the ``global_registries`` and supply the appropriate
108 one based on your own criteria.
109 """
110 if registry is None:
111 registry = global_registries.last
112 request_factory = registry.queryUtility(IRequestFactory, default=Request)
113 request = request_factory.blank(path)
114 request.registry = registry
115 return request
116
```
--- END FILES ---
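Editor's aside (not part of the repository files above): the helpers in `pyramid/scripting.py` are what `bootstrap` builds on, and the change requested here is essentially about making the `closer` easier to call reliably. Below is a minimal sketch of how a maintenance script uses these helpers today; the `development.ini` path is an assumed placeholder, not something defined in this repository.

```python
# Sketch only: pre-patch usage of bootstrap(); the closer must be called by hand.
from pyramid.paster import bootstrap

def main(config_uri='development.ini'):   # placeholder config path
    # bootstrap() returns the prepare() dict ('root', 'closer', 'registry',
    # 'request', 'root_factory') with the loaded WSGI 'app' added.
    env = bootstrap(config_uri)
    try:
        request = env['request']
        root = env['root']
        # ... generate URLs, query the database, render templates, etc. ...
    finally:
        env['closer']()                   # pops the threadlocal stack pushed by prepare()

if __name__ == '__main__':
    main()
```

With the change this task asks for, the `try`/`finally` collapses into `with bootstrap(config_uri) as env: ...`.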
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyramid/paster.py b/pyramid/paster.py
--- a/pyramid/paster.py
+++ b/pyramid/paster.py
@@ -129,8 +129,22 @@
{'http_port': 8080} and then use %(http_port)s in the
config file.
+ This function may be used as a context manager to call the ``closer``
+ automatically:
+
+ .. code-block:: python
+
+ with bootstrap('development.ini') as env:
+ request = env['request']
+ # ...
+
See :ref:`writing_a_script` for more information about how to use this
function.
+
+ .. versionchanged:: 1.8
+
+ Added the ability to use the return value as a context manager.
+
"""
app = get_app(config_uri, options=options)
env = prepare(request)
diff --git a/pyramid/scripting.py b/pyramid/scripting.py
--- a/pyramid/scripting.py
+++ b/pyramid/scripting.py
@@ -56,12 +56,25 @@
``root`` returned is the application's root resource object. The
``closer`` returned is a callable (accepting no arguments) that
should be called when your scripting application is finished
- using the root. ``registry`` is the registry object passed or
- the last registry loaded into
- :attr:`pyramid.config.global_registries` if no registry is passed.
+ using the root. ``registry`` is the resolved registry object.
``request`` is the request object passed or the constructed request
if no request is passed. ``root_factory`` is the root factory used
to construct the root.
+
+ This function may be used as a context manager to call the ``closer``
+ automatically:
+
+ .. code-block:: python
+
+ registry = config.registry
+ with prepare(registry) as env:
+ request = env['request']
+ # ...
+
+ .. versionchanged:: 1.8
+
+ Added the ability to use the return value as a context manager.
+
"""
if registry is None:
registry = getattr(request, 'registry', global_registries.last)
@@ -85,8 +98,20 @@
root = root_factory(request)
if getattr(request, 'context', None) is None:
request.context = root
- return {'root':root, 'closer':closer, 'registry':registry,
- 'request':request, 'root_factory':root_factory}
+ return AppEnvironment(
+ root=root,
+ closer=closer,
+ registry=registry,
+ request=request,
+ root_factory=root_factory,
+ )
+
+class AppEnvironment(dict):
+ def __enter__(self):
+ return self
+
+ def __exit__(self, type, value, traceback):
+ self['closer']()
def _make_request(path, registry=None):
""" Return a :meth:`pyramid.request.Request` object anchored at a
| {"golden_diff": "diff --git a/pyramid/paster.py b/pyramid/paster.py\n--- a/pyramid/paster.py\n+++ b/pyramid/paster.py\n@@ -129,8 +129,22 @@\n {'http_port': 8080} and then use %(http_port)s in the\n config file.\n \n+ This function may be used as a context manager to call the ``closer``\n+ automatically:\n+\n+ .. code-block:: python\n+\n+ with bootstrap('development.ini') as env:\n+ request = env['request']\n+ # ...\n+\n See :ref:`writing_a_script` for more information about how to use this\n function.\n+\n+ .. versionchanged:: 1.8\n+\n+ Added the ability to use the return value as a context manager.\n+\n \"\"\"\n app = get_app(config_uri, options=options)\n env = prepare(request)\ndiff --git a/pyramid/scripting.py b/pyramid/scripting.py\n--- a/pyramid/scripting.py\n+++ b/pyramid/scripting.py\n@@ -56,12 +56,25 @@\n ``root`` returned is the application's root resource object. The\n ``closer`` returned is a callable (accepting no arguments) that\n should be called when your scripting application is finished\n- using the root. ``registry`` is the registry object passed or\n- the last registry loaded into\n- :attr:`pyramid.config.global_registries` if no registry is passed.\n+ using the root. ``registry`` is the resolved registry object.\n ``request`` is the request object passed or the constructed request\n if no request is passed. ``root_factory`` is the root factory used\n to construct the root.\n+\n+ This function may be used as a context manager to call the ``closer``\n+ automatically:\n+\n+ .. code-block:: python\n+\n+ registry = config.registry\n+ with prepare(registry) as env:\n+ request = env['request']\n+ # ...\n+\n+ .. versionchanged:: 1.8\n+\n+ Added the ability to use the return value as a context manager.\n+\n \"\"\"\n if registry is None:\n registry = getattr(request, 'registry', global_registries.last)\n@@ -85,8 +98,20 @@\n root = root_factory(request)\n if getattr(request, 'context', None) is None:\n request.context = root\n- return {'root':root, 'closer':closer, 'registry':registry,\n- 'request':request, 'root_factory':root_factory}\n+ return AppEnvironment(\n+ root=root,\n+ closer=closer,\n+ registry=registry,\n+ request=request,\n+ root_factory=root_factory,\n+ )\n+\n+class AppEnvironment(dict):\n+ def __enter__(self):\n+ return self\n+\n+ def __exit__(self, type, value, traceback):\n+ self['closer']()\n \n def _make_request(path, registry=None):\n \"\"\" Return a :meth:`pyramid.request.Request` object anchored at a\n", "issue": "make pyramid.paster.bootstrap into a context manager\nThis would just improve the API such that users could automatically call the closer in a reliable way.\n\n``` python\nfrom pyramid.paster import bootstrap\n\nwith bootstrap('development.ini') as env:\n req = env['request']\n```\n\nThis change would also affect `pyramid.scripting.prepare` which is what `bootstrap` uses under the hood to construct the `env`.\n\n", "before_files": [{"content": "import os\n\nfrom paste.deploy import (\n loadapp,\n appconfig,\n )\n\nfrom pyramid.compat import configparser\nfrom logging.config import fileConfig\nfrom pyramid.scripting import prepare\n\ndef get_app(config_uri, name=None, options=None, loadapp=loadapp):\n \"\"\" Return the WSGI application named ``name`` in the PasteDeploy\n config file specified by ``config_uri``.\n\n ``options``, if passed, should be a dictionary used as variable assignments\n like ``{'http_port': 8080}``. This is useful if e.g. 
``%(http_port)s`` is\n used in the config file.\n\n If the ``name`` is None, this will attempt to parse the name from\n the ``config_uri`` string expecting the format ``inifile#name``.\n If no name is found, the name will default to \"main\".\"\"\"\n path, section = _getpathsec(config_uri, name)\n config_name = 'config:%s' % path\n here_dir = os.getcwd()\n\n app = loadapp(\n config_name,\n name=section,\n relative_to=here_dir,\n global_conf=options)\n\n return app\n\ndef get_appsettings(config_uri, name=None, options=None, appconfig=appconfig):\n \"\"\" Return a dictionary representing the key/value pairs in an ``app``\n section within the file represented by ``config_uri``.\n\n ``options``, if passed, should be a dictionary used as variable assignments\n like ``{'http_port': 8080}``. This is useful if e.g. ``%(http_port)s`` is\n used in the config file.\n\n If the ``name`` is None, this will attempt to parse the name from\n the ``config_uri`` string expecting the format ``inifile#name``.\n If no name is found, the name will default to \"main\".\"\"\"\n path, section = _getpathsec(config_uri, name)\n config_name = 'config:%s' % path\n here_dir = os.getcwd()\n return appconfig(\n config_name,\n name=section,\n relative_to=here_dir,\n global_conf=options)\n\ndef setup_logging(config_uri, global_conf=None,\n fileConfig=fileConfig,\n configparser=configparser):\n \"\"\"\n Set up logging via :func:`logging.config.fileConfig` with the filename\n specified via ``config_uri`` (a string in the form\n ``filename#sectionname``).\n\n ConfigParser defaults are specified for the special ``__file__``\n and ``here`` variables, similar to PasteDeploy config loading.\n Extra defaults can optionally be specified as a dict in ``global_conf``.\n \"\"\"\n path, _ = _getpathsec(config_uri, None)\n parser = configparser.ConfigParser()\n parser.read([path])\n if parser.has_section('loggers'):\n config_file = os.path.abspath(path)\n full_global_conf = dict(\n __file__=config_file,\n here=os.path.dirname(config_file))\n if global_conf:\n full_global_conf.update(global_conf)\n return fileConfig(config_file, full_global_conf)\n\ndef _getpathsec(config_uri, name):\n if '#' in config_uri:\n path, section = config_uri.split('#', 1)\n else:\n path, section = config_uri, 'main'\n if name:\n section = name\n return path, section\n\ndef bootstrap(config_uri, request=None, options=None):\n \"\"\" Load a WSGI application from the PasteDeploy config file specified\n by ``config_uri``. The environment will be configured as if it is\n currently serving ``request``, leaving a natural environment in place\n to write scripts that can generate URLs and utilize renderers.\n\n This function returns a dictionary with ``app``, ``root``, ``closer``,\n ``request``, and ``registry`` keys. ``app`` is the WSGI app loaded\n (based on the ``config_uri``), ``root`` is the traversal root resource\n of the Pyramid application, and ``closer`` is a parameterless callback\n that may be called when your script is complete (it pops a threadlocal\n stack).\n\n .. note::\n\n Most operations within :app:`Pyramid` expect to be invoked within the\n context of a WSGI request, thus it's important when loading your\n application to anchor it when executing scripts and other code that is\n not normally invoked during active WSGI requests.\n\n .. note::\n\n For a complex config file containing multiple :app:`Pyramid`\n applications, this function will setup the environment under the context\n of the last-loaded :app:`Pyramid` application. 
You may load a specific\n application yourself by using the lower-level functions\n :meth:`pyramid.paster.get_app` and :meth:`pyramid.scripting.prepare` in\n conjunction with :attr:`pyramid.config.global_registries`.\n\n ``config_uri`` -- specifies the PasteDeploy config file to use for the\n interactive shell. The format is ``inifile#name``. If the name is left\n off, ``main`` will be assumed.\n\n ``request`` -- specified to anchor the script to a given set of WSGI\n parameters. For example, most people would want to specify the host,\n scheme and port such that their script will generate URLs in relation\n to those parameters. A request with default parameters is constructed\n for you if none is provided. You can mutate the request's ``environ``\n later to setup a specific host/port/scheme/etc.\n\n ``options`` Is passed to get_app for use as variable assignments like \n {'http_port': 8080} and then use %(http_port)s in the\n config file.\n\n See :ref:`writing_a_script` for more information about how to use this\n function.\n \"\"\"\n app = get_app(config_uri, options=options)\n env = prepare(request)\n env['app'] = app\n return env\n\n", "path": "pyramid/paster.py"}, {"content": "from pyramid.config import global_registries\nfrom pyramid.exceptions import ConfigurationError\n\nfrom pyramid.interfaces import (\n IRequestFactory,\n IRootFactory,\n )\nfrom pyramid.request import Request\nfrom pyramid.request import apply_request_extensions\n\nfrom pyramid.threadlocal import manager as threadlocal_manager\nfrom pyramid.traversal import DefaultRootFactory\n\ndef get_root(app, request=None):\n \"\"\" Return a tuple composed of ``(root, closer)`` when provided a\n :term:`router` instance as the ``app`` argument. The ``root``\n returned is the application root object. The ``closer`` returned\n is a callable (accepting no arguments) that should be called when\n your scripting application is finished using the root.\n\n ``request`` is passed to the :app:`Pyramid` application root\n factory to compute the root. If ``request`` is None, a default\n will be constructed using the registry's :term:`Request Factory`\n via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n \"\"\"\n registry = app.registry\n if request is None:\n request = _make_request('/', registry)\n threadlocals = {'registry':registry, 'request':request}\n app.threadlocal_manager.push(threadlocals)\n def closer(request=request): # keep request alive via this function default\n app.threadlocal_manager.pop()\n root = app.root_factory(request)\n return root, closer\n\ndef prepare(request=None, registry=None):\n \"\"\" This function pushes data onto the Pyramid threadlocal stack\n (request and registry), making those objects 'current'. It\n returns a dictionary useful for bootstrapping a Pyramid\n application in a scripting environment.\n\n ``request`` is passed to the :app:`Pyramid` application root\n factory to compute the root. If ``request`` is None, a default\n will be constructed using the registry's :term:`Request Factory`\n via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n\n If ``registry`` is not supplied, the last registry loaded from\n :attr:`pyramid.config.global_registries` will be used. 
If you\n have loaded more than one :app:`Pyramid` application in the\n current process, you may not want to use the last registry\n loaded, thus you can search the ``global_registries`` and supply\n the appropriate one based on your own criteria.\n\n The function returns a dictionary composed of ``root``,\n ``closer``, ``registry``, ``request`` and ``root_factory``. The\n ``root`` returned is the application's root resource object. The\n ``closer`` returned is a callable (accepting no arguments) that\n should be called when your scripting application is finished\n using the root. ``registry`` is the registry object passed or\n the last registry loaded into\n :attr:`pyramid.config.global_registries` if no registry is passed.\n ``request`` is the request object passed or the constructed request\n if no request is passed. ``root_factory`` is the root factory used\n to construct the root.\n \"\"\"\n if registry is None:\n registry = getattr(request, 'registry', global_registries.last)\n if registry is None:\n raise ConfigurationError('No valid Pyramid applications could be '\n 'found, make sure one has been created '\n 'before trying to activate it.')\n if request is None:\n request = _make_request('/', registry)\n # NB: even though _make_request might have already set registry on\n # request, we reset it in case someone has passed in their own\n # request.\n request.registry = registry \n threadlocals = {'registry':registry, 'request':request}\n threadlocal_manager.push(threadlocals)\n apply_request_extensions(request)\n def closer():\n threadlocal_manager.pop()\n root_factory = registry.queryUtility(IRootFactory,\n default=DefaultRootFactory)\n root = root_factory(request)\n if getattr(request, 'context', None) is None:\n request.context = root\n return {'root':root, 'closer':closer, 'registry':registry,\n 'request':request, 'root_factory':root_factory}\n\ndef _make_request(path, registry=None):\n \"\"\" Return a :meth:`pyramid.request.Request` object anchored at a\n given path. The object returned will be generated from the supplied\n registry's :term:`Request Factory` using the\n :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n\n This request object can be passed to :meth:`pyramid.scripting.get_root`\n or :meth:`pyramid.scripting.prepare` to initialize an application in\n preparation for executing a script with a proper environment setup.\n URLs can then be generated with the object, as well as rendering\n templates.\n\n If ``registry`` is not supplied, the last registry loaded from\n :attr:`pyramid.config.global_registries` will be used. 
If you have\n loaded more than one :app:`Pyramid` application in the current\n process, you may not want to use the last registry loaded, thus\n you can search the ``global_registries`` and supply the appropriate\n one based on your own criteria.\n \"\"\"\n if registry is None:\n registry = global_registries.last\n request_factory = registry.queryUtility(IRequestFactory, default=Request)\n request = request_factory.blank(path)\n request.registry = registry\n return request\n", "path": "pyramid/scripting.py"}], "after_files": [{"content": "import os\n\nfrom paste.deploy import (\n loadapp,\n appconfig,\n )\n\nfrom pyramid.compat import configparser\nfrom logging.config import fileConfig\nfrom pyramid.scripting import prepare\n\ndef get_app(config_uri, name=None, options=None, loadapp=loadapp):\n \"\"\" Return the WSGI application named ``name`` in the PasteDeploy\n config file specified by ``config_uri``.\n\n ``options``, if passed, should be a dictionary used as variable assignments\n like ``{'http_port': 8080}``. This is useful if e.g. ``%(http_port)s`` is\n used in the config file.\n\n If the ``name`` is None, this will attempt to parse the name from\n the ``config_uri`` string expecting the format ``inifile#name``.\n If no name is found, the name will default to \"main\".\"\"\"\n path, section = _getpathsec(config_uri, name)\n config_name = 'config:%s' % path\n here_dir = os.getcwd()\n\n app = loadapp(\n config_name,\n name=section,\n relative_to=here_dir,\n global_conf=options)\n\n return app\n\ndef get_appsettings(config_uri, name=None, options=None, appconfig=appconfig):\n \"\"\" Return a dictionary representing the key/value pairs in an ``app``\n section within the file represented by ``config_uri``.\n\n ``options``, if passed, should be a dictionary used as variable assignments\n like ``{'http_port': 8080}``. This is useful if e.g. ``%(http_port)s`` is\n used in the config file.\n\n If the ``name`` is None, this will attempt to parse the name from\n the ``config_uri`` string expecting the format ``inifile#name``.\n If no name is found, the name will default to \"main\".\"\"\"\n path, section = _getpathsec(config_uri, name)\n config_name = 'config:%s' % path\n here_dir = os.getcwd()\n return appconfig(\n config_name,\n name=section,\n relative_to=here_dir,\n global_conf=options)\n\ndef setup_logging(config_uri, global_conf=None,\n fileConfig=fileConfig,\n configparser=configparser):\n \"\"\"\n Set up logging via :func:`logging.config.fileConfig` with the filename\n specified via ``config_uri`` (a string in the form\n ``filename#sectionname``).\n\n ConfigParser defaults are specified for the special ``__file__``\n and ``here`` variables, similar to PasteDeploy config loading.\n Extra defaults can optionally be specified as a dict in ``global_conf``.\n \"\"\"\n path, _ = _getpathsec(config_uri, None)\n parser = configparser.ConfigParser()\n parser.read([path])\n if parser.has_section('loggers'):\n config_file = os.path.abspath(path)\n full_global_conf = dict(\n __file__=config_file,\n here=os.path.dirname(config_file))\n if global_conf:\n full_global_conf.update(global_conf)\n return fileConfig(config_file, full_global_conf)\n\ndef _getpathsec(config_uri, name):\n if '#' in config_uri:\n path, section = config_uri.split('#', 1)\n else:\n path, section = config_uri, 'main'\n if name:\n section = name\n return path, section\n\ndef bootstrap(config_uri, request=None, options=None):\n \"\"\" Load a WSGI application from the PasteDeploy config file specified\n by ``config_uri``. 
The environment will be configured as if it is\n currently serving ``request``, leaving a natural environment in place\n to write scripts that can generate URLs and utilize renderers.\n\n This function returns a dictionary with ``app``, ``root``, ``closer``,\n ``request``, and ``registry`` keys. ``app`` is the WSGI app loaded\n (based on the ``config_uri``), ``root`` is the traversal root resource\n of the Pyramid application, and ``closer`` is a parameterless callback\n that may be called when your script is complete (it pops a threadlocal\n stack).\n\n .. note::\n\n Most operations within :app:`Pyramid` expect to be invoked within the\n context of a WSGI request, thus it's important when loading your\n application to anchor it when executing scripts and other code that is\n not normally invoked during active WSGI requests.\n\n .. note::\n\n For a complex config file containing multiple :app:`Pyramid`\n applications, this function will setup the environment under the context\n of the last-loaded :app:`Pyramid` application. You may load a specific\n application yourself by using the lower-level functions\n :meth:`pyramid.paster.get_app` and :meth:`pyramid.scripting.prepare` in\n conjunction with :attr:`pyramid.config.global_registries`.\n\n ``config_uri`` -- specifies the PasteDeploy config file to use for the\n interactive shell. The format is ``inifile#name``. If the name is left\n off, ``main`` will be assumed.\n\n ``request`` -- specified to anchor the script to a given set of WSGI\n parameters. For example, most people would want to specify the host,\n scheme and port such that their script will generate URLs in relation\n to those parameters. A request with default parameters is constructed\n for you if none is provided. You can mutate the request's ``environ``\n later to setup a specific host/port/scheme/etc.\n\n ``options`` Is passed to get_app for use as variable assignments like \n {'http_port': 8080} and then use %(http_port)s in the\n config file.\n\n This function may be used as a context manager to call the ``closer``\n automatically:\n\n .. code-block:: python\n\n with bootstrap('development.ini') as env:\n request = env['request']\n # ...\n\n See :ref:`writing_a_script` for more information about how to use this\n function.\n\n .. versionchanged:: 1.8\n\n Added the ability to use the return value as a context manager.\n\n \"\"\"\n app = get_app(config_uri, options=options)\n env = prepare(request)\n env['app'] = app\n return env\n\n", "path": "pyramid/paster.py"}, {"content": "from pyramid.config import global_registries\nfrom pyramid.exceptions import ConfigurationError\n\nfrom pyramid.interfaces import (\n IRequestFactory,\n IRootFactory,\n )\nfrom pyramid.request import Request\nfrom pyramid.request import apply_request_extensions\n\nfrom pyramid.threadlocal import manager as threadlocal_manager\nfrom pyramid.traversal import DefaultRootFactory\n\ndef get_root(app, request=None):\n \"\"\" Return a tuple composed of ``(root, closer)`` when provided a\n :term:`router` instance as the ``app`` argument. The ``root``\n returned is the application root object. The ``closer`` returned\n is a callable (accepting no arguments) that should be called when\n your scripting application is finished using the root.\n\n ``request`` is passed to the :app:`Pyramid` application root\n factory to compute the root. 
If ``request`` is None, a default\n will be constructed using the registry's :term:`Request Factory`\n via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n \"\"\"\n registry = app.registry\n if request is None:\n request = _make_request('/', registry)\n threadlocals = {'registry':registry, 'request':request}\n app.threadlocal_manager.push(threadlocals)\n def closer(request=request): # keep request alive via this function default\n app.threadlocal_manager.pop()\n root = app.root_factory(request)\n return root, closer\n\ndef prepare(request=None, registry=None):\n \"\"\" This function pushes data onto the Pyramid threadlocal stack\n (request and registry), making those objects 'current'. It\n returns a dictionary useful for bootstrapping a Pyramid\n application in a scripting environment.\n\n ``request`` is passed to the :app:`Pyramid` application root\n factory to compute the root. If ``request`` is None, a default\n will be constructed using the registry's :term:`Request Factory`\n via the :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n\n If ``registry`` is not supplied, the last registry loaded from\n :attr:`pyramid.config.global_registries` will be used. If you\n have loaded more than one :app:`Pyramid` application in the\n current process, you may not want to use the last registry\n loaded, thus you can search the ``global_registries`` and supply\n the appropriate one based on your own criteria.\n\n The function returns a dictionary composed of ``root``,\n ``closer``, ``registry``, ``request`` and ``root_factory``. The\n ``root`` returned is the application's root resource object. The\n ``closer`` returned is a callable (accepting no arguments) that\n should be called when your scripting application is finished\n using the root. ``registry`` is the resolved registry object.\n ``request`` is the request object passed or the constructed request\n if no request is passed. ``root_factory`` is the root factory used\n to construct the root.\n\n This function may be used as a context manager to call the ``closer``\n automatically:\n\n .. code-block:: python\n\n registry = config.registry\n with prepare(registry) as env:\n request = env['request']\n # ...\n\n .. 
versionchanged:: 1.8\n\n Added the ability to use the return value as a context manager.\n\n \"\"\"\n if registry is None:\n registry = getattr(request, 'registry', global_registries.last)\n if registry is None:\n raise ConfigurationError('No valid Pyramid applications could be '\n 'found, make sure one has been created '\n 'before trying to activate it.')\n if request is None:\n request = _make_request('/', registry)\n # NB: even though _make_request might have already set registry on\n # request, we reset it in case someone has passed in their own\n # request.\n request.registry = registry \n threadlocals = {'registry':registry, 'request':request}\n threadlocal_manager.push(threadlocals)\n apply_request_extensions(request)\n def closer():\n threadlocal_manager.pop()\n root_factory = registry.queryUtility(IRootFactory,\n default=DefaultRootFactory)\n root = root_factory(request)\n if getattr(request, 'context', None) is None:\n request.context = root\n return AppEnvironment(\n root=root,\n closer=closer,\n registry=registry,\n request=request,\n root_factory=root_factory,\n )\n\nclass AppEnvironment(dict):\n def __enter__(self):\n return self\n\n def __exit__(self, type, value, traceback):\n self['closer']()\n\ndef _make_request(path, registry=None):\n \"\"\" Return a :meth:`pyramid.request.Request` object anchored at a\n given path. The object returned will be generated from the supplied\n registry's :term:`Request Factory` using the\n :meth:`pyramid.interfaces.IRequestFactory.blank` method.\n\n This request object can be passed to :meth:`pyramid.scripting.get_root`\n or :meth:`pyramid.scripting.prepare` to initialize an application in\n preparation for executing a script with a proper environment setup.\n URLs can then be generated with the object, as well as rendering\n templates.\n\n If ``registry`` is not supplied, the last registry loaded from\n :attr:`pyramid.config.global_registries` will be used. If you have\n loaded more than one :app:`Pyramid` application in the current\n process, you may not want to use the last registry loaded, thus\n you can search the ``global_registries`` and supply the appropriate\n one based on your own criteria.\n \"\"\"\n if registry is None:\n registry = global_registries.last\n request_factory = registry.queryUtility(IRequestFactory, default=Request)\n request = request_factory.blank(path)\n request.registry = registry\n return request\n", "path": "pyramid/scripting.py"}]} | 3,393 | 681 |
gh_patches_debug_34543 | rasdani/github-patches | git_diff | UTNkar__moore-154 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Menu Translations
<!-- Do you want to ask a question? Are you looking for support? The system administrator can help you: [email protected] -->
### Description
Not all menu pages are using `translated_title` when being added to the menu.
<!-- Please select the appropriate "topic category"/blue and "issue type"/yellow label -->
--- END ISSUE ---
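Editor's note for context (not part of the original issue): Wagtail's `Page.specific` resolves a base `Page` row to its most specific subclass instance, so a menu tag can read project-specific fields such as `translated_title` without keeping a per-model `hasattr` chain in sync. A hedged sketch of that approach follows; the fallback to the plain `title` is an assumption for illustration, not something the issue prescribes.

```python
# Sketch only: resolve each menu item to its specific page type first.
menuitems = parent.get_children().live().in_menu()
menuitems = [item.specific for item in menuitems]
for item in menuitems:
    # Translated page models are expected to define translated_title;
    # fall back to the generic Wagtail title if a model does not.
    title = getattr(item, 'translated_title', item.title)
```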
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/website/templatetags/site_tags.py`
Content:
```
1 from django import template
2
3 register = template.Library()
4
5
6 @register.simple_tag(takes_context=True)
7 def get_site_root(context):
8 # NB this returns a core.Page, not the implementation-specific model used
9 # so object-comparison to self will return false as objects would differ
10 return context['request'].site.root_page
11
12
13 def has_menu_children(page):
14 return page.get_children().live().in_menu().exists()
15
16
17 # Retrieves the top menu items - the immediate children of the parent page
18 # The has_menu_children method is necessary because the bootstrap menu requires
19 # a dropdown class to be applied to a parent
20 @register.inclusion_tag('tags/menu.html', takes_context=True)
21 def menu_items(context, parent, calling_page=None, sidenav=False):
22 menuitems = parent.get_children().live().in_menu()
23 for menuitem in menuitems:
24 menuitem.show_dropdown = has_menu_children(menuitem)
25 # TODO: There has to be a better alternative!
26 if hasattr(menuitem, 'googleformindex'):
27 menuitem.translated_title = menuitem.googleformindex\
28 .translated_title
29 elif hasattr(menuitem, 'googleformpage'):
30 menuitem.translated_title = menuitem.googleformpage\
31 .translated_title
32 elif hasattr(menuitem, 'homepage'):
33 menuitem.translated_title = menuitem.homepage.translated_title
34 elif hasattr(menuitem, 'recruitmentpage'):
35 menuitem.translated_title = menuitem.recruitmentpage\
36 .translated_title
37 elif hasattr(menuitem, 'newsindexpage'):
38 menuitem.translated_title = menuitem.newsindexpage.translated_title
39 elif hasattr(menuitem, 'newspage'):
40 menuitem.translated_title = menuitem.newspage.translated_title
41 elif hasattr(menuitem, 'webpage'):
42 menuitem.translated_title = menuitem.webpage.translated_title
43 # We don't directly check if calling_page is None since the template
44 # engine can pass an empty string to calling_page
45 # if the variable passed as calling_page does not exist.
46 menuitem.active = (calling_page.url.startswith(menuitem.url)
47 if calling_page else False)
48 return {
49 'calling_page': calling_page,
50 'menuitems': menuitems,
51 'sidenav': sidenav,
52 # required by the pageurl tag that we want to use within this template
53 'request': context['request'],
54 }
55
56
57 # Retrieves the children of the top menu items for the drop downs
58 @register.inclusion_tag('tags/menu_children.html', takes_context=True)
59 def menu_children(context, parent, sidenav=False):
60 children = parent.get_children()
61 children = children.live().in_menu()
62 return {
63 'parent': parent,
64 'children': children,
65 'sidenav': sidenav,
66 # required by the pageurl tag that we want to use within this template
67 'request': context['request'],
68 }
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/website/templatetags/site_tags.py b/website/website/templatetags/site_tags.py
--- a/website/website/templatetags/site_tags.py
+++ b/website/website/templatetags/site_tags.py
@@ -20,26 +20,9 @@
@register.inclusion_tag('tags/menu.html', takes_context=True)
def menu_items(context, parent, calling_page=None, sidenav=False):
menuitems = parent.get_children().live().in_menu()
+ menuitems = [m.specific for m in menuitems]
for menuitem in menuitems:
menuitem.show_dropdown = has_menu_children(menuitem)
- # TODO: There has to be a better alternative!
- if hasattr(menuitem, 'googleformindex'):
- menuitem.translated_title = menuitem.googleformindex\
- .translated_title
- elif hasattr(menuitem, 'googleformpage'):
- menuitem.translated_title = menuitem.googleformpage\
- .translated_title
- elif hasattr(menuitem, 'homepage'):
- menuitem.translated_title = menuitem.homepage.translated_title
- elif hasattr(menuitem, 'recruitmentpage'):
- menuitem.translated_title = menuitem.recruitmentpage\
- .translated_title
- elif hasattr(menuitem, 'newsindexpage'):
- menuitem.translated_title = menuitem.newsindexpage.translated_title
- elif hasattr(menuitem, 'newspage'):
- menuitem.translated_title = menuitem.newspage.translated_title
- elif hasattr(menuitem, 'webpage'):
- menuitem.translated_title = menuitem.webpage.translated_title
# We don't directly check if calling_page is None since the template
# engine can pass an empty string to calling_page
# if the variable passed as calling_page does not exist.
@@ -57,8 +40,8 @@
# Retrieves the children of the top menu items for the drop downs
@register.inclusion_tag('tags/menu_children.html', takes_context=True)
def menu_children(context, parent, sidenav=False):
- children = parent.get_children()
- children = children.live().in_menu()
+ children = parent.get_children().live().in_menu()
+ children = [c.specific for c in children]
return {
'parent': parent,
'children': children,
| {"golden_diff": "diff --git a/website/website/templatetags/site_tags.py b/website/website/templatetags/site_tags.py\n--- a/website/website/templatetags/site_tags.py\n+++ b/website/website/templatetags/site_tags.py\n@@ -20,26 +20,9 @@\n @register.inclusion_tag('tags/menu.html', takes_context=True)\n def menu_items(context, parent, calling_page=None, sidenav=False):\n menuitems = parent.get_children().live().in_menu()\n+ menuitems = [m.specific for m in menuitems]\n for menuitem in menuitems:\n menuitem.show_dropdown = has_menu_children(menuitem)\n- # TODO: There has to be a better alternative!\n- if hasattr(menuitem, 'googleformindex'):\n- menuitem.translated_title = menuitem.googleformindex\\\n- .translated_title\n- elif hasattr(menuitem, 'googleformpage'):\n- menuitem.translated_title = menuitem.googleformpage\\\n- .translated_title\n- elif hasattr(menuitem, 'homepage'):\n- menuitem.translated_title = menuitem.homepage.translated_title\n- elif hasattr(menuitem, 'recruitmentpage'):\n- menuitem.translated_title = menuitem.recruitmentpage\\\n- .translated_title\n- elif hasattr(menuitem, 'newsindexpage'):\n- menuitem.translated_title = menuitem.newsindexpage.translated_title\n- elif hasattr(menuitem, 'newspage'):\n- menuitem.translated_title = menuitem.newspage.translated_title\n- elif hasattr(menuitem, 'webpage'):\n- menuitem.translated_title = menuitem.webpage.translated_title\n # We don't directly check if calling_page is None since the template\n # engine can pass an empty string to calling_page\n # if the variable passed as calling_page does not exist.\n@@ -57,8 +40,8 @@\n # Retrieves the children of the top menu items for the drop downs\n @register.inclusion_tag('tags/menu_children.html', takes_context=True)\n def menu_children(context, parent, sidenav=False):\n- children = parent.get_children()\n- children = children.live().in_menu()\n+ children = parent.get_children().live().in_menu()\n+ children = [c.specific for c in children]\n return {\n 'parent': parent,\n 'children': children,\n", "issue": "Menu Translations\n<!-- Do you want to ask a question? Are you looking for support? 
The system administrator can help you: [email protected] -->\r\n\r\n### Description\r\n\r\nNot all menu pages are using `translated_title` when being added to the menu.\r\n\r\n<!-- Please select the appropriate \"topic category\"/blue and \"issue type\"/yellow label -->\r\n\n", "before_files": [{"content": "from django import template\n\nregister = template.Library()\n\n\[email protected]_tag(takes_context=True)\ndef get_site_root(context):\n # NB this returns a core.Page, not the implementation-specific model used\n # so object-comparison to self will return false as objects would differ\n return context['request'].site.root_page\n\n\ndef has_menu_children(page):\n return page.get_children().live().in_menu().exists()\n\n\n# Retrieves the top menu items - the immediate children of the parent page\n# The has_menu_children method is necessary because the bootstrap menu requires\n# a dropdown class to be applied to a parent\[email protected]_tag('tags/menu.html', takes_context=True)\ndef menu_items(context, parent, calling_page=None, sidenav=False):\n menuitems = parent.get_children().live().in_menu()\n for menuitem in menuitems:\n menuitem.show_dropdown = has_menu_children(menuitem)\n # TODO: There has to be a better alternative!\n if hasattr(menuitem, 'googleformindex'):\n menuitem.translated_title = menuitem.googleformindex\\\n .translated_title\n elif hasattr(menuitem, 'googleformpage'):\n menuitem.translated_title = menuitem.googleformpage\\\n .translated_title\n elif hasattr(menuitem, 'homepage'):\n menuitem.translated_title = menuitem.homepage.translated_title\n elif hasattr(menuitem, 'recruitmentpage'):\n menuitem.translated_title = menuitem.recruitmentpage\\\n .translated_title\n elif hasattr(menuitem, 'newsindexpage'):\n menuitem.translated_title = menuitem.newsindexpage.translated_title\n elif hasattr(menuitem, 'newspage'):\n menuitem.translated_title = menuitem.newspage.translated_title\n elif hasattr(menuitem, 'webpage'):\n menuitem.translated_title = menuitem.webpage.translated_title\n # We don't directly check if calling_page is None since the template\n # engine can pass an empty string to calling_page\n # if the variable passed as calling_page does not exist.\n menuitem.active = (calling_page.url.startswith(menuitem.url)\n if calling_page else False)\n return {\n 'calling_page': calling_page,\n 'menuitems': menuitems,\n 'sidenav': sidenav,\n # required by the pageurl tag that we want to use within this template\n 'request': context['request'],\n }\n\n\n# Retrieves the children of the top menu items for the drop downs\[email protected]_tag('tags/menu_children.html', takes_context=True)\ndef menu_children(context, parent, sidenav=False):\n children = parent.get_children()\n children = children.live().in_menu()\n return {\n 'parent': parent,\n 'children': children,\n 'sidenav': sidenav,\n # required by the pageurl tag that we want to use within this template\n 'request': context['request'],\n }\n", "path": "website/website/templatetags/site_tags.py"}], "after_files": [{"content": "from django import template\n\nregister = template.Library()\n\n\[email protected]_tag(takes_context=True)\ndef get_site_root(context):\n # NB this returns a core.Page, not the implementation-specific model used\n # so object-comparison to self will return false as objects would differ\n return context['request'].site.root_page\n\n\ndef has_menu_children(page):\n return page.get_children().live().in_menu().exists()\n\n\n# Retrieves the top menu items - the immediate children of the parent page\n# The 
has_menu_children method is necessary because the bootstrap menu requires\n# a dropdown class to be applied to a parent\[email protected]_tag('tags/menu.html', takes_context=True)\ndef menu_items(context, parent, calling_page=None, sidenav=False):\n menuitems = parent.get_children().live().in_menu()\n menuitems = [m.specific for m in menuitems]\n for menuitem in menuitems:\n menuitem.show_dropdown = has_menu_children(menuitem)\n # We don't directly check if calling_page is None since the template\n # engine can pass an empty string to calling_page\n # if the variable passed as calling_page does not exist.\n menuitem.active = (calling_page.url.startswith(menuitem.url)\n if calling_page else False)\n return {\n 'calling_page': calling_page,\n 'menuitems': menuitems,\n 'sidenav': sidenav,\n # required by the pageurl tag that we want to use within this template\n 'request': context['request'],\n }\n\n\n# Retrieves the children of the top menu items for the drop downs\[email protected]_tag('tags/menu_children.html', takes_context=True)\ndef menu_children(context, parent, sidenav=False):\n children = parent.get_children().live().in_menu()\n children = [c.specific for c in children]\n return {\n 'parent': parent,\n 'children': children,\n 'sidenav': sidenav,\n # required by the pageurl tag that we want to use within this template\n 'request': context['request'],\n }\n", "path": "website/website/templatetags/site_tags.py"}]} | 1,087 | 522 |
gh_patches_debug_10281 | rasdani/github-patches | git_diff | scikit-hep__awkward-2410 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
String broadcasting is not producing sensible results
### Version of Awkward Array
main
### Description and code to reproduce
As Jim and I discussed today, string broadcasting is currently not producing reliable results for string-string cases, or even string-to-non-string cases. In particular, many broadcasts violate the appearance that strings are atoms:
```python3
>>> import awkward as ak
>>> x = ak._v2.Array([["one", "two"], ["three", "four"]])
>>> ak._v2.broadcast_arrays(x[1:], x[:-1])
ValueError: while calling (from <ipython-input-42-6e4db831c1ff>, line 1)
ak._v2.broadcast_arrays(
arrays = (<Array [['one', 'two']] type='1 * var * string'>, <Array [[...
kwargs = {}
)
Error details: cannot broadcast nested list (in compiled code: https://github.com/scikit-hep/awkward-1.0/blob/1.10.0rc1/src/cpu-kernels/awkward_ListArray_broadcast_tooffsets.cpp#L27)
```
In the following case, broadcasting an Array of strings against an Array of integers produces arrays that have the same structure as though the outer `__array__ = "string"` were missing (i.e. broadcasting happens against the underlying characters array):
```python3
>>> import awkward as ak
>>> x = ak._v2.Array([["one", "two"], ["three", "four"]])
>>> y = ak._v2.Array([[1,2],[3, 4]])
>>> ak._v2.broadcast_arrays(x, y)
[<Array [['one', 'two'], ['three', 'four']] type='2 * var * string'>,
<Array [[[1, 1, 1], [2, 2, 2]], [[...], ...]] type='2 * var * var * int64'>]
```
--- END ISSUE ---
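Editor's note (not part of the original issue): if strings behaved as atoms, the second example would be expected to pair each whole string with its integer instead of broadcasting the integers over the underlying characters. Below is a hedged sketch of the intended semantics plus the documented `depth_limit` escape hatch; the commented expectations are drawn from the issue, not output from a fixed build.

```python
import awkward as ak

x = ak.Array([["one", "two"], ["three", "four"]])   # 2 * var * string
y = ak.Array([[1, 2], [3, 4]])                       # 2 * var * int64

# Expected with strings treated as leaves: both results keep their '2 * var'
# structure, e.g. [['one', 'two'], ['three', 'four']] alongside [[1, 2], [3, 4]].
bx, by = ak.broadcast_arrays(x, y)

# Documented workaround when deeper broadcasting is unwanted: depth_limit caps
# how many dimensions are aligned (1 means no broadcasting at all).
ox, oy = ak.broadcast_arrays(x, y, depth_limit=1)
```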
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/awkward/operations/ak_broadcast_arrays.py`
Content:
```
1 # BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE
2 __all__ = ("broadcast_arrays",)
3 import awkward as ak
4 from awkward._backends.dispatch import backend_of
5 from awkward._backends.numpy import NumpyBackend
6 from awkward._behavior import behavior_of
7 from awkward._connect.numpy import unsupported
8 from awkward._layout import wrap_layout
9 from awkward._nplikes.numpylike import NumpyMetadata
10
11 np = NumpyMetadata.instance()
12 cpu = NumpyBackend.instance()
13
14
15 def broadcast_arrays(
16 *arrays,
17 depth_limit=None,
18 broadcast_parameters_rule="one_to_one",
19 left_broadcast=True,
20 right_broadcast=True,
21 highlevel=True,
22 behavior=None,
23 ):
24 """
25 Args:
26 arrays: Array-like data (anything #ak.to_layout recognizes).
27 depth_limit (None or int, default is None): If None, attempt to fully
28 broadcast the `arrays` to all levels. If an int, limit the number
29 of dimensions that get broadcasted. The minimum value is `1`,
30 for no broadcasting.
31 broadcast_parameters_rule (str): Rule for broadcasting parameters, one of:
32 - `"intersect"`
33 - `"all_or_nothing"`
34 - `"one_to_one"`
35 - `"none"`
36 left_broadcast (bool): If True, follow rules for implicit
37 left-broadcasting, as described below.
38 right_broadcast (bool): If True, follow rules for implicit
39 right-broadcasting, as described below.
40 highlevel (bool, default is True): If True, return an #ak.Array;
41 otherwise, return a low-level #ak.contents.Content subclass.
42 behavior (None or dict): Custom #ak.behavior for the output array, if
43 high-level.
44
45 Like NumPy's
46 [broadcast_arrays](https://docs.scipy.org/doc/numpy/reference/generated/numpy.broadcast_arrays.html)
47 function, this function returns the input `arrays` with enough elements
48 duplicated that they can be combined element-by-element.
49
50 For NumPy arrays, this means that scalars are replaced with arrays with
51 the same scalar value repeated at every element of the array, and regular
52 dimensions are created to increase low-dimensional data into
53 high-dimensional data.
54
55 For example,
56
57 >>> ak.broadcast_arrays(5,
58 ... [1, 2, 3, 4, 5])
59 [<Array [5, 5, 5, 5, 5] type='5 * int64'>,
60 <Array [1, 2, 3, 4, 5] type='5 * int64'>]
61
62 and
63
64 >>> ak.broadcast_arrays(np.array([1, 2, 3]),
65 ... np.array([[0.1, 0.2, 0.3], [10, 20, 30]]))
66 [<Array [[ 1, 2, 3], [ 1, 2, 3]] type='2 * 3 * int64'>,
67 <Array [[0.1, 0.2, 0.3], [10, 20, 30]] type='2 * 3 * float64'>]
68
69 Note that in the second example, when the `3 * int64` array is expanded
70 to match the `2 * 3 * float64` array, it is the deepest dimension that
71 is aligned. If we try to match a `2 * int64` with the `2 * 3 * float64`,
72
73 >>> ak.broadcast_arrays(np.array([1, 2]),
74 ... np.array([[0.1, 0.2, 0.3], [10, 20, 30]]))
75 ValueError: while calling
76 ak.broadcast_arrays(
77 arrays = (array([1, 2]), array([[ 0.1, 0.2, 0.3],
78 [10. , 20....
79 depth_limit = None
80 broadcast_parameters_rule = 'one_to_one'
81 left_broadcast = True
82 right_broadcast = True
83 highlevel = True
84 behavior = None
85 )
86 Error details: cannot broadcast RegularArray of size 2 with RegularArray of size 3
87
88 NumPy has the same behavior: arrays with different numbers of dimensions
89 are aligned to the right before expansion. One can control this by
90 explicitly adding a new axis (reshape to add a dimension of length 1)
91 where the expansion is supposed to take place because a dimension of
92 length 1 can be expanded like a scalar.
93
94 >>> ak.broadcast_arrays(np.array([1, 2])[:, np.newaxis],
95 ... np.array([[0.1, 0.2, 0.3], [10, 20, 30]]))
96 [<Array [[ 1, 1, 1], [ 2, 2, 2]] type='2 * 3 * int64'>,
97 <Array [[0.1, 0.2, 0.3], [10, 20, 30]] type='2 * 3 * float64'>]
98
99 Again, NumPy does the same thing (`np.newaxis` is equal to None, so this
100 trick is often shown with None in the slice-tuple). Where the broadcasting
101 happens can be controlled, but numbers of dimensions that don't match are
102 implicitly aligned to the right (fitting innermost structure, not
103 outermost).
104
105 While that might be an arbitrary decision for rectilinear arrays, it is
106 much more natural for implicit broadcasting to align left for tree-like
107 structures. That is, the root of each data structure should agree and
108 leaves may be duplicated to match. For example,
109
110 >>> ak.broadcast_arrays([ 100, 200, 300],
111 ... [[1.1, 2.2, 3.3], [], [4.4, 5.5]])
112 [<Array [[100, 100, 100], [], [300, 300]] type='3 * var * int64'>,
113 <Array [[1.1, 2.2, 3.3], [], [4.4, 5.5]] type='3 * var * float64'>]
114
115 One typically wants single-item-per-element data to be duplicated to
116 match multiple-items-per-element data. Operations on the broadcasted
117 arrays like
118
119 one_dimensional + nested_lists
120
121 would then have the same effect as the procedural code
122
123 for x, outer in zip(one_dimensional, nested_lists):
124 output = []
125 for inner in outer:
126 output.append(x + inner)
127 yield output
128
129 where `x` has the same value for each `inner` in the inner loop.
130
131 Awkward Array's broadcasting manages to have it both ways by applying the
132 following rules:
133
134 * If all dimensions are regular (i.e. #ak.types.RegularType), like NumPy,
135 implicit broadcasting aligns to the right, like NumPy.
136 * If any dimension is variable (i.e. #ak.types.ListType), which can
137 never be true of NumPy, implicit broadcasting aligns to the left.
138 * Explicit broadcasting with a length-1 regular dimension always
139 broadcasts, like NumPy.
140
141 Thus, it is important to be aware of the distinction between a dimension
142 that is declared to be regular in the type specification and a dimension
143 that is allowed to be variable (even if it happens to have the same length
144 for all elements). This distinction is can be accessed through the
145 #ak.Array.type, but it is lost when converting an array into JSON or
146 Python objects.
147
148 If arrays have the same depth but different lengths of nested
149 lists, attempting to broadcast them together is a broadcasting error.
150
151 >>> one = ak.Array([[[1, 2, 3], [], [4, 5], [6]], [], [[7, 8]]])
152 >>> two = ak.Array([[[1.1, 2.2], [3.3], [4.4], [5.5]], [], [[6.6]]])
153 >>> ak.broadcast_arrays(one, two)
154 ValueError: while calling
155 ak.broadcast_arrays(
156 arrays = (<Array [[[1, 2, 3], [], [4, ...], [6]], ...] type='3 * var ...
157 depth_limit = None
158 broadcast_parameters_rule = 'one_to_one'
159 left_broadcast = True
160 right_broadcast = True
161 highlevel = True
162 behavior = None
163 )
164 Error details: cannot broadcast nested list
165
166 For this, one can set the `depth_limit` to prevent the operation from
167 attempting to broadcast what can't be broadcasted.
168
169 >>> this, that = ak.broadcast_arrays(one, two, depth_limit=1)
170 >>> this.show()
171 [[[1, 2, 3], [], [4, 5], [6]],
172 [],
173 [[7, 8]]]
174 >>> that.show()
175 [[[1.1, 2.2], [3.3], [4.4], [5.5]],
176 [],
177 [[6.6]]]
178 """
179 with ak._errors.OperationErrorContext(
180 "ak.broadcast_arrays",
181 {
182 "arrays": arrays,
183 "depth_limit": depth_limit,
184 "broadcast_parameters_rule": broadcast_parameters_rule,
185 "left_broadcast": left_broadcast,
186 "right_broadcast": right_broadcast,
187 "highlevel": highlevel,
188 "behavior": behavior,
189 },
190 ):
191 return _impl(
192 arrays,
193 depth_limit,
194 broadcast_parameters_rule,
195 left_broadcast,
196 right_broadcast,
197 highlevel,
198 behavior,
199 )
200
201
202 def _impl(
203 arrays,
204 depth_limit,
205 broadcast_parameters_rule,
206 left_broadcast,
207 right_broadcast,
208 highlevel,
209 behavior,
210 ):
211 # Need at least one array!
212 if len(arrays) == 0:
213 return []
214
215 backend = backend_of(*arrays, default=cpu)
216
217 inputs = []
218 for x in arrays:
219 y = ak.operations.to_layout(x, allow_record=True, allow_other=True)
220 if not isinstance(y, (ak.contents.Content, ak.Record)):
221 y = ak.contents.NumpyArray(backend.nplike.asarray([y]))
222 inputs.append(y.to_backend(backend))
223
224 def action(inputs, depth, **kwargs):
225 if depth == depth_limit or all(x.is_numpy for x in inputs):
226 return tuple(inputs)
227 else:
228 return None
229
230 behavior = behavior_of(*arrays, behavior=behavior)
231 out = ak._broadcasting.broadcast_and_apply(
232 inputs,
233 action,
234 behavior,
235 left_broadcast=left_broadcast,
236 right_broadcast=right_broadcast,
237 broadcast_parameters_rule=broadcast_parameters_rule,
238 numpy_to_regular=True,
239 )
240 assert isinstance(out, tuple)
241 return [wrap_layout(x, behavior, highlevel) for x in out]
242
243
244 @ak._connect.numpy.implements("broadcast_arrays")
245 def _nep_18_impl(*args, subok=unsupported):
246 return broadcast_arrays(*args)
247
```
--- END FILES ---
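A quick aside on the docstring in the listing above: the left-versus-right alignment rules are easiest to remember from a pair of tiny calls. This condensed demo only restates the docstring's own examples; the commented results are paraphrased from it rather than re-executed here.

```python
import numpy as np
import awkward as ak

# Regular (NumPy-like) dimensions align to the right, as in NumPy:
ak.broadcast_arrays(np.array([1, 2, 3]),
                    np.array([[0.1, 0.2, 0.3], [10, 20, 30]]))
# -> both results become 2 * 3 arrays

# Variable-length (ListType) dimensions align to the left instead:
ak.broadcast_arrays([100, 200, 300],
                    [[1.1, 2.2, 3.3], [], [4.4, 5.5]])
# -> [[100, 100, 100], [], [300, 300]] alongside the nested lists unchanged
```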
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/awkward/operations/ak_broadcast_arrays.py b/src/awkward/operations/ak_broadcast_arrays.py
--- a/src/awkward/operations/ak_broadcast_arrays.py
+++ b/src/awkward/operations/ak_broadcast_arrays.py
@@ -222,7 +222,14 @@
inputs.append(y.to_backend(backend))
def action(inputs, depth, **kwargs):
- if depth == depth_limit or all(x.is_numpy for x in inputs):
+ # The depth limit is the depth at which we must return, i.e.
+ # the _first_ layout at that depth
+ if depth == depth_limit:
+ return tuple(inputs)
+ # Walk through non-leaf nodes
+ elif all(
+ x.purelist_depth == 1 and not (x.is_option or x.is_indexed) for x in inputs
+ ):
return tuple(inputs)
else:
return None
| {"golden_diff": "diff --git a/src/awkward/operations/ak_broadcast_arrays.py b/src/awkward/operations/ak_broadcast_arrays.py\n--- a/src/awkward/operations/ak_broadcast_arrays.py\n+++ b/src/awkward/operations/ak_broadcast_arrays.py\n@@ -222,7 +222,14 @@\n inputs.append(y.to_backend(backend))\n \n def action(inputs, depth, **kwargs):\n- if depth == depth_limit or all(x.is_numpy for x in inputs):\n+ # The depth limit is the depth at which we must return, i.e.\n+ # the _first_ layout at that depth\n+ if depth == depth_limit:\n+ return tuple(inputs)\n+ # Walk through non-leaf nodes\n+ elif all(\n+ x.purelist_depth == 1 and not (x.is_option or x.is_indexed) for x in inputs\n+ ):\n return tuple(inputs)\n else:\n return None\n", "issue": "String broadcasting is not producing sensible results\n### Version of Awkward Array\r\n\r\nmain\r\n\r\n### Description and code to reproduce\r\n\r\nAs Jim and I discussed today, string broadcasting is currently not producing reliable results for string-string, or even string-non string cases. In particular many broadcasts violate the appearance that strings are atoms:\r\n\r\n```python3\r\n>>> import awkward as ak\r\n>>> x = ak._v2.Array([[\"one\", \"two\"], [\"three\", \"four\"]])\r\n>>> ak._v2.broadcast_arrays(x[1:], x[:-1])\r\nValueError: while calling (from <ipython-input-42-6e4db831c1ff>, line 1)\r\n ak._v2.broadcast_arrays(\r\n arrays = (<Array [['one', 'two']] type='1 * var * string'>, <Array [[...\r\n kwargs = {}\r\n )\r\nError details: cannot broadcast nested list (in compiled code: https://github.com/scikit-hep/awkward-1.0/blob/1.10.0rc1/src/cpu-kernels/awkward_ListArray_broadcast_tooffsets.cpp#L27)\r\n```\r\n\r\nIn this case, broadcasting an Array of strings against an Array of integers produces arrays that have the same structure as though the outer `__array__ = \"string\"` were missing (i.e. broadcasting happens against the underlying characters array):\r\n```python3\r\n>>> import awkward as ak\r\n>>> x = ak._v2.Array([[\"one\", \"two\"], [\"three\", \"four\"]])\r\n>>> y = ak._v2.Array([[1,2],[3, 4]])\r\n>>> ak._v2.broadcast_arrays(x, y)\r\n[<Array [['one', 'two'], ['three', 'four']] type='2 * var * string'>,\r\n <Array [[[1, 1, 1], [2, 2, 2]], [[...], ...]] type='2 * var * var * int64'>]\r\n```\n", "before_files": [{"content": "# BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE\n__all__ = (\"broadcast_arrays\",)\nimport awkward as ak\nfrom awkward._backends.dispatch import backend_of\nfrom awkward._backends.numpy import NumpyBackend\nfrom awkward._behavior import behavior_of\nfrom awkward._connect.numpy import unsupported\nfrom awkward._layout import wrap_layout\nfrom awkward._nplikes.numpylike import NumpyMetadata\n\nnp = NumpyMetadata.instance()\ncpu = NumpyBackend.instance()\n\n\ndef broadcast_arrays(\n *arrays,\n depth_limit=None,\n broadcast_parameters_rule=\"one_to_one\",\n left_broadcast=True,\n right_broadcast=True,\n highlevel=True,\n behavior=None,\n):\n \"\"\"\n Args:\n arrays: Array-like data (anything #ak.to_layout recognizes).\n depth_limit (None or int, default is None): If None, attempt to fully\n broadcast the `arrays` to all levels. If an int, limit the number\n of dimensions that get broadcasted. 
The minimum value is `1`,\n for no broadcasting.\n broadcast_parameters_rule (str): Rule for broadcasting parameters, one of:\n - `\"intersect\"`\n - `\"all_or_nothing\"`\n - `\"one_to_one\"`\n - `\"none\"`\n left_broadcast (bool): If True, follow rules for implicit\n left-broadcasting, as described below.\n right_broadcast (bool): If True, follow rules for implicit\n right-broadcasting, as described below.\n highlevel (bool, default is True): If True, return an #ak.Array;\n otherwise, return a low-level #ak.contents.Content subclass.\n behavior (None or dict): Custom #ak.behavior for the output array, if\n high-level.\n\n Like NumPy's\n [broadcast_arrays](https://docs.scipy.org/doc/numpy/reference/generated/numpy.broadcast_arrays.html)\n function, this function returns the input `arrays` with enough elements\n duplicated that they can be combined element-by-element.\n\n For NumPy arrays, this means that scalars are replaced with arrays with\n the same scalar value repeated at every element of the array, and regular\n dimensions are created to increase low-dimensional data into\n high-dimensional data.\n\n For example,\n\n >>> ak.broadcast_arrays(5,\n ... [1, 2, 3, 4, 5])\n [<Array [5, 5, 5, 5, 5] type='5 * int64'>,\n <Array [1, 2, 3, 4, 5] type='5 * int64'>]\n\n and\n\n >>> ak.broadcast_arrays(np.array([1, 2, 3]),\n ... np.array([[0.1, 0.2, 0.3], [10, 20, 30]]))\n [<Array [[ 1, 2, 3], [ 1, 2, 3]] type='2 * 3 * int64'>,\n <Array [[0.1, 0.2, 0.3], [10, 20, 30]] type='2 * 3 * float64'>]\n\n Note that in the second example, when the `3 * int64` array is expanded\n to match the `2 * 3 * float64` array, it is the deepest dimension that\n is aligned. If we try to match a `2 * int64` with the `2 * 3 * float64`,\n\n >>> ak.broadcast_arrays(np.array([1, 2]),\n ... np.array([[0.1, 0.2, 0.3], [10, 20, 30]]))\n ValueError: while calling\n ak.broadcast_arrays(\n arrays = (array([1, 2]), array([[ 0.1, 0.2, 0.3],\n [10. , 20....\n depth_limit = None\n broadcast_parameters_rule = 'one_to_one'\n left_broadcast = True\n right_broadcast = True\n highlevel = True\n behavior = None\n )\n Error details: cannot broadcast RegularArray of size 2 with RegularArray of size 3\n\n NumPy has the same behavior: arrays with different numbers of dimensions\n are aligned to the right before expansion. One can control this by\n explicitly adding a new axis (reshape to add a dimension of length 1)\n where the expansion is supposed to take place because a dimension of\n length 1 can be expanded like a scalar.\n\n >>> ak.broadcast_arrays(np.array([1, 2])[:, np.newaxis],\n ... np.array([[0.1, 0.2, 0.3], [10, 20, 30]]))\n [<Array [[ 1, 1, 1], [ 2, 2, 2]] type='2 * 3 * int64'>,\n <Array [[0.1, 0.2, 0.3], [10, 20, 30]] type='2 * 3 * float64'>]\n\n Again, NumPy does the same thing (`np.newaxis` is equal to None, so this\n trick is often shown with None in the slice-tuple). Where the broadcasting\n happens can be controlled, but numbers of dimensions that don't match are\n implicitly aligned to the right (fitting innermost structure, not\n outermost).\n\n While that might be an arbitrary decision for rectilinear arrays, it is\n much more natural for implicit broadcasting to align left for tree-like\n structures. That is, the root of each data structure should agree and\n leaves may be duplicated to match. For example,\n\n >>> ak.broadcast_arrays([ 100, 200, 300],\n ... 
[[1.1, 2.2, 3.3], [], [4.4, 5.5]])\n [<Array [[100, 100, 100], [], [300, 300]] type='3 * var * int64'>,\n <Array [[1.1, 2.2, 3.3], [], [4.4, 5.5]] type='3 * var * float64'>]\n\n One typically wants single-item-per-element data to be duplicated to\n match multiple-items-per-element data. Operations on the broadcasted\n arrays like\n\n one_dimensional + nested_lists\n\n would then have the same effect as the procedural code\n\n for x, outer in zip(one_dimensional, nested_lists):\n output = []\n for inner in outer:\n output.append(x + inner)\n yield output\n\n where `x` has the same value for each `inner` in the inner loop.\n\n Awkward Array's broadcasting manages to have it both ways by applying the\n following rules:\n\n * If all dimensions are regular (i.e. #ak.types.RegularType), like NumPy,\n implicit broadcasting aligns to the right, like NumPy.\n * If any dimension is variable (i.e. #ak.types.ListType), which can\n never be true of NumPy, implicit broadcasting aligns to the left.\n * Explicit broadcasting with a length-1 regular dimension always\n broadcasts, like NumPy.\n\n Thus, it is important to be aware of the distinction between a dimension\n that is declared to be regular in the type specification and a dimension\n that is allowed to be variable (even if it happens to have the same length\n for all elements). This distinction is can be accessed through the\n #ak.Array.type, but it is lost when converting an array into JSON or\n Python objects.\n\n If arrays have the same depth but different lengths of nested\n lists, attempting to broadcast them together is a broadcasting error.\n\n >>> one = ak.Array([[[1, 2, 3], [], [4, 5], [6]], [], [[7, 8]]])\n >>> two = ak.Array([[[1.1, 2.2], [3.3], [4.4], [5.5]], [], [[6.6]]])\n >>> ak.broadcast_arrays(one, two)\n ValueError: while calling\n ak.broadcast_arrays(\n arrays = (<Array [[[1, 2, 3], [], [4, ...], [6]], ...] 
type='3 * var ...\n depth_limit = None\n broadcast_parameters_rule = 'one_to_one'\n left_broadcast = True\n right_broadcast = True\n highlevel = True\n behavior = None\n )\n Error details: cannot broadcast nested list\n\n For this, one can set the `depth_limit` to prevent the operation from\n attempting to broadcast what can't be broadcasted.\n\n >>> this, that = ak.broadcast_arrays(one, two, depth_limit=1)\n >>> this.show()\n [[[1, 2, 3], [], [4, 5], [6]],\n [],\n [[7, 8]]]\n >>> that.show()\n [[[1.1, 2.2], [3.3], [4.4], [5.5]],\n [],\n [[6.6]]]\n \"\"\"\n with ak._errors.OperationErrorContext(\n \"ak.broadcast_arrays\",\n {\n \"arrays\": arrays,\n \"depth_limit\": depth_limit,\n \"broadcast_parameters_rule\": broadcast_parameters_rule,\n \"left_broadcast\": left_broadcast,\n \"right_broadcast\": right_broadcast,\n \"highlevel\": highlevel,\n \"behavior\": behavior,\n },\n ):\n return _impl(\n arrays,\n depth_limit,\n broadcast_parameters_rule,\n left_broadcast,\n right_broadcast,\n highlevel,\n behavior,\n )\n\n\ndef _impl(\n arrays,\n depth_limit,\n broadcast_parameters_rule,\n left_broadcast,\n right_broadcast,\n highlevel,\n behavior,\n):\n # Need at least one array!\n if len(arrays) == 0:\n return []\n\n backend = backend_of(*arrays, default=cpu)\n\n inputs = []\n for x in arrays:\n y = ak.operations.to_layout(x, allow_record=True, allow_other=True)\n if not isinstance(y, (ak.contents.Content, ak.Record)):\n y = ak.contents.NumpyArray(backend.nplike.asarray([y]))\n inputs.append(y.to_backend(backend))\n\n def action(inputs, depth, **kwargs):\n if depth == depth_limit or all(x.is_numpy for x in inputs):\n return tuple(inputs)\n else:\n return None\n\n behavior = behavior_of(*arrays, behavior=behavior)\n out = ak._broadcasting.broadcast_and_apply(\n inputs,\n action,\n behavior,\n left_broadcast=left_broadcast,\n right_broadcast=right_broadcast,\n broadcast_parameters_rule=broadcast_parameters_rule,\n numpy_to_regular=True,\n )\n assert isinstance(out, tuple)\n return [wrap_layout(x, behavior, highlevel) for x in out]\n\n\n@ak._connect.numpy.implements(\"broadcast_arrays\")\ndef _nep_18_impl(*args, subok=unsupported):\n return broadcast_arrays(*args)\n", "path": "src/awkward/operations/ak_broadcast_arrays.py"}], "after_files": [{"content": "# BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE\n__all__ = (\"broadcast_arrays\",)\nimport awkward as ak\nfrom awkward._backends.dispatch import backend_of\nfrom awkward._backends.numpy import NumpyBackend\nfrom awkward._behavior import behavior_of\nfrom awkward._connect.numpy import unsupported\nfrom awkward._layout import wrap_layout\nfrom awkward._nplikes.numpylike import NumpyMetadata\n\nnp = NumpyMetadata.instance()\ncpu = NumpyBackend.instance()\n\n\ndef broadcast_arrays(\n *arrays,\n depth_limit=None,\n broadcast_parameters_rule=\"one_to_one\",\n left_broadcast=True,\n right_broadcast=True,\n highlevel=True,\n behavior=None,\n):\n \"\"\"\n Args:\n arrays: Array-like data (anything #ak.to_layout recognizes).\n depth_limit (None or int, default is None): If None, attempt to fully\n broadcast the `arrays` to all levels. If an int, limit the number\n of dimensions that get broadcasted. 
The minimum value is `1`,\n for no broadcasting.\n broadcast_parameters_rule (str): Rule for broadcasting parameters, one of:\n - `\"intersect\"`\n - `\"all_or_nothing\"`\n - `\"one_to_one\"`\n - `\"none\"`\n left_broadcast (bool): If True, follow rules for implicit\n left-broadcasting, as described below.\n right_broadcast (bool): If True, follow rules for implicit\n right-broadcasting, as described below.\n highlevel (bool, default is True): If True, return an #ak.Array;\n otherwise, return a low-level #ak.contents.Content subclass.\n behavior (None or dict): Custom #ak.behavior for the output array, if\n high-level.\n\n Like NumPy's\n [broadcast_arrays](https://docs.scipy.org/doc/numpy/reference/generated/numpy.broadcast_arrays.html)\n function, this function returns the input `arrays` with enough elements\n duplicated that they can be combined element-by-element.\n\n For NumPy arrays, this means that scalars are replaced with arrays with\n the same scalar value repeated at every element of the array, and regular\n dimensions are created to increase low-dimensional data into\n high-dimensional data.\n\n For example,\n\n >>> ak.broadcast_arrays(5,\n ... [1, 2, 3, 4, 5])\n [<Array [5, 5, 5, 5, 5] type='5 * int64'>,\n <Array [1, 2, 3, 4, 5] type='5 * int64'>]\n\n and\n\n >>> ak.broadcast_arrays(np.array([1, 2, 3]),\n ... np.array([[0.1, 0.2, 0.3], [10, 20, 30]]))\n [<Array [[ 1, 2, 3], [ 1, 2, 3]] type='2 * 3 * int64'>,\n <Array [[0.1, 0.2, 0.3], [10, 20, 30]] type='2 * 3 * float64'>]\n\n Note that in the second example, when the `3 * int64` array is expanded\n to match the `2 * 3 * float64` array, it is the deepest dimension that\n is aligned. If we try to match a `2 * int64` with the `2 * 3 * float64`,\n\n >>> ak.broadcast_arrays(np.array([1, 2]),\n ... np.array([[0.1, 0.2, 0.3], [10, 20, 30]]))\n ValueError: while calling\n ak.broadcast_arrays(\n arrays = (array([1, 2]), array([[ 0.1, 0.2, 0.3],\n [10. , 20....\n depth_limit = None\n broadcast_parameters_rule = 'one_to_one'\n left_broadcast = True\n right_broadcast = True\n highlevel = True\n behavior = None\n )\n Error details: cannot broadcast RegularArray of size 2 with RegularArray of size 3\n\n NumPy has the same behavior: arrays with different numbers of dimensions\n are aligned to the right before expansion. One can control this by\n explicitly adding a new axis (reshape to add a dimension of length 1)\n where the expansion is supposed to take place because a dimension of\n length 1 can be expanded like a scalar.\n\n >>> ak.broadcast_arrays(np.array([1, 2])[:, np.newaxis],\n ... np.array([[0.1, 0.2, 0.3], [10, 20, 30]]))\n [<Array [[ 1, 1, 1], [ 2, 2, 2]] type='2 * 3 * int64'>,\n <Array [[0.1, 0.2, 0.3], [10, 20, 30]] type='2 * 3 * float64'>]\n\n Again, NumPy does the same thing (`np.newaxis` is equal to None, so this\n trick is often shown with None in the slice-tuple). Where the broadcasting\n happens can be controlled, but numbers of dimensions that don't match are\n implicitly aligned to the right (fitting innermost structure, not\n outermost).\n\n While that might be an arbitrary decision for rectilinear arrays, it is\n much more natural for implicit broadcasting to align left for tree-like\n structures. That is, the root of each data structure should agree and\n leaves may be duplicated to match. For example,\n\n >>> ak.broadcast_arrays([ 100, 200, 300],\n ... 
[[1.1, 2.2, 3.3], [], [4.4, 5.5]])\n [<Array [[100, 100, 100], [], [300, 300]] type='3 * var * int64'>,\n <Array [[1.1, 2.2, 3.3], [], [4.4, 5.5]] type='3 * var * float64'>]\n\n One typically wants single-item-per-element data to be duplicated to\n match multiple-items-per-element data. Operations on the broadcasted\n arrays like\n\n one_dimensional + nested_lists\n\n would then have the same effect as the procedural code\n\n for x, outer in zip(one_dimensional, nested_lists):\n output = []\n for inner in outer:\n output.append(x + inner)\n yield output\n\n where `x` has the same value for each `inner` in the inner loop.\n\n Awkward Array's broadcasting manages to have it both ways by applying the\n following rules:\n\n * If all dimensions are regular (i.e. #ak.types.RegularType), like NumPy,\n implicit broadcasting aligns to the right, like NumPy.\n * If any dimension is variable (i.e. #ak.types.ListType), which can\n never be true of NumPy, implicit broadcasting aligns to the left.\n * Explicit broadcasting with a length-1 regular dimension always\n broadcasts, like NumPy.\n\n Thus, it is important to be aware of the distinction between a dimension\n that is declared to be regular in the type specification and a dimension\n that is allowed to be variable (even if it happens to have the same length\n for all elements). This distinction is can be accessed through the\n #ak.Array.type, but it is lost when converting an array into JSON or\n Python objects.\n\n If arrays have the same depth but different lengths of nested\n lists, attempting to broadcast them together is a broadcasting error.\n\n >>> one = ak.Array([[[1, 2, 3], [], [4, 5], [6]], [], [[7, 8]]])\n >>> two = ak.Array([[[1.1, 2.2], [3.3], [4.4], [5.5]], [], [[6.6]]])\n >>> ak.broadcast_arrays(one, two)\n ValueError: while calling\n ak.broadcast_arrays(\n arrays = (<Array [[[1, 2, 3], [], [4, ...], [6]], ...] 
type='3 * var ...\n depth_limit = None\n broadcast_parameters_rule = 'one_to_one'\n left_broadcast = True\n right_broadcast = True\n highlevel = True\n behavior = None\n )\n Error details: cannot broadcast nested list\n\n For this, one can set the `depth_limit` to prevent the operation from\n attempting to broadcast what can't be broadcasted.\n\n >>> this, that = ak.broadcast_arrays(one, two, depth_limit=1)\n >>> this.show()\n [[[1, 2, 3], [], [4, 5], [6]],\n [],\n [[7, 8]]]\n >>> that.show()\n [[[1.1, 2.2], [3.3], [4.4], [5.5]],\n [],\n [[6.6]]]\n \"\"\"\n with ak._errors.OperationErrorContext(\n \"ak.broadcast_arrays\",\n {\n \"arrays\": arrays,\n \"depth_limit\": depth_limit,\n \"broadcast_parameters_rule\": broadcast_parameters_rule,\n \"left_broadcast\": left_broadcast,\n \"right_broadcast\": right_broadcast,\n \"highlevel\": highlevel,\n \"behavior\": behavior,\n },\n ):\n return _impl(\n arrays,\n depth_limit,\n broadcast_parameters_rule,\n left_broadcast,\n right_broadcast,\n highlevel,\n behavior,\n )\n\n\ndef _impl(\n arrays,\n depth_limit,\n broadcast_parameters_rule,\n left_broadcast,\n right_broadcast,\n highlevel,\n behavior,\n):\n # Need at least one array!\n if len(arrays) == 0:\n return []\n\n backend = backend_of(*arrays, default=cpu)\n\n inputs = []\n for x in arrays:\n y = ak.operations.to_layout(x, allow_record=True, allow_other=True)\n if not isinstance(y, (ak.contents.Content, ak.Record)):\n y = ak.contents.NumpyArray(backend.nplike.asarray([y]))\n inputs.append(y.to_backend(backend))\n\n def action(inputs, depth, **kwargs):\n # The depth limit is the depth at which we must return, i.e.\n # the _first_ layout at that depth\n if depth == depth_limit:\n return tuple(inputs)\n # Walk through non-leaf nodes\n elif all(\n x.purelist_depth == 1 and not (x.is_option or x.is_indexed) for x in inputs\n ):\n return tuple(inputs)\n else:\n return None\n\n behavior = behavior_of(*arrays, behavior=behavior)\n out = ak._broadcasting.broadcast_and_apply(\n inputs,\n action,\n behavior,\n left_broadcast=left_broadcast,\n right_broadcast=right_broadcast,\n broadcast_parameters_rule=broadcast_parameters_rule,\n numpy_to_regular=True,\n )\n assert isinstance(out, tuple)\n return [wrap_layout(x, behavior, highlevel) for x in out]\n\n\n@ak._connect.numpy.implements(\"broadcast_arrays\")\ndef _nep_18_impl(*args, subok=unsupported):\n return broadcast_arrays(*args)\n", "path": "src/awkward/operations/ak_broadcast_arrays.py"}]} | 3,763 | 209 |
gh_patches_debug_3740 | rasdani/github-patches | git_diff | napari__napari-553 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Menu bar focus on Mac
## 🐛 Bug
We've now added a menubar, but you need to toggle focus in and out of napari before it becomes active on the Mac. This bug has been encountered in other Qt apps, but we still need to look into fixing it.
See here - https://github.com/robotology/yarp/issues/457
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `napari/_qt/qt_main_window.py`
Content:
```
1 """
2 Custom Qt widgets that serve as native objects that the public-facing elements
3 wrap.
4 """
5 # set vispy to use same backend as qtpy
6 from qtpy import API_NAME
7 from vispy import app
8
9 app.use_app(API_NAME)
10 del app
11
12 from qtpy.QtWidgets import (
13 QMainWindow,
14 QWidget,
15 QHBoxLayout,
16 QLabel,
17 QAction,
18 QShortcut,
19 )
20 from qtpy.QtGui import QKeySequence
21
22 from ..util.theme import template
23
24
25 class Window:
26 """Application window that contains the menu bar and viewer.
27
28 Parameters
29 ----------
30 qt_viewer : QtViewer
31 Contained viewer widget.
32
33 Attributes
34 ----------
35 qt_viewer : QtViewer
36 Contained viewer widget.
37 """
38
39 def __init__(self, qt_viewer, *, show=True):
40
41 self.qt_viewer = qt_viewer
42
43 self._qt_window = QMainWindow()
44 self._qt_window.setUnifiedTitleAndToolBarOnMac(True)
45 self._qt_center = QWidget()
46 self._qt_window.setCentralWidget(self._qt_center)
47 self._qt_window.setWindowTitle(self.qt_viewer.viewer.title)
48 self._qt_center.setLayout(QHBoxLayout())
49 self._status_bar = self._qt_window.statusBar()
50 self._qt_window.closeEvent = self.closeEvent
51 self.close = self._qt_window.close
52
53 self._add_menubar()
54
55 self._add_file_menu()
56 self._add_view_menu()
57 self._add_window_menu()
58
59 self._status_bar.showMessage('Ready')
60 self._help = QLabel('')
61 self._status_bar.addPermanentWidget(self._help)
62
63 self._qt_center.layout().addWidget(self.qt_viewer)
64 self._qt_center.layout().setContentsMargins(4, 0, 4, 0)
65
66 self._update_palette(qt_viewer.viewer.palette)
67
68 self.qt_viewer.viewer.events.status.connect(self._status_changed)
69 self.qt_viewer.viewer.events.help.connect(self._help_changed)
70 self.qt_viewer.viewer.events.title.connect(self._title_changed)
71 self.qt_viewer.viewer.events.palette.connect(
72 lambda event: self._update_palette(event.palette)
73 )
74
75 if show:
76 self.show()
77
78 def _add_menubar(self):
79 self.main_menu = self._qt_window.menuBar()
80 # Menubar shortcuts are only active when the menubar is visible.
81 # Therefore, we set a global shortcut not associated with the menubar
82 # to toggle visibility, *but*, in order to not shadow the menubar
83 # shortcut, we disable it, and only enable it when the menubar is
84 # hidden. See this stackoverflow link for details:
85 # https://stackoverflow.com/questions/50537642/how-to-keep-the-shortcuts-of-a-hidden-widget-in-pyqt5
86 self._main_menu_shortcut = QShortcut(
87 QKeySequence('Ctrl+M'), self._qt_window
88 )
89 self._main_menu_shortcut.activated.connect(
90 self._toggle_menubar_visible
91 )
92 self._main_menu_shortcut.setEnabled(False)
93
94 def _toggle_menubar_visible(self):
95 """Toggle visibility of app menubar.
96
97 This function also disables or enables a global keyboard shortcut to
98 show the menubar, since menubar shortcuts are only available while the
99 menubar is visible.
100 """
101 if self.main_menu.isVisible():
102 self.main_menu.setVisible(False)
103 self._main_menu_shortcut.setEnabled(True)
104 else:
105 self.main_menu.setVisible(True)
106 self._main_menu_shortcut.setEnabled(False)
107
108 def _add_file_menu(self):
109 open_images = QAction('Open', self._qt_window)
110 open_images.setShortcut('Ctrl+O')
111 open_images.setStatusTip('Open image file(s)')
112 open_images.triggered.connect(self.qt_viewer._open_images)
113 self.file_menu = self.main_menu.addMenu('&File')
114 self.file_menu.addAction(open_images)
115
116 def _add_view_menu(self):
117 toggle_visible = QAction('Toggle menubar visibility', self._qt_window)
118 toggle_visible.setShortcut('Ctrl+M')
119 toggle_visible.setStatusTip('Hide Menubar')
120 toggle_visible.triggered.connect(self._toggle_menubar_visible)
121 self.view_menu = self.main_menu.addMenu('&View')
122 self.view_menu.addAction(toggle_visible)
123
124 def _add_window_menu(self):
125 exit_action = QAction("Close window", self._qt_window)
126 exit_action.setShortcut("Ctrl+W")
127 exit_action.setStatusTip('Close napari window')
128 exit_action.triggered.connect(self._qt_window.close)
129 self.window_menu = self.main_menu.addMenu('&Window')
130 self.window_menu.addAction(exit_action)
131
132 def resize(self, width, height):
133 """Resize the window.
134
135 Parameters
136 ----------
137 width : int
138 Width in logical pixels.
139 height : int
140 Height in logical pixels.
141 """
142 self._qt_window.resize(width, height)
143
144 def show(self):
145 """Resize, show, and bring forward the window.
146 """
147 self._qt_window.resize(self._qt_window.layout().sizeHint())
148 self._qt_window.show()
149 self._qt_window.raise_()
150
151 def _update_palette(self, palette):
152 # set window styles which don't use the primary stylesheet
153 # FIXME: this is a problem with the stylesheet not using properties
154 self._status_bar.setStyleSheet(
155 template(
156 'QStatusBar { background: {{ background }}; '
157 'color: {{ text }}; }',
158 **palette,
159 )
160 )
161 self._qt_center.setStyleSheet(
162 template('QWidget { background: {{ background }}; }', **palette)
163 )
164
165 def _status_changed(self, event):
166 """Update status bar.
167 """
168 self._status_bar.showMessage(event.text)
169
170 def _title_changed(self, event):
171 """Update window title.
172 """
173 self._qt_window.setWindowTitle(event.text)
174
175 def _help_changed(self, event):
176 """Update help message on status bar.
177 """
178 self._help.setText(event.text)
179
180 def closeEvent(self, event):
181 # Forward close event to the console to trigger proper shutdown
182 self.qt_viewer.console.shutdown()
183 event.accept()
184
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/napari/_qt/qt_main_window.py b/napari/_qt/qt_main_window.py
--- a/napari/_qt/qt_main_window.py
+++ b/napari/_qt/qt_main_window.py
@@ -146,7 +146,6 @@
"""
self._qt_window.resize(self._qt_window.layout().sizeHint())
self._qt_window.show()
- self._qt_window.raise_()
def _update_palette(self, palette):
# set window styles which don't use the primary stylesheet
| {"golden_diff": "diff --git a/napari/_qt/qt_main_window.py b/napari/_qt/qt_main_window.py\n--- a/napari/_qt/qt_main_window.py\n+++ b/napari/_qt/qt_main_window.py\n@@ -146,7 +146,6 @@\n \"\"\"\n self._qt_window.resize(self._qt_window.layout().sizeHint())\n self._qt_window.show()\n- self._qt_window.raise_()\n \n def _update_palette(self, palette):\n # set window styles which don't use the primary stylesheet\n", "issue": "Menu bar focus on Mac\n## \ud83d\udc1b Bug\r\nWe've now added a menubar, but you need to toggle focus in and out of napari before it becomes active on the mac. This bug has been encountered in other Qt apps, but we still need to look into fixing.\r\n\r\nSee here - https://github.com/robotology/yarp/issues/457\n", "before_files": [{"content": "\"\"\"\nCustom Qt widgets that serve as native objects that the public-facing elements\nwrap.\n\"\"\"\n# set vispy to use same backend as qtpy\nfrom qtpy import API_NAME\nfrom vispy import app\n\napp.use_app(API_NAME)\ndel app\n\nfrom qtpy.QtWidgets import (\n QMainWindow,\n QWidget,\n QHBoxLayout,\n QLabel,\n QAction,\n QShortcut,\n)\nfrom qtpy.QtGui import QKeySequence\n\nfrom ..util.theme import template\n\n\nclass Window:\n \"\"\"Application window that contains the menu bar and viewer.\n\n Parameters\n ----------\n qt_viewer : QtViewer\n Contained viewer widget.\n\n Attributes\n ----------\n qt_viewer : QtViewer\n Contained viewer widget.\n \"\"\"\n\n def __init__(self, qt_viewer, *, show=True):\n\n self.qt_viewer = qt_viewer\n\n self._qt_window = QMainWindow()\n self._qt_window.setUnifiedTitleAndToolBarOnMac(True)\n self._qt_center = QWidget()\n self._qt_window.setCentralWidget(self._qt_center)\n self._qt_window.setWindowTitle(self.qt_viewer.viewer.title)\n self._qt_center.setLayout(QHBoxLayout())\n self._status_bar = self._qt_window.statusBar()\n self._qt_window.closeEvent = self.closeEvent\n self.close = self._qt_window.close\n\n self._add_menubar()\n\n self._add_file_menu()\n self._add_view_menu()\n self._add_window_menu()\n\n self._status_bar.showMessage('Ready')\n self._help = QLabel('')\n self._status_bar.addPermanentWidget(self._help)\n\n self._qt_center.layout().addWidget(self.qt_viewer)\n self._qt_center.layout().setContentsMargins(4, 0, 4, 0)\n\n self._update_palette(qt_viewer.viewer.palette)\n\n self.qt_viewer.viewer.events.status.connect(self._status_changed)\n self.qt_viewer.viewer.events.help.connect(self._help_changed)\n self.qt_viewer.viewer.events.title.connect(self._title_changed)\n self.qt_viewer.viewer.events.palette.connect(\n lambda event: self._update_palette(event.palette)\n )\n\n if show:\n self.show()\n\n def _add_menubar(self):\n self.main_menu = self._qt_window.menuBar()\n # Menubar shortcuts are only active when the menubar is visible.\n # Therefore, we set a global shortcut not associated with the menubar\n # to toggle visibility, *but*, in order to not shadow the menubar\n # shortcut, we disable it, and only enable it when the menubar is\n # hidden. 
See this stackoverflow link for details:\n # https://stackoverflow.com/questions/50537642/how-to-keep-the-shortcuts-of-a-hidden-widget-in-pyqt5\n self._main_menu_shortcut = QShortcut(\n QKeySequence('Ctrl+M'), self._qt_window\n )\n self._main_menu_shortcut.activated.connect(\n self._toggle_menubar_visible\n )\n self._main_menu_shortcut.setEnabled(False)\n\n def _toggle_menubar_visible(self):\n \"\"\"Toggle visibility of app menubar.\n\n This function also disables or enables a global keyboard shortcut to\n show the menubar, since menubar shortcuts are only available while the\n menubar is visible.\n \"\"\"\n if self.main_menu.isVisible():\n self.main_menu.setVisible(False)\n self._main_menu_shortcut.setEnabled(True)\n else:\n self.main_menu.setVisible(True)\n self._main_menu_shortcut.setEnabled(False)\n\n def _add_file_menu(self):\n open_images = QAction('Open', self._qt_window)\n open_images.setShortcut('Ctrl+O')\n open_images.setStatusTip('Open image file(s)')\n open_images.triggered.connect(self.qt_viewer._open_images)\n self.file_menu = self.main_menu.addMenu('&File')\n self.file_menu.addAction(open_images)\n\n def _add_view_menu(self):\n toggle_visible = QAction('Toggle menubar visibility', self._qt_window)\n toggle_visible.setShortcut('Ctrl+M')\n toggle_visible.setStatusTip('Hide Menubar')\n toggle_visible.triggered.connect(self._toggle_menubar_visible)\n self.view_menu = self.main_menu.addMenu('&View')\n self.view_menu.addAction(toggle_visible)\n\n def _add_window_menu(self):\n exit_action = QAction(\"Close window\", self._qt_window)\n exit_action.setShortcut(\"Ctrl+W\")\n exit_action.setStatusTip('Close napari window')\n exit_action.triggered.connect(self._qt_window.close)\n self.window_menu = self.main_menu.addMenu('&Window')\n self.window_menu.addAction(exit_action)\n\n def resize(self, width, height):\n \"\"\"Resize the window.\n\n Parameters\n ----------\n width : int\n Width in logical pixels.\n height : int\n Height in logical pixels.\n \"\"\"\n self._qt_window.resize(width, height)\n\n def show(self):\n \"\"\"Resize, show, and bring forward the window.\n \"\"\"\n self._qt_window.resize(self._qt_window.layout().sizeHint())\n self._qt_window.show()\n self._qt_window.raise_()\n\n def _update_palette(self, palette):\n # set window styles which don't use the primary stylesheet\n # FIXME: this is a problem with the stylesheet not using properties\n self._status_bar.setStyleSheet(\n template(\n 'QStatusBar { background: {{ background }}; '\n 'color: {{ text }}; }',\n **palette,\n )\n )\n self._qt_center.setStyleSheet(\n template('QWidget { background: {{ background }}; }', **palette)\n )\n\n def _status_changed(self, event):\n \"\"\"Update status bar.\n \"\"\"\n self._status_bar.showMessage(event.text)\n\n def _title_changed(self, event):\n \"\"\"Update window title.\n \"\"\"\n self._qt_window.setWindowTitle(event.text)\n\n def _help_changed(self, event):\n \"\"\"Update help message on status bar.\n \"\"\"\n self._help.setText(event.text)\n\n def closeEvent(self, event):\n # Forward close event to the console to trigger proper shutdown\n self.qt_viewer.console.shutdown()\n event.accept()\n", "path": "napari/_qt/qt_main_window.py"}], "after_files": [{"content": "\"\"\"\nCustom Qt widgets that serve as native objects that the public-facing elements\nwrap.\n\"\"\"\n# set vispy to use same backend as qtpy\nfrom qtpy import API_NAME\nfrom vispy import app\n\napp.use_app(API_NAME)\ndel app\n\nfrom qtpy.QtWidgets import (\n QMainWindow,\n QWidget,\n QHBoxLayout,\n QLabel,\n QAction,\n 
QShortcut,\n)\nfrom qtpy.QtGui import QKeySequence\n\nfrom ..util.theme import template\n\n\nclass Window:\n \"\"\"Application window that contains the menu bar and viewer.\n\n Parameters\n ----------\n qt_viewer : QtViewer\n Contained viewer widget.\n\n Attributes\n ----------\n qt_viewer : QtViewer\n Contained viewer widget.\n \"\"\"\n\n def __init__(self, qt_viewer, *, show=True):\n\n self.qt_viewer = qt_viewer\n\n self._qt_window = QMainWindow()\n self._qt_window.setUnifiedTitleAndToolBarOnMac(True)\n self._qt_center = QWidget()\n self._qt_window.setCentralWidget(self._qt_center)\n self._qt_window.setWindowTitle(self.qt_viewer.viewer.title)\n self._qt_center.setLayout(QHBoxLayout())\n self._status_bar = self._qt_window.statusBar()\n self._qt_window.closeEvent = self.closeEvent\n self.close = self._qt_window.close\n\n self._add_menubar()\n\n self._add_file_menu()\n self._add_view_menu()\n self._add_window_menu()\n\n self._status_bar.showMessage('Ready')\n self._help = QLabel('')\n self._status_bar.addPermanentWidget(self._help)\n\n self._qt_center.layout().addWidget(self.qt_viewer)\n self._qt_center.layout().setContentsMargins(4, 0, 4, 0)\n\n self._update_palette(qt_viewer.viewer.palette)\n\n self.qt_viewer.viewer.events.status.connect(self._status_changed)\n self.qt_viewer.viewer.events.help.connect(self._help_changed)\n self.qt_viewer.viewer.events.title.connect(self._title_changed)\n self.qt_viewer.viewer.events.palette.connect(\n lambda event: self._update_palette(event.palette)\n )\n\n if show:\n self.show()\n\n def _add_menubar(self):\n self.main_menu = self._qt_window.menuBar()\n # Menubar shortcuts are only active when the menubar is visible.\n # Therefore, we set a global shortcut not associated with the menubar\n # to toggle visibility, *but*, in order to not shadow the menubar\n # shortcut, we disable it, and only enable it when the menubar is\n # hidden. 
See this stackoverflow link for details:\n # https://stackoverflow.com/questions/50537642/how-to-keep-the-shortcuts-of-a-hidden-widget-in-pyqt5\n self._main_menu_shortcut = QShortcut(\n QKeySequence('Ctrl+M'), self._qt_window\n )\n self._main_menu_shortcut.activated.connect(\n self._toggle_menubar_visible\n )\n self._main_menu_shortcut.setEnabled(False)\n\n def _toggle_menubar_visible(self):\n \"\"\"Toggle visibility of app menubar.\n\n This function also disables or enables a global keyboard shortcut to\n show the menubar, since menubar shortcuts are only available while the\n menubar is visible.\n \"\"\"\n if self.main_menu.isVisible():\n self.main_menu.setVisible(False)\n self._main_menu_shortcut.setEnabled(True)\n else:\n self.main_menu.setVisible(True)\n self._main_menu_shortcut.setEnabled(False)\n\n def _add_file_menu(self):\n open_images = QAction('Open', self._qt_window)\n open_images.setShortcut('Ctrl+O')\n open_images.setStatusTip('Open image file(s)')\n open_images.triggered.connect(self.qt_viewer._open_images)\n self.file_menu = self.main_menu.addMenu('&File')\n self.file_menu.addAction(open_images)\n\n def _add_view_menu(self):\n toggle_visible = QAction('Toggle menubar visibility', self._qt_window)\n toggle_visible.setShortcut('Ctrl+M')\n toggle_visible.setStatusTip('Hide Menubar')\n toggle_visible.triggered.connect(self._toggle_menubar_visible)\n self.view_menu = self.main_menu.addMenu('&View')\n self.view_menu.addAction(toggle_visible)\n\n def _add_window_menu(self):\n exit_action = QAction(\"Close window\", self._qt_window)\n exit_action.setShortcut(\"Ctrl+W\")\n exit_action.setStatusTip('Close napari window')\n exit_action.triggered.connect(self._qt_window.close)\n self.window_menu = self.main_menu.addMenu('&Window')\n self.window_menu.addAction(exit_action)\n\n def resize(self, width, height):\n \"\"\"Resize the window.\n\n Parameters\n ----------\n width : int\n Width in logical pixels.\n height : int\n Height in logical pixels.\n \"\"\"\n self._qt_window.resize(width, height)\n\n def show(self):\n \"\"\"Resize, show, and bring forward the window.\n \"\"\"\n self._qt_window.resize(self._qt_window.layout().sizeHint())\n self._qt_window.show()\n\n def _update_palette(self, palette):\n # set window styles which don't use the primary stylesheet\n # FIXME: this is a problem with the stylesheet not using properties\n self._status_bar.setStyleSheet(\n template(\n 'QStatusBar { background: {{ background }}; '\n 'color: {{ text }}; }',\n **palette,\n )\n )\n self._qt_center.setStyleSheet(\n template('QWidget { background: {{ background }}; }', **palette)\n )\n\n def _status_changed(self, event):\n \"\"\"Update status bar.\n \"\"\"\n self._status_bar.showMessage(event.text)\n\n def _title_changed(self, event):\n \"\"\"Update window title.\n \"\"\"\n self._qt_window.setWindowTitle(event.text)\n\n def _help_changed(self, event):\n \"\"\"Update help message on status bar.\n \"\"\"\n self._help.setText(event.text)\n\n def closeEvent(self, event):\n # Forward close event to the console to trigger proper shutdown\n self.qt_viewer.console.shutdown()\n event.accept()\n", "path": "napari/_qt/qt_main_window.py"}]} | 2,098 | 117 |
gh_patches_debug_8815 | rasdani/github-patches | git_diff | CTFd__CTFd-2458 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Upload to S3 Failing
- CTFd Version/Commit: 3.6.1
- Operating System: Linux (Docker container)
- Web Browser and Version: Chrome
**What happened?**
Upgrading CTFd resulted in S3 file uploads beginning to return 400 (bad request) status codes. I see one of the fixes for 3.6.1 was for S3, so perhaps a new bug was introduced.
Here are some additional facts which may be helpful:
 - The files are successfully making their way into S3, despite the error
- The timezone I have configured for my server is CST
I can also confirm that my deployment had working file upload before upgrading to version 3.6.1 (file upload was working for 3.6.0).
**What did you expect to happen?**
File upload to continue working.
**How to reproduce your issue**
Deploy CTFd free version using version 3.6.1 with S3 file upload configured.
**Any associated stack traces or error logs**
The browser request returns error (400 status code):
```
{
"success": false,
"errors": {
"location": [
"I/O operation on closed file."
]
}
}
```
The backend error is:
```
[ERROR] Error handling request
Traceback (most recent call last):
File "/opt/venv/lib/python3.9/site-packages/gunicorn/workers/base_async.py", line 113, in handle_request
resp.write_file(respiter)
File "/opt/venv/lib/python3.9/site-packages/gunicorn/http/wsgi.py", line 385, in write_file
if not self.sendfile(respiter):
File "/opt/venv/lib/python3.9/site-packages/gunicorn/http/wsgi.py", line 375, in sendfile
self.sock.sendfile(respiter.filelike, count=nbytes)
File "/opt/venv/lib/python3.9/site-packages/gevent/_socket3.py", line 486, in sendfile
return self._sendfile_use_send(file, offset, count)
File "/opt/venv/lib/python3.9/site-packages/gevent/_socket3.py", line 416, in _sendfile_use_send
self._check_sendfile_params(file, offset, count)
File "/opt/venv/lib/python3.9/site-packages/gevent/_socket3.py", line 461, in _check_sendfile_params
raise ValueError(
ValueError: count must be a positive integer (got 0)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/utils/uploads/__init__.py`
Content:
```
1 import hashlib
2 import shutil
3 from pathlib import Path
4
5 from CTFd.models import ChallengeFiles, Files, PageFiles, db
6 from CTFd.utils import get_app_config
7 from CTFd.utils.uploads.uploaders import FilesystemUploader, S3Uploader
8
9 UPLOADERS = {"filesystem": FilesystemUploader, "s3": S3Uploader}
10
11
12 def get_uploader():
13 return UPLOADERS.get(get_app_config("UPLOAD_PROVIDER") or "filesystem")()
14
15
16 def upload_file(*args, **kwargs):
17 file_obj = kwargs.get("file")
18 challenge_id = kwargs.get("challenge_id") or kwargs.get("challenge")
19 page_id = kwargs.get("page_id") or kwargs.get("page")
20 file_type = kwargs.get("type", "standard")
21 location = kwargs.get("location")
22
23 # Validate location and default filename to uploaded file's name
24 parent = None
25 filename = file_obj.filename
26 if location:
27 path = Path(location)
28 if len(path.parts) != 2:
29 raise ValueError(
30 "Location must contain two parts, a directory and a filename"
31 )
32 # Allow location to override the directory and filename
33 parent = path.parts[0]
34 filename = path.parts[1]
35 location = parent + "/" + filename
36
37 model_args = {"type": file_type, "location": location}
38
39 model = Files
40 if file_type == "challenge":
41 model = ChallengeFiles
42 model_args["challenge_id"] = challenge_id
43 if file_type == "page":
44 model = PageFiles
45 model_args["page_id"] = page_id
46
47 uploader = get_uploader()
48 location = uploader.upload(file_obj=file_obj, filename=filename, path=parent)
49
50 sha1sum = hash_file(fp=file_obj)
51
52 model_args["location"] = location
53 model_args["sha1sum"] = sha1sum
54
55 existing_file = Files.query.filter_by(location=location).first()
56 if existing_file:
57 for k, v in model_args.items():
58 setattr(existing_file, k, v)
59 db.session.commit()
60 file_row = existing_file
61 else:
62 file_row = model(**model_args)
63 db.session.add(file_row)
64 db.session.commit()
65 return file_row
66
67
68 def hash_file(fp, algo="sha1"):
69 fp.seek(0)
70 if algo == "sha1":
71 h = hashlib.sha1() # nosec
72 # https://stackoverflow.com/a/64730457
73 while chunk := fp.read(1024):
74 h.update(chunk)
75 fp.seek(0)
76 return h.hexdigest()
77 else:
78 raise NotImplementedError
79
80
81 def delete_file(file_id):
82 f = Files.query.filter_by(id=file_id).first_or_404()
83
84 uploader = get_uploader()
85 uploader.delete(filename=f.location)
86
87 db.session.delete(f)
88 db.session.commit()
89 return True
90
91
92 def rmdir(directory):
93 shutil.rmtree(directory, ignore_errors=True)
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/CTFd/utils/uploads/__init__.py b/CTFd/utils/uploads/__init__.py
--- a/CTFd/utils/uploads/__init__.py
+++ b/CTFd/utils/uploads/__init__.py
@@ -44,11 +44,12 @@
model = PageFiles
model_args["page_id"] = page_id
+ # Hash is calculated before upload since S3 file upload closes file object
+ sha1sum = hash_file(fp=file_obj)
+
uploader = get_uploader()
location = uploader.upload(file_obj=file_obj, filename=filename, path=parent)
- sha1sum = hash_file(fp=file_obj)
-
model_args["location"] = location
model_args["sha1sum"] = sha1sum
| {"golden_diff": "diff --git a/CTFd/utils/uploads/__init__.py b/CTFd/utils/uploads/__init__.py\n--- a/CTFd/utils/uploads/__init__.py\n+++ b/CTFd/utils/uploads/__init__.py\n@@ -44,11 +44,12 @@\n model = PageFiles\n model_args[\"page_id\"] = page_id\n \n+ # Hash is calculated before upload since S3 file upload closes file object\n+ sha1sum = hash_file(fp=file_obj)\n+\n uploader = get_uploader()\n location = uploader.upload(file_obj=file_obj, filename=filename, path=parent)\n \n- sha1sum = hash_file(fp=file_obj)\n-\n model_args[\"location\"] = location\n model_args[\"sha1sum\"] = sha1sum\n", "issue": "Upload to S3 Failing\n- CTFd Version/Commit: 3.6.1\r\n- Operating System: Linux (Docker container)\r\n- Web Browser and Version: Chrome\r\n\r\n**What happened?**\r\n\r\nUpgrading CTFd resulting in S3 file uploads beginning to return 400 (bad request) status codes. I see one of the fixes for 3.6.1 was for S3, so perhaps a new bug was introduced.\r\n\r\nHere are some additional facts which may be helpful:\r\n\r\n - The files are successfully making there way into S3, despite the error\r\n - The timezone I have configured for my server is CST\r\n\r\nI can also confirm that my deployment had working file upload before upgrade to version 3.6.1 (file upload was working for 3.6.0).\r\n\r\n**What did you expect to happen?**\r\n\r\nFile upload to continue working.\r\n\r\n**How to reproduce your issue**\r\n\r\nDeploy CTFd free version using version 3.6.1 with S3 file upload configured.\r\n\r\n**Any associated stack traces or error logs**\r\n\r\nThe browser request returns error (400 status code):\r\n\r\n```\r\n{\r\n \"success\": false,\r\n \"errors\": {\r\n \"location\": [\r\n \"I/O operation on closed file.\"\r\n ]\r\n }\r\n}\r\n```\r\n\r\nThe backend error is:\r\n\r\n```\r\n[ERROR] Error handling request\r\nTraceback (most recent call last):\r\nFile \"/opt/venv/lib/python3.9/site-packages/gunicorn/workers/base_async.py\", line 113, in handle_request\r\nresp.write_file(respiter)\r\nFile \"/opt/venv/lib/python3.9/site-packages/gunicorn/http/wsgi.py\", line 385, in write_file\r\nif not self.sendfile(respiter):\r\nFile \"/opt/venv/lib/python3.9/site-packages/gunicorn/http/wsgi.py\", line 375, in sendfile\r\nself.sock.sendfile(respiter.filelike, count=nbytes)\r\nFile \"/opt/venv/lib/python3.9/site-packages/gevent/_socket3.py\", line 486, in sendfile\r\nreturn self._sendfile_use_send(file, offset, count)\r\nFile \"/opt/venv/lib/python3.9/site-packages/gevent/_socket3.py\", line 416, in _sendfile_use_send\r\nself._check_sendfile_params(file, offset, count)\r\nFile \"/opt/venv/lib/python3.9/site-packages/gevent/_socket3.py\", line 461, in _check_sendfile_params\r\nraise ValueError(\r\nValueError: count must be a positive integer (got 0)\r\n```\n", "before_files": [{"content": "import hashlib\nimport shutil\nfrom pathlib import Path\n\nfrom CTFd.models import ChallengeFiles, Files, PageFiles, db\nfrom CTFd.utils import get_app_config\nfrom CTFd.utils.uploads.uploaders import FilesystemUploader, S3Uploader\n\nUPLOADERS = {\"filesystem\": FilesystemUploader, \"s3\": S3Uploader}\n\n\ndef get_uploader():\n return UPLOADERS.get(get_app_config(\"UPLOAD_PROVIDER\") or \"filesystem\")()\n\n\ndef upload_file(*args, **kwargs):\n file_obj = kwargs.get(\"file\")\n challenge_id = kwargs.get(\"challenge_id\") or kwargs.get(\"challenge\")\n page_id = kwargs.get(\"page_id\") or kwargs.get(\"page\")\n file_type = kwargs.get(\"type\", \"standard\")\n location = kwargs.get(\"location\")\n\n # Validate location and default 
filename to uploaded file's name\n parent = None\n filename = file_obj.filename\n if location:\n path = Path(location)\n if len(path.parts) != 2:\n raise ValueError(\n \"Location must contain two parts, a directory and a filename\"\n )\n # Allow location to override the directory and filename\n parent = path.parts[0]\n filename = path.parts[1]\n location = parent + \"/\" + filename\n\n model_args = {\"type\": file_type, \"location\": location}\n\n model = Files\n if file_type == \"challenge\":\n model = ChallengeFiles\n model_args[\"challenge_id\"] = challenge_id\n if file_type == \"page\":\n model = PageFiles\n model_args[\"page_id\"] = page_id\n\n uploader = get_uploader()\n location = uploader.upload(file_obj=file_obj, filename=filename, path=parent)\n\n sha1sum = hash_file(fp=file_obj)\n\n model_args[\"location\"] = location\n model_args[\"sha1sum\"] = sha1sum\n\n existing_file = Files.query.filter_by(location=location).first()\n if existing_file:\n for k, v in model_args.items():\n setattr(existing_file, k, v)\n db.session.commit()\n file_row = existing_file\n else:\n file_row = model(**model_args)\n db.session.add(file_row)\n db.session.commit()\n return file_row\n\n\ndef hash_file(fp, algo=\"sha1\"):\n fp.seek(0)\n if algo == \"sha1\":\n h = hashlib.sha1() # nosec\n # https://stackoverflow.com/a/64730457\n while chunk := fp.read(1024):\n h.update(chunk)\n fp.seek(0)\n return h.hexdigest()\n else:\n raise NotImplementedError\n\n\ndef delete_file(file_id):\n f = Files.query.filter_by(id=file_id).first_or_404()\n\n uploader = get_uploader()\n uploader.delete(filename=f.location)\n\n db.session.delete(f)\n db.session.commit()\n return True\n\n\ndef rmdir(directory):\n shutil.rmtree(directory, ignore_errors=True)\n", "path": "CTFd/utils/uploads/__init__.py"}], "after_files": [{"content": "import hashlib\nimport shutil\nfrom pathlib import Path\n\nfrom CTFd.models import ChallengeFiles, Files, PageFiles, db\nfrom CTFd.utils import get_app_config\nfrom CTFd.utils.uploads.uploaders import FilesystemUploader, S3Uploader\n\nUPLOADERS = {\"filesystem\": FilesystemUploader, \"s3\": S3Uploader}\n\n\ndef get_uploader():\n return UPLOADERS.get(get_app_config(\"UPLOAD_PROVIDER\") or \"filesystem\")()\n\n\ndef upload_file(*args, **kwargs):\n file_obj = kwargs.get(\"file\")\n challenge_id = kwargs.get(\"challenge_id\") or kwargs.get(\"challenge\")\n page_id = kwargs.get(\"page_id\") or kwargs.get(\"page\")\n file_type = kwargs.get(\"type\", \"standard\")\n location = kwargs.get(\"location\")\n\n # Validate location and default filename to uploaded file's name\n parent = None\n filename = file_obj.filename\n if location:\n path = Path(location)\n if len(path.parts) != 2:\n raise ValueError(\n \"Location must contain two parts, a directory and a filename\"\n )\n # Allow location to override the directory and filename\n parent = path.parts[0]\n filename = path.parts[1]\n location = parent + \"/\" + filename\n\n model_args = {\"type\": file_type, \"location\": location}\n\n model = Files\n if file_type == \"challenge\":\n model = ChallengeFiles\n model_args[\"challenge_id\"] = challenge_id\n if file_type == \"page\":\n model = PageFiles\n model_args[\"page_id\"] = page_id\n\n # Hash is calculated before upload since S3 file upload closes file object\n sha1sum = hash_file(fp=file_obj)\n\n uploader = get_uploader()\n location = uploader.upload(file_obj=file_obj, filename=filename, path=parent)\n\n model_args[\"location\"] = location\n model_args[\"sha1sum\"] = sha1sum\n\n existing_file = 
Files.query.filter_by(location=location).first()\n if existing_file:\n for k, v in model_args.items():\n setattr(existing_file, k, v)\n db.session.commit()\n file_row = existing_file\n else:\n file_row = model(**model_args)\n db.session.add(file_row)\n db.session.commit()\n return file_row\n\n\ndef hash_file(fp, algo=\"sha1\"):\n fp.seek(0)\n if algo == \"sha1\":\n h = hashlib.sha1() # nosec\n # https://stackoverflow.com/a/64730457\n while chunk := fp.read(1024):\n h.update(chunk)\n fp.seek(0)\n return h.hexdigest()\n else:\n raise NotImplementedError\n\n\ndef delete_file(file_id):\n f = Files.query.filter_by(id=file_id).first_or_404()\n\n uploader = get_uploader()\n uploader.delete(filename=f.location)\n\n db.session.delete(f)\n db.session.commit()\n return True\n\n\ndef rmdir(directory):\n shutil.rmtree(directory, ignore_errors=True)\n", "path": "CTFd/utils/uploads/__init__.py"}]} | 1,645 | 170 |
gh_patches_debug_29333 | rasdani/github-patches | git_diff | pex-tool__pex-322 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove pkg_resources.build_zipmanifest monkeypatching
This may involve increasing the minimum setuptools version. Another alternative is vendoring setuptools.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = '1.1.15'
5
6 SETUPTOOLS_REQUIREMENT = 'setuptools>=2.2,<20.11'
7 WHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.30.0'
8
```
Path: `pex/pex_bootstrapper.py`
Content:
```
1 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 import contextlib
5 import os
6 import sys
7 import zipfile
8
9 __all__ = ('bootstrap_pex',)
10
11
12 def pex_info_name(entry_point):
13 """Return the PEX-INFO for an entry_point"""
14 return os.path.join(entry_point, 'PEX-INFO')
15
16
17 def is_compressed(entry_point):
18 return os.path.exists(entry_point) and not os.path.exists(pex_info_name(entry_point))
19
20
21 def read_pexinfo_from_directory(entry_point):
22 with open(pex_info_name(entry_point), 'rb') as fp:
23 return fp.read()
24
25
26 def read_pexinfo_from_zip(entry_point):
27 with contextlib.closing(zipfile.ZipFile(entry_point)) as zf:
28 return zf.read('PEX-INFO')
29
30
31 def read_pex_info_content(entry_point):
32 """Return the raw content of a PEX-INFO."""
33 if is_compressed(entry_point):
34 return read_pexinfo_from_zip(entry_point)
35 else:
36 return read_pexinfo_from_directory(entry_point)
37
38
39 def get_pex_info(entry_point):
40 """Return the PexInfo object for an entry point."""
41 from . import pex_info
42
43 pex_info_content = read_pex_info_content(entry_point)
44 if pex_info_content:
45 return pex_info.PexInfo.from_json(pex_info_content)
46 raise ValueError('Invalid entry_point: %s' % entry_point)
47
48
49 # TODO(wickman) Remove once resolved (#91):
50 # https://bitbucket.org/pypa/setuptools/issue/154/build_zipmanifest-results-should-be
51 def monkeypatch_build_zipmanifest():
52 import pkg_resources
53 if not hasattr(pkg_resources, 'build_zipmanifest'):
54 return
55 old_build_zipmanifest = pkg_resources.build_zipmanifest
56 def memoized_build_zipmanifest(archive, memo={}):
57 if archive not in memo:
58 memo[archive] = old_build_zipmanifest(archive)
59 return memo[archive]
60 pkg_resources.build_zipmanifest = memoized_build_zipmanifest
61
62
63 def find_in_path(target_interpreter):
64 if os.path.exists(target_interpreter):
65 return target_interpreter
66
67 for directory in os.getenv('PATH', '').split(os.pathsep):
68 try_path = os.path.join(directory, target_interpreter)
69 if os.path.exists(try_path):
70 return try_path
71
72
73 def maybe_reexec_pex():
74 from .variables import ENV
75 if not ENV.PEX_PYTHON:
76 return
77
78 from .common import die
79 from .tracer import TRACER
80
81 target_python = ENV.PEX_PYTHON
82 target = find_in_path(target_python)
83 if not target:
84 die('Failed to find interpreter specified by PEX_PYTHON: %s' % target)
85 if os.path.exists(target) and os.path.realpath(target) != os.path.realpath(sys.executable):
86 TRACER.log('Detected PEX_PYTHON, re-exec to %s' % target)
87 ENV.delete('PEX_PYTHON')
88 os.execve(target, [target_python] + sys.argv, ENV.copy())
89
90
91 def bootstrap_pex(entry_point):
92 from .finders import register_finders
93 monkeypatch_build_zipmanifest()
94 register_finders()
95 maybe_reexec_pex()
96
97 from . import pex
98 pex.PEX(entry_point).execute()
99
100
101 def bootstrap_pex_env(entry_point):
102 """Bootstrap the current runtime environment using a given pex."""
103 from .environment import PEXEnvironment
104 from .finders import register_finders
105 from .pex_info import PexInfo
106
107 monkeypatch_build_zipmanifest()
108 register_finders()
109
110 PEXEnvironment(entry_point, PexInfo.from_pex(entry_point)).activate()
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/pex_bootstrapper.py b/pex/pex_bootstrapper.py
--- a/pex/pex_bootstrapper.py
+++ b/pex/pex_bootstrapper.py
@@ -46,20 +46,6 @@
raise ValueError('Invalid entry_point: %s' % entry_point)
-# TODO(wickman) Remove once resolved (#91):
-# https://bitbucket.org/pypa/setuptools/issue/154/build_zipmanifest-results-should-be
-def monkeypatch_build_zipmanifest():
- import pkg_resources
- if not hasattr(pkg_resources, 'build_zipmanifest'):
- return
- old_build_zipmanifest = pkg_resources.build_zipmanifest
- def memoized_build_zipmanifest(archive, memo={}):
- if archive not in memo:
- memo[archive] = old_build_zipmanifest(archive)
- return memo[archive]
- pkg_resources.build_zipmanifest = memoized_build_zipmanifest
-
-
def find_in_path(target_interpreter):
if os.path.exists(target_interpreter):
return target_interpreter
@@ -90,7 +76,6 @@
def bootstrap_pex(entry_point):
from .finders import register_finders
- monkeypatch_build_zipmanifest()
register_finders()
maybe_reexec_pex()
@@ -104,7 +89,6 @@
from .finders import register_finders
from .pex_info import PexInfo
- monkeypatch_build_zipmanifest()
register_finders()
PEXEnvironment(entry_point, PexInfo.from_pex(entry_point)).activate()
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -3,5 +3,5 @@
__version__ = '1.1.15'
-SETUPTOOLS_REQUIREMENT = 'setuptools>=2.2,<20.11'
+SETUPTOOLS_REQUIREMENT = 'setuptools>=5.7,<20.11'
WHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.30.0'
| {"golden_diff": "diff --git a/pex/pex_bootstrapper.py b/pex/pex_bootstrapper.py\n--- a/pex/pex_bootstrapper.py\n+++ b/pex/pex_bootstrapper.py\n@@ -46,20 +46,6 @@\n raise ValueError('Invalid entry_point: %s' % entry_point)\n \n \n-# TODO(wickman) Remove once resolved (#91):\n-# https://bitbucket.org/pypa/setuptools/issue/154/build_zipmanifest-results-should-be\n-def monkeypatch_build_zipmanifest():\n- import pkg_resources\n- if not hasattr(pkg_resources, 'build_zipmanifest'):\n- return\n- old_build_zipmanifest = pkg_resources.build_zipmanifest\n- def memoized_build_zipmanifest(archive, memo={}):\n- if archive not in memo:\n- memo[archive] = old_build_zipmanifest(archive)\n- return memo[archive]\n- pkg_resources.build_zipmanifest = memoized_build_zipmanifest\n-\n-\n def find_in_path(target_interpreter):\n if os.path.exists(target_interpreter):\n return target_interpreter\n@@ -90,7 +76,6 @@\n \n def bootstrap_pex(entry_point):\n from .finders import register_finders\n- monkeypatch_build_zipmanifest()\n register_finders()\n maybe_reexec_pex()\n \n@@ -104,7 +89,6 @@\n from .finders import register_finders\n from .pex_info import PexInfo\n \n- monkeypatch_build_zipmanifest()\n register_finders()\n \n PEXEnvironment(entry_point, PexInfo.from_pex(entry_point)).activate()\ndiff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -3,5 +3,5 @@\n \n __version__ = '1.1.15'\n \n-SETUPTOOLS_REQUIREMENT = 'setuptools>=2.2,<20.11'\n+SETUPTOOLS_REQUIREMENT = 'setuptools>=5.7,<20.11'\n WHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.30.0'\n", "issue": "Remove pkg_resources.build_zipmanifest monkeypatching\nThis may involve increasing the minimum setuptools version. Another alternative is vendoring setuptools.\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '1.1.15'\n\nSETUPTOOLS_REQUIREMENT = 'setuptools>=2.2,<20.11'\nWHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.30.0'\n", "path": "pex/version.py"}, {"content": "# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nimport contextlib\nimport os\nimport sys\nimport zipfile\n\n__all__ = ('bootstrap_pex',)\n\n\ndef pex_info_name(entry_point):\n \"\"\"Return the PEX-INFO for an entry_point\"\"\"\n return os.path.join(entry_point, 'PEX-INFO')\n\n\ndef is_compressed(entry_point):\n return os.path.exists(entry_point) and not os.path.exists(pex_info_name(entry_point))\n\n\ndef read_pexinfo_from_directory(entry_point):\n with open(pex_info_name(entry_point), 'rb') as fp:\n return fp.read()\n\n\ndef read_pexinfo_from_zip(entry_point):\n with contextlib.closing(zipfile.ZipFile(entry_point)) as zf:\n return zf.read('PEX-INFO')\n\n\ndef read_pex_info_content(entry_point):\n \"\"\"Return the raw content of a PEX-INFO.\"\"\"\n if is_compressed(entry_point):\n return read_pexinfo_from_zip(entry_point)\n else:\n return read_pexinfo_from_directory(entry_point)\n\n\ndef get_pex_info(entry_point):\n \"\"\"Return the PexInfo object for an entry point.\"\"\"\n from . 
import pex_info\n\n pex_info_content = read_pex_info_content(entry_point)\n if pex_info_content:\n return pex_info.PexInfo.from_json(pex_info_content)\n raise ValueError('Invalid entry_point: %s' % entry_point)\n\n\n# TODO(wickman) Remove once resolved (#91):\n# https://bitbucket.org/pypa/setuptools/issue/154/build_zipmanifest-results-should-be\ndef monkeypatch_build_zipmanifest():\n import pkg_resources\n if not hasattr(pkg_resources, 'build_zipmanifest'):\n return\n old_build_zipmanifest = pkg_resources.build_zipmanifest\n def memoized_build_zipmanifest(archive, memo={}):\n if archive not in memo:\n memo[archive] = old_build_zipmanifest(archive)\n return memo[archive]\n pkg_resources.build_zipmanifest = memoized_build_zipmanifest\n\n\ndef find_in_path(target_interpreter):\n if os.path.exists(target_interpreter):\n return target_interpreter\n\n for directory in os.getenv('PATH', '').split(os.pathsep):\n try_path = os.path.join(directory, target_interpreter)\n if os.path.exists(try_path):\n return try_path\n\n\ndef maybe_reexec_pex():\n from .variables import ENV\n if not ENV.PEX_PYTHON:\n return\n\n from .common import die\n from .tracer import TRACER\n\n target_python = ENV.PEX_PYTHON\n target = find_in_path(target_python)\n if not target:\n die('Failed to find interpreter specified by PEX_PYTHON: %s' % target)\n if os.path.exists(target) and os.path.realpath(target) != os.path.realpath(sys.executable):\n TRACER.log('Detected PEX_PYTHON, re-exec to %s' % target)\n ENV.delete('PEX_PYTHON')\n os.execve(target, [target_python] + sys.argv, ENV.copy())\n\n\ndef bootstrap_pex(entry_point):\n from .finders import register_finders\n monkeypatch_build_zipmanifest()\n register_finders()\n maybe_reexec_pex()\n\n from . import pex\n pex.PEX(entry_point).execute()\n\n\ndef bootstrap_pex_env(entry_point):\n \"\"\"Bootstrap the current runtime environment using a given pex.\"\"\"\n from .environment import PEXEnvironment\n from .finders import register_finders\n from .pex_info import PexInfo\n\n monkeypatch_build_zipmanifest()\n register_finders()\n\n PEXEnvironment(entry_point, PexInfo.from_pex(entry_point)).activate()\n", "path": "pex/pex_bootstrapper.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '1.1.15'\n\nSETUPTOOLS_REQUIREMENT = 'setuptools>=5.7,<20.11'\nWHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.30.0'\n", "path": "pex/version.py"}, {"content": "# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nimport contextlib\nimport os\nimport sys\nimport zipfile\n\n__all__ = ('bootstrap_pex',)\n\n\ndef pex_info_name(entry_point):\n \"\"\"Return the PEX-INFO for an entry_point\"\"\"\n return os.path.join(entry_point, 'PEX-INFO')\n\n\ndef is_compressed(entry_point):\n return os.path.exists(entry_point) and not os.path.exists(pex_info_name(entry_point))\n\n\ndef read_pexinfo_from_directory(entry_point):\n with open(pex_info_name(entry_point), 'rb') as fp:\n return fp.read()\n\n\ndef read_pexinfo_from_zip(entry_point):\n with contextlib.closing(zipfile.ZipFile(entry_point)) as zf:\n return zf.read('PEX-INFO')\n\n\ndef read_pex_info_content(entry_point):\n \"\"\"Return the raw content of a PEX-INFO.\"\"\"\n if is_compressed(entry_point):\n return read_pexinfo_from_zip(entry_point)\n else:\n return read_pexinfo_from_directory(entry_point)\n\n\ndef get_pex_info(entry_point):\n \"\"\"Return the 
PexInfo object for an entry point.\"\"\"\n from . import pex_info\n\n pex_info_content = read_pex_info_content(entry_point)\n if pex_info_content:\n return pex_info.PexInfo.from_json(pex_info_content)\n raise ValueError('Invalid entry_point: %s' % entry_point)\n\n\ndef find_in_path(target_interpreter):\n if os.path.exists(target_interpreter):\n return target_interpreter\n\n for directory in os.getenv('PATH', '').split(os.pathsep):\n try_path = os.path.join(directory, target_interpreter)\n if os.path.exists(try_path):\n return try_path\n\n\ndef maybe_reexec_pex():\n from .variables import ENV\n if not ENV.PEX_PYTHON:\n return\n\n from .common import die\n from .tracer import TRACER\n\n target_python = ENV.PEX_PYTHON\n target = find_in_path(target_python)\n if not target:\n die('Failed to find interpreter specified by PEX_PYTHON: %s' % target)\n if os.path.exists(target) and os.path.realpath(target) != os.path.realpath(sys.executable):\n TRACER.log('Detected PEX_PYTHON, re-exec to %s' % target)\n ENV.delete('PEX_PYTHON')\n os.execve(target, [target_python] + sys.argv, ENV.copy())\n\n\ndef bootstrap_pex(entry_point):\n from .finders import register_finders\n register_finders()\n maybe_reexec_pex()\n\n from . import pex\n pex.PEX(entry_point).execute()\n\n\ndef bootstrap_pex_env(entry_point):\n \"\"\"Bootstrap the current runtime environment using a given pex.\"\"\"\n from .environment import PEXEnvironment\n from .finders import register_finders\n from .pex_info import PexInfo\n\n register_finders()\n\n PEXEnvironment(entry_point, PexInfo.from_pex(entry_point)).activate()\n", "path": "pex/pex_bootstrapper.py"}]} | 1,458 | 476 |
gh_patches_debug_18001 | rasdani/github-patches | git_diff | mozilla__telemetry-analysis-service-258 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ATMO should pre-click my single SSH key
Would save me thousands of milliseconds every time I launch a cluster ;)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `atmo/clusters/views.py`
Content:
```
1 # This Source Code Form is subject to the terms of the Mozilla Public
2 # License, v. 2.0. If a copy of the MPL was not distributed with this
3 # file, you can obtain one at http://mozilla.org/MPL/2.0/.
4 from django.contrib import messages
5 from django.contrib.auth.decorators import login_required
6 from django.shortcuts import redirect, render
7 from django.utils.safestring import mark_safe
8
9 from allauth.account.utils import user_display
10
11 from .forms import NewClusterForm
12 from .models import Cluster
13 from ..decorators import view_permission_required, delete_permission_required
14
15
16 @login_required
17 def new_cluster(request):
18 if request.user.created_sshkeys.count() == 0:
19 messages.error(
20 request,
21 mark_safe(
22 '<h4>No SSH keys associated to you.</h4>'
23 'Please upload one below to be able to launch a cluster.'
24 'This is one-time step.'
25 )
26 )
27 return redirect('keys-new')
28 initial = {
29 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),
30 'size': 1,
31 }
32 form = NewClusterForm(
33 request.user,
34 initial=initial,
35 )
36 if request.method == 'POST':
37 form = NewClusterForm(
38 request.user,
39 data=request.POST,
40 files=request.FILES,
41 initial=initial,
42 )
43 if form.is_valid():
44 cluster = form.save() # this will also magically spawn the cluster for us
45 return redirect(cluster)
46 context = {
47 'form': form,
48 }
49 return render(request, 'atmo/clusters/new.html', context)
50
51
52 @login_required
53 @delete_permission_required(Cluster)
54 def terminate_cluster(request, id):
55 cluster = Cluster.objects.get(id=id)
56 if not cluster.is_active:
57 return redirect(cluster)
58
59 if request.method == 'POST':
60 cluster.deactivate()
61 return redirect(cluster)
62
63 context = {
64 'cluster': cluster,
65 }
66 return render(request, 'atmo/clusters/terminate.html', context=context)
67
68
69 @login_required
70 @view_permission_required(Cluster)
71 def detail_cluster(request, id):
72 cluster = Cluster.objects.get(id=id)
73 context = {
74 'cluster': cluster,
75 }
76 return render(request, 'atmo/clusters/detail.html', context=context)
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/atmo/clusters/views.py b/atmo/clusters/views.py
--- a/atmo/clusters/views.py
+++ b/atmo/clusters/views.py
@@ -15,7 +15,13 @@
@login_required
def new_cluster(request):
- if request.user.created_sshkeys.count() == 0:
+ initial = {
+ 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),
+ 'size': 1,
+ }
+ ssh_key_count = request.user.created_sshkeys.count()
+
+ if ssh_key_count == 0:
messages.error(
request,
mark_safe(
@@ -25,10 +31,10 @@
)
)
return redirect('keys-new')
- initial = {
- 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),
- 'size': 1,
- }
+ elif ssh_key_count == 1:
+ # If only 1 ssh key, make it pre-selected.
+ initial['ssh_key'] = request.user.created_sshkeys.values('pk')[0]['pk']
+
form = NewClusterForm(
request.user,
initial=initial,
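In short, the patch computes the user's SSH-key count once, keeps the existing redirect when there are no keys, and — when exactly one key exists — seeds the form's initial data with that key's primary key so the field renders pre-selected. A minimal standalone sketch of that branch (the helper name and the `display_name` parameter are illustrative; `created_sshkeys` is the project's related manager as used in the diff):

```python
# Minimal sketch of the pre-selection logic: with exactly one SSH key,
# put its primary key into the form's initial data so it comes up selected.
def build_initial_data(user, display_name):
    initial = {
        "identifier": "{}-telemetry-analysis".format(display_name),
        "size": 1,
    }
    ssh_key_count = user.created_sshkeys.count()
    if ssh_key_count == 1:
        initial["ssh_key"] = user.created_sshkeys.values("pk")[0]["pk"]
    return initial
```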
| {"golden_diff": "diff --git a/atmo/clusters/views.py b/atmo/clusters/views.py\n--- a/atmo/clusters/views.py\n+++ b/atmo/clusters/views.py\n@@ -15,7 +15,13 @@\n \n @login_required\n def new_cluster(request):\n- if request.user.created_sshkeys.count() == 0:\n+ initial = {\n+ 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),\n+ 'size': 1,\n+ }\n+ ssh_key_count = request.user.created_sshkeys.count()\n+\n+ if ssh_key_count == 0:\n messages.error(\n request,\n mark_safe(\n@@ -25,10 +31,10 @@\n )\n )\n return redirect('keys-new')\n- initial = {\n- 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),\n- 'size': 1,\n- }\n+ elif ssh_key_count == 1:\n+ # If only 1 ssh key, make it pre-selected.\n+ initial['ssh_key'] = request.user.created_sshkeys.values('pk')[0]['pk']\n+\n form = NewClusterForm(\n request.user,\n initial=initial,\n", "issue": "ATMO should pre-click my single SSH key\nWould save me thousands of milliseconds every time I launch a cluster ;)\n", "before_files": [{"content": "# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this\n# file, you can obtain one at http://mozilla.org/MPL/2.0/.\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.shortcuts import redirect, render\nfrom django.utils.safestring import mark_safe\n\nfrom allauth.account.utils import user_display\n\nfrom .forms import NewClusterForm\nfrom .models import Cluster\nfrom ..decorators import view_permission_required, delete_permission_required\n\n\n@login_required\ndef new_cluster(request):\n if request.user.created_sshkeys.count() == 0:\n messages.error(\n request,\n mark_safe(\n '<h4>No SSH keys associated to you.</h4>'\n 'Please upload one below to be able to launch a cluster.'\n 'This is one-time step.'\n )\n )\n return redirect('keys-new')\n initial = {\n 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),\n 'size': 1,\n }\n form = NewClusterForm(\n request.user,\n initial=initial,\n )\n if request.method == 'POST':\n form = NewClusterForm(\n request.user,\n data=request.POST,\n files=request.FILES,\n initial=initial,\n )\n if form.is_valid():\n cluster = form.save() # this will also magically spawn the cluster for us\n return redirect(cluster)\n context = {\n 'form': form,\n }\n return render(request, 'atmo/clusters/new.html', context)\n\n\n@login_required\n@delete_permission_required(Cluster)\ndef terminate_cluster(request, id):\n cluster = Cluster.objects.get(id=id)\n if not cluster.is_active:\n return redirect(cluster)\n\n if request.method == 'POST':\n cluster.deactivate()\n return redirect(cluster)\n\n context = {\n 'cluster': cluster,\n }\n return render(request, 'atmo/clusters/terminate.html', context=context)\n\n\n@login_required\n@view_permission_required(Cluster)\ndef detail_cluster(request, id):\n cluster = Cluster.objects.get(id=id)\n context = {\n 'cluster': cluster,\n }\n return render(request, 'atmo/clusters/detail.html', context=context)\n", "path": "atmo/clusters/views.py"}], "after_files": [{"content": "# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. 
If a copy of the MPL was not distributed with this\n# file, you can obtain one at http://mozilla.org/MPL/2.0/.\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.shortcuts import redirect, render\nfrom django.utils.safestring import mark_safe\n\nfrom allauth.account.utils import user_display\n\nfrom .forms import NewClusterForm\nfrom .models import Cluster\nfrom ..decorators import view_permission_required, delete_permission_required\n\n\n@login_required\ndef new_cluster(request):\n initial = {\n 'identifier': '{}-telemetry-analysis'.format(user_display(request.user)),\n 'size': 1,\n }\n ssh_key_count = request.user.created_sshkeys.count()\n\n if ssh_key_count == 0:\n messages.error(\n request,\n mark_safe(\n '<h4>No SSH keys associated to you.</h4>'\n 'Please upload one below to be able to launch a cluster.'\n 'This is one-time step.'\n )\n )\n return redirect('keys-new')\n elif ssh_key_count == 1:\n # If only 1 ssh key, make it pre-selected.\n initial['ssh_key'] = request.user.created_sshkeys.values('pk')[0]['pk']\n\n form = NewClusterForm(\n request.user,\n initial=initial,\n )\n if request.method == 'POST':\n form = NewClusterForm(\n request.user,\n data=request.POST,\n files=request.FILES,\n initial=initial,\n )\n if form.is_valid():\n cluster = form.save() # this will also magically spawn the cluster for us\n return redirect(cluster)\n context = {\n 'form': form,\n }\n return render(request, 'atmo/clusters/new.html', context)\n\n\n@login_required\n@delete_permission_required(Cluster)\ndef terminate_cluster(request, id):\n cluster = Cluster.objects.get(id=id)\n if not cluster.is_active:\n return redirect(cluster)\n\n if request.method == 'POST':\n cluster.deactivate()\n return redirect(cluster)\n\n context = {\n 'cluster': cluster,\n }\n return render(request, 'atmo/clusters/terminate.html', context=context)\n\n\n@login_required\n@view_permission_required(Cluster)\ndef detail_cluster(request, id):\n cluster = Cluster.objects.get(id=id)\n context = {\n 'cluster': cluster,\n }\n return render(request, 'atmo/clusters/detail.html', context=context)\n", "path": "atmo/clusters/views.py"}]} | 921 | 262 |
gh_patches_debug_40147 | rasdani/github-patches | git_diff | OpenNMT__OpenNMT-tf-671 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PRF evaluator: list index out of range
Hi!
I'm getting `list index out of range` when prf evaluator is used.
**Config:**
Model: TransformerRelative
params:
beam_width: 1
train:
maximum_features_length: 50
maximum_labels_length: 50
save_summary_steps: 100
sample_buffer_size: 1000000
keep_checkpoint_max: 20
save_checkpoints_steps: 5000
max_step: 2000000
eval:
batch_size: 32
steps: 5000
export_on_best: bleu
external_evaluators: [ "bleu", "prf", "wer" ]
infer:
batch_size: 1024
**Full stack:**
W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Cancelled: Operation was cancelled
Traceback (most recent call last):
File "/home/dima/anaconda3/envs/tf/bin/onmt-main", line 8, in <module>
sys.exit(main())
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/bin/main.py", line 224, in main
hvd=hvd)
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/runner.py", line 217, in train
moving_average_decay=train_config.get("moving_average_decay"))
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/training.py", line 118, in __call__
early_stop = self._evaluate(evaluator, step, moving_average=moving_average)
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/training.py", line 140, in _evaluate
evaluator(step)
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/evaluation.py", line 299, in __call__
score = scorer(self._labels_file, output_path)
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/utils/scorers.py", line 132, in __call__
precision_score, recall_score, fmeasure_score = fmeasure(ref_path, hyp_path)
File "/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/utils/fmeasure.py", line 49, in fmeasure
if tag == classref[linecpt][tagcpt]:
IndexError: list index out of range
Can I help you with the issue? I'm not familiar with the code base, but I can try to reproduce it locally and extract the context if necessary.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opennmt/utils/fmeasure.py`
Content:
```
1 """Hypotheses file scoring for Precision Recall and F-Measure."""
2
3 def fmeasure(ref_path,
4 hyp_path,
5 return_precision_only=False,
6 return_recall_only=False,
7 return_fmeasure_only=False):
8 """Compute Precision Recall and F-Measure between two files"""
9 with open(ref_path) as ref_fp, open(hyp_path) as hyp_fp:
10 list_null_tags = ["X", "null", "NULL", "Null", "O"]
11 listtags = []
12 linecpt = 0
13 classref = []
14 classrandom = []
15 classhyp = []
16 nbrtagref = {}
17 nbrtaghyp = {}
18 nbrtagok = {}
19 for tag in listtags:
20 nbrtagref[tag] = 0
21 nbrtaghyp[tag] = 0
22 nbrtagok[tag] = 0
23 for line in ref_fp:
24 line = line.strip()
25 tabline = line.split(' ')
26 tagcpt = 0
27 lineref = []
28 for tag in tabline:
29 lineref.append(tag)
30 if tag in nbrtagref.keys() and tag not in list_null_tags:
31 nbrtagref[tag] = nbrtagref[tag]+1
32 else:
33 nbrtagref[tag] = 1
34 tagcpt = tagcpt+1
35 classref.append(lineref)
36 linecpt = linecpt+1
37 linecpt = 0
38 for line in hyp_fp:
39 line = line.strip()
40 tabline = line.split(' ')
41 tagcpt = 0
42 linehyp = []
43 linerandom = []
44 for tag in tabline:
45 linehyp.append(tag)
46 if tag not in listtags:
47 listtags.append(tag)
48 linerandom.append(tag)
49 if tag == classref[linecpt][tagcpt]:
50 if tag in nbrtagok.keys():
51 nbrtagok[tag] = nbrtagok[tag]+1
52 else:
53 nbrtagok[tag] = 1
54 tagcpt = tagcpt+1
55 if tag in nbrtaghyp.keys():
56 nbrtaghyp[tag] = nbrtaghyp[tag]+1
57 else:
58 nbrtaghyp[tag] = 1
59 classhyp.append(linehyp)
60 classrandom.append(linerandom)
61 linecpt = linecpt+1
62
63 tagcpt = 0
64 fullprecision = 0
65 fullrecall = 0
66 precision = {}
67 recall = {}
68 fulltagok = 0.00
69 fulltaghyp = 0.00
70 fulltagref = 0.00
71 for tag in listtags:
72 if tag not in nbrtagok:
73 nbrtagok[tag] = 0
74 if tag not in nbrtaghyp:
75 nbrtaghyp[tag] = 0
76 if tag not in nbrtagref:
77 nbrtagref[tag] = 0
78 if nbrtaghyp[tag] != 0:
79 precision[tag] = nbrtagok[tag]/nbrtaghyp[tag]
80 else:
81 precision[tag] = 0
82 if nbrtagref[tag] != 0:
83 recall[tag] = nbrtagok[tag]/nbrtagref[tag]
84 else:
85 recall[tag] = 0
86 if tag not in list_null_tags:
87 fulltagok = fulltagok+nbrtagok[tag]
88 fulltaghyp = fulltaghyp+nbrtaghyp[tag]
89 fulltagref = fulltagref+nbrtagref[tag]
90 # fullprecision = fullprecision+precision[tag]
91 # fullrecall = fullrecall+recall[tag]
92 tagcpt = tagcpt+1
93 fullprecision = round(100*fulltagok/fulltaghyp, 2)/100
94 fullrecall = round(100*fulltagok/fulltagref, 2)/100
95 fullfmeasure = (round((200*fullprecision*fullrecall)/(fullprecision+fullrecall), 2))/100
96 if return_precision_only:
97 return fullprecision
98 if return_recall_only:
99 return fullrecall
100 if return_fmeasure_only:
101 return fullfmeasure
102 return fullprecision, fullrecall, fullfmeasure
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opennmt/utils/fmeasure.py b/opennmt/utils/fmeasure.py
--- a/opennmt/utils/fmeasure.py
+++ b/opennmt/utils/fmeasure.py
@@ -9,21 +9,15 @@
with open(ref_path) as ref_fp, open(hyp_path) as hyp_fp:
list_null_tags = ["X", "null", "NULL", "Null", "O"]
listtags = []
- linecpt = 0
classref = []
classrandom = []
classhyp = []
nbrtagref = {}
nbrtaghyp = {}
nbrtagok = {}
- for tag in listtags:
- nbrtagref[tag] = 0
- nbrtaghyp[tag] = 0
- nbrtagok[tag] = 0
for line in ref_fp:
line = line.strip()
tabline = line.split(' ')
- tagcpt = 0
lineref = []
for tag in tabline:
lineref.append(tag)
@@ -31,36 +25,29 @@
nbrtagref[tag] = nbrtagref[tag]+1
else:
nbrtagref[tag] = 1
- tagcpt = tagcpt+1
classref.append(lineref)
- linecpt = linecpt+1
- linecpt = 0
- for line in hyp_fp:
+ for line, lineref in zip(hyp_fp, classref):
line = line.strip()
tabline = line.split(' ')
- tagcpt = 0
linehyp = []
linerandom = []
- for tag in tabline:
+ for tagcpt, tag in enumerate(tabline):
linehyp.append(tag)
if tag not in listtags:
listtags.append(tag)
linerandom.append(tag)
- if tag == classref[linecpt][tagcpt]:
+ if tagcpt < len(lineref) and tag == lineref[tagcpt]:
if tag in nbrtagok.keys():
nbrtagok[tag] = nbrtagok[tag]+1
else:
nbrtagok[tag] = 1
- tagcpt = tagcpt+1
if tag in nbrtaghyp.keys():
nbrtaghyp[tag] = nbrtaghyp[tag]+1
else:
nbrtaghyp[tag] = 1
classhyp.append(linehyp)
classrandom.append(linerandom)
- linecpt = linecpt+1
- tagcpt = 0
fullprecision = 0
fullrecall = 0
precision = {}
@@ -87,12 +74,11 @@
fulltagok = fulltagok+nbrtagok[tag]
fulltaghyp = fulltaghyp+nbrtaghyp[tag]
fulltagref = fulltagref+nbrtagref[tag]
-# fullprecision = fullprecision+precision[tag]
-# fullrecall = fullrecall+recall[tag]
- tagcpt = tagcpt+1
- fullprecision = round(100*fulltagok/fulltaghyp, 2)/100
- fullrecall = round(100*fulltagok/fulltagref, 2)/100
- fullfmeasure = (round((200*fullprecision*fullrecall)/(fullprecision+fullrecall), 2))/100
+ fullprecision = fulltagok / fulltaghyp if fulltaghyp != 0 else 0
+ fullrecall = fulltagok / fulltagref if fulltagref != 0 else 0
+ fullfmeasure = (
+ (2 * fullprecision * fullrecall) / (fullprecision + fullrecall)
+ if (fullprecision + fullrecall) != 0 else 0)
if return_precision_only:
return fullprecision
if return_recall_only:
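The crux of the fix is pairing each hypothesis line with its reference line via `zip` and bounds-checking the tag position, so a hypothesis longer than its reference (or an extra hypothesis line) can no longer index past the end of `classref`. A minimal sketch of that pairing in isolation (function and variable names are illustrative):

```python
# Minimal sketch of the safe pairing: zip stops at the shorter of the two
# files, and the positional bounds check skips hypothesis tags that extend
# beyond the reference line.
def count_matching_tags(ref_lines, hyp_lines):
    matches = 0
    for ref_line, hyp_line in zip(ref_lines, hyp_lines):
        ref_tags = ref_line.strip().split(" ")
        for pos, tag in enumerate(hyp_line.strip().split(" ")):
            if pos < len(ref_tags) and tag == ref_tags[pos]:
                matches += 1
    return matches
```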
| {"golden_diff": "diff --git a/opennmt/utils/fmeasure.py b/opennmt/utils/fmeasure.py\n--- a/opennmt/utils/fmeasure.py\n+++ b/opennmt/utils/fmeasure.py\n@@ -9,21 +9,15 @@\n with open(ref_path) as ref_fp, open(hyp_path) as hyp_fp:\n list_null_tags = [\"X\", \"null\", \"NULL\", \"Null\", \"O\"]\n listtags = []\n- linecpt = 0\n classref = []\n classrandom = []\n classhyp = []\n nbrtagref = {}\n nbrtaghyp = {}\n nbrtagok = {}\n- for tag in listtags:\n- nbrtagref[tag] = 0\n- nbrtaghyp[tag] = 0\n- nbrtagok[tag] = 0\n for line in ref_fp:\n line = line.strip()\n tabline = line.split(' ')\n- tagcpt = 0\n lineref = []\n for tag in tabline:\n lineref.append(tag)\n@@ -31,36 +25,29 @@\n nbrtagref[tag] = nbrtagref[tag]+1\n else:\n nbrtagref[tag] = 1\n- tagcpt = tagcpt+1\n classref.append(lineref)\n- linecpt = linecpt+1\n- linecpt = 0\n- for line in hyp_fp:\n+ for line, lineref in zip(hyp_fp, classref):\n line = line.strip()\n tabline = line.split(' ')\n- tagcpt = 0\n linehyp = []\n linerandom = []\n- for tag in tabline:\n+ for tagcpt, tag in enumerate(tabline):\n linehyp.append(tag)\n if tag not in listtags:\n listtags.append(tag)\n linerandom.append(tag)\n- if tag == classref[linecpt][tagcpt]:\n+ if tagcpt < len(lineref) and tag == lineref[tagcpt]:\n if tag in nbrtagok.keys():\n nbrtagok[tag] = nbrtagok[tag]+1\n else:\n nbrtagok[tag] = 1\n- tagcpt = tagcpt+1\n if tag in nbrtaghyp.keys():\n nbrtaghyp[tag] = nbrtaghyp[tag]+1\n else:\n nbrtaghyp[tag] = 1\n classhyp.append(linehyp)\n classrandom.append(linerandom)\n- linecpt = linecpt+1\n \n- tagcpt = 0\n fullprecision = 0\n fullrecall = 0\n precision = {}\n@@ -87,12 +74,11 @@\n fulltagok = fulltagok+nbrtagok[tag]\n fulltaghyp = fulltaghyp+nbrtaghyp[tag]\n fulltagref = fulltagref+nbrtagref[tag]\n-# fullprecision = fullprecision+precision[tag]\n-# fullrecall = fullrecall+recall[tag]\n- tagcpt = tagcpt+1\n- fullprecision = round(100*fulltagok/fulltaghyp, 2)/100\n- fullrecall = round(100*fulltagok/fulltagref, 2)/100\n- fullfmeasure = (round((200*fullprecision*fullrecall)/(fullprecision+fullrecall), 2))/100\n+ fullprecision = fulltagok / fulltaghyp if fulltaghyp != 0 else 0\n+ fullrecall = fulltagok / fulltagref if fulltagref != 0 else 0\n+ fullfmeasure = (\n+ (2 * fullprecision * fullrecall) / (fullprecision + fullrecall)\n+ if (fullprecision + fullrecall) != 0 else 0)\n if return_precision_only:\n return fullprecision\n if return_recall_only:\n", "issue": "PRF evaluator: list index out of range\nHi! 
\r\nI'm getting `list index out of range` when prf evaluator is used.\r\n\r\n**Config:**\r\nModel: TransformerRelative\r\nparams:\r\n beam_width: 1\r\n\r\ntrain:\r\n maximum_features_length: 50\r\n maximum_labels_length: 50\r\n save_summary_steps: 100\r\n sample_buffer_size: 1000000\r\n keep_checkpoint_max: 20\r\n save_checkpoints_steps: 5000\r\n max_step: 2000000\r\n\r\neval:\r\n batch_size: 32\r\n steps: 5000\r\n export_on_best: bleu\r\n external_evaluators: [ \"bleu\", \"prf\", \"wer\" ]\r\n\r\ninfer:\r\n batch_size: 1024\r\n\r\n**Full stack:**\r\nW tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Cancelled: Operation was cancelled\r\nTraceback (most recent call last):\r\n File \"/home/dima/anaconda3/envs/tf/bin/onmt-main\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/bin/main.py\", line 224, in main\r\n hvd=hvd)\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/runner.py\", line 217, in train\r\n moving_average_decay=train_config.get(\"moving_average_decay\"))\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/training.py\", line 118, in __call__\r\n early_stop = self._evaluate(evaluator, step, moving_average=moving_average)\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/training.py\", line 140, in _evaluate\r\n evaluator(step)\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/evaluation.py\", line 299, in __call__\r\n score = scorer(self._labels_file, output_path)\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/utils/scorers.py\", line 132, in __call__\r\n precision_score, recall_score, fmeasure_score = fmeasure(ref_path, hyp_path)\r\n File \"/home/dima/anaconda3/envs/tf/lib/python3.7/site-packages/opennmt/utils/fmeasure.py\", line 49, in fmeasure\r\n if tag == classref[linecpt][tagcpt]:\r\nIndexError: list index out of range\r\n\r\nCan I help you with the issue? 
I'm not familiar with the code base, but I can try to reproduce it locally and extract the context if necessary.\r\n\n", "before_files": [{"content": "\"\"\"Hypotheses file scoring for Precision Recall and F-Measure.\"\"\"\n\ndef fmeasure(ref_path,\n hyp_path,\n return_precision_only=False,\n return_recall_only=False,\n return_fmeasure_only=False):\n \"\"\"Compute Precision Recall and F-Measure between two files\"\"\"\n with open(ref_path) as ref_fp, open(hyp_path) as hyp_fp:\n list_null_tags = [\"X\", \"null\", \"NULL\", \"Null\", \"O\"]\n listtags = []\n linecpt = 0\n classref = []\n classrandom = []\n classhyp = []\n nbrtagref = {}\n nbrtaghyp = {}\n nbrtagok = {}\n for tag in listtags:\n nbrtagref[tag] = 0\n nbrtaghyp[tag] = 0\n nbrtagok[tag] = 0\n for line in ref_fp:\n line = line.strip()\n tabline = line.split(' ')\n tagcpt = 0\n lineref = []\n for tag in tabline:\n lineref.append(tag)\n if tag in nbrtagref.keys() and tag not in list_null_tags:\n nbrtagref[tag] = nbrtagref[tag]+1\n else:\n nbrtagref[tag] = 1\n tagcpt = tagcpt+1\n classref.append(lineref)\n linecpt = linecpt+1\n linecpt = 0\n for line in hyp_fp:\n line = line.strip()\n tabline = line.split(' ')\n tagcpt = 0\n linehyp = []\n linerandom = []\n for tag in tabline:\n linehyp.append(tag)\n if tag not in listtags:\n listtags.append(tag)\n linerandom.append(tag)\n if tag == classref[linecpt][tagcpt]:\n if tag in nbrtagok.keys():\n nbrtagok[tag] = nbrtagok[tag]+1\n else:\n nbrtagok[tag] = 1\n tagcpt = tagcpt+1\n if tag in nbrtaghyp.keys():\n nbrtaghyp[tag] = nbrtaghyp[tag]+1\n else:\n nbrtaghyp[tag] = 1\n classhyp.append(linehyp)\n classrandom.append(linerandom)\n linecpt = linecpt+1\n\n tagcpt = 0\n fullprecision = 0\n fullrecall = 0\n precision = {}\n recall = {}\n fulltagok = 0.00\n fulltaghyp = 0.00\n fulltagref = 0.00\n for tag in listtags:\n if tag not in nbrtagok:\n nbrtagok[tag] = 0\n if tag not in nbrtaghyp:\n nbrtaghyp[tag] = 0\n if tag not in nbrtagref:\n nbrtagref[tag] = 0\n if nbrtaghyp[tag] != 0:\n precision[tag] = nbrtagok[tag]/nbrtaghyp[tag]\n else:\n precision[tag] = 0\n if nbrtagref[tag] != 0:\n recall[tag] = nbrtagok[tag]/nbrtagref[tag]\n else:\n recall[tag] = 0\n if tag not in list_null_tags:\n fulltagok = fulltagok+nbrtagok[tag]\n fulltaghyp = fulltaghyp+nbrtaghyp[tag]\n fulltagref = fulltagref+nbrtagref[tag]\n# fullprecision = fullprecision+precision[tag]\n# fullrecall = fullrecall+recall[tag]\n tagcpt = tagcpt+1\n fullprecision = round(100*fulltagok/fulltaghyp, 2)/100\n fullrecall = round(100*fulltagok/fulltagref, 2)/100\n fullfmeasure = (round((200*fullprecision*fullrecall)/(fullprecision+fullrecall), 2))/100\n if return_precision_only:\n return fullprecision\n if return_recall_only:\n return fullrecall\n if return_fmeasure_only:\n return fullfmeasure\n return fullprecision, fullrecall, fullfmeasure\n", "path": "opennmt/utils/fmeasure.py"}], "after_files": [{"content": "\"\"\"Hypotheses file scoring for Precision Recall and F-Measure.\"\"\"\n\ndef fmeasure(ref_path,\n hyp_path,\n return_precision_only=False,\n return_recall_only=False,\n return_fmeasure_only=False):\n \"\"\"Compute Precision Recall and F-Measure between two files\"\"\"\n with open(ref_path) as ref_fp, open(hyp_path) as hyp_fp:\n list_null_tags = [\"X\", \"null\", \"NULL\", \"Null\", \"O\"]\n listtags = []\n classref = []\n classrandom = []\n classhyp = []\n nbrtagref = {}\n nbrtaghyp = {}\n nbrtagok = {}\n for line in ref_fp:\n line = line.strip()\n tabline = line.split(' ')\n lineref = []\n for tag in tabline:\n lineref.append(tag)\n 
if tag in nbrtagref.keys() and tag not in list_null_tags:\n nbrtagref[tag] = nbrtagref[tag]+1\n else:\n nbrtagref[tag] = 1\n classref.append(lineref)\n for line, lineref in zip(hyp_fp, classref):\n line = line.strip()\n tabline = line.split(' ')\n linehyp = []\n linerandom = []\n for tagcpt, tag in enumerate(tabline):\n linehyp.append(tag)\n if tag not in listtags:\n listtags.append(tag)\n linerandom.append(tag)\n if tagcpt < len(lineref) and tag == lineref[tagcpt]:\n if tag in nbrtagok.keys():\n nbrtagok[tag] = nbrtagok[tag]+1\n else:\n nbrtagok[tag] = 1\n if tag in nbrtaghyp.keys():\n nbrtaghyp[tag] = nbrtaghyp[tag]+1\n else:\n nbrtaghyp[tag] = 1\n classhyp.append(linehyp)\n classrandom.append(linerandom)\n\n fullprecision = 0\n fullrecall = 0\n precision = {}\n recall = {}\n fulltagok = 0.00\n fulltaghyp = 0.00\n fulltagref = 0.00\n for tag in listtags:\n if tag not in nbrtagok:\n nbrtagok[tag] = 0\n if tag not in nbrtaghyp:\n nbrtaghyp[tag] = 0\n if tag not in nbrtagref:\n nbrtagref[tag] = 0\n if nbrtaghyp[tag] != 0:\n precision[tag] = nbrtagok[tag]/nbrtaghyp[tag]\n else:\n precision[tag] = 0\n if nbrtagref[tag] != 0:\n recall[tag] = nbrtagok[tag]/nbrtagref[tag]\n else:\n recall[tag] = 0\n if tag not in list_null_tags:\n fulltagok = fulltagok+nbrtagok[tag]\n fulltaghyp = fulltaghyp+nbrtaghyp[tag]\n fulltagref = fulltagref+nbrtagref[tag]\n fullprecision = fulltagok / fulltaghyp if fulltaghyp != 0 else 0\n fullrecall = fulltagok / fulltagref if fulltagref != 0 else 0\n fullfmeasure = (\n (2 * fullprecision * fullrecall) / (fullprecision + fullrecall)\n if (fullprecision + fullrecall) != 0 else 0)\n if return_precision_only:\n return fullprecision\n if return_recall_only:\n return fullrecall\n if return_fmeasure_only:\n return fullfmeasure\n return fullprecision, fullrecall, fullfmeasure\n", "path": "opennmt/utils/fmeasure.py"}]} | 2,017 | 847 |
gh_patches_debug_35704 | rasdani/github-patches | git_diff | vllm-project__vllm-2992 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Benchmarking script for openai chat completion api are not supported
When running vllm with openai chat apis, the benchmarking script will fail as it asserts the backend API of `assert api_url.endswith("v1/completions")`.
```
python benchmark_serving.py --backend openai --model mistralai/Mistral-7B-v0.1 --dataset ShareGPT_V3_unfiltered_cleaned_split.json --save-result
```
The logs are as follows:
```
Namespace(backend='openai', version='N/A', base_url=None, host='localhost', port=8000, endpoint='/generate', dataset='ShareGPT_V3_unfiltered_cleaned_split.json', model='mistralai/Mistral-7B-v0.1', tokenizer=None, best_of=1, use_beam_search=False, num_prompts=1000, request_rate=inf, seed=0, trust_remote_code=False, disable_tqdm=False, save_result=True)
0%| | 0/1000 [00:00<?, ?it/s]Traffic request rate: inf
Traceback (most recent call last):
File "/home/chenw/vllm/benchmarks/benchmark_serving.py", line 387, in <module>
main(args)
File "/home/chenw/vllm/benchmarks/benchmark_serving.py", line 259, in main
benchmark_result = asyncio.run(
File "/home/chenw/miniconda3/envs/myenv/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/home/chenw/miniconda3/envs/myenv/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
return future.result()
File "/home/chenw/vllm/benchmarks/benchmark_serving.py", line 195, in benchmark
outputs = await asyncio.gather(*tasks)
File "/home/chenw/vllm/benchmarks/backend_request_func.py", line 223, in async_request_openai_completions
assert api_url.endswith("v1/completions")
AssertionError
0%| | 0/1000 [00:00<?, ?it/s]
```
The `backend_request_func.py` should not only allow chat apis like: `assert api_url.endswith("v1/chat/completions")`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `benchmarks/backend_request_func.py`
Content:
```
1 import json
2 import os
3 import time
4 from dataclasses import dataclass
5 from typing import Optional
6
7 import aiohttp
8 from tqdm.asyncio import tqdm
9
10 AIOHTTP_TIMEOUT = aiohttp.ClientTimeout(total=6 * 60 * 60)
11
12
13 @dataclass
14 class RequestFuncInput:
15 prompt: str
16 api_url: str
17 prompt_len: int
18 output_len: int
19 model: str
20 best_of: int = 1
21 use_beam_search: bool = False
22
23
24 @dataclass
25 class RequestFuncOutput:
26 generated_text: str = ""
27 success: bool = False
28 latency: float = 0
29 ttft: float = 0
30 prompt_len: int = 0
31
32
33 async def async_request_tgi(
34 request_func_input: RequestFuncInput,
35 pbar: Optional[tqdm] = None,
36 ) -> RequestFuncOutput:
37 api_url = request_func_input.api_url
38 assert api_url.endswith("generate_stream")
39
40 async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:
41 assert not request_func_input.use_beam_search
42 params = {
43 "best_of": request_func_input.best_of,
44 "max_new_tokens": request_func_input.output_len,
45 "do_sample": True,
46 "temperature": 0.01, # TGI does not accept 0.0 temperature.
47 "top_p": 0.99, # TGI does not accept 1.0 top_p.
48 }
49 payload = {
50 "inputs": request_func_input.prompt,
51 "parameters": params,
52 }
53 output = RequestFuncOutput()
54 output.prompt_len = request_func_input.prompt_len
55
56 ttft = 0
57 st = time.perf_counter()
58 try:
59 async with session.post(url=api_url, json=payload) as response:
60 if response.status == 200:
61 async for data in response.content.iter_any():
62 if ttft == 0:
63 ttft = time.perf_counter() - st
64 output.ttft = ttft
65 output.latency = time.perf_counter() - st
66
67 body = data.decode("utf-8").lstrip("data:")
68 output.generated_text = json.loads(body)["generated_text"]
69 output.success = True
70 else:
71 output.success = False
72 except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):
73 output.success = False
74
75 if pbar:
76 pbar.update(1)
77 return output
78
79
80 async def async_request_vllm(
81 request_func_input: RequestFuncInput,
82 pbar: Optional[tqdm] = None,
83 ) -> RequestFuncOutput:
84 api_url = request_func_input.api_url
85 assert api_url.endswith("generate")
86
87 async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:
88 payload = {
89 "prompt": request_func_input.prompt,
90 "n": 1,
91 "best_of": request_func_input.best_of,
92 "use_beam_search": request_func_input.use_beam_search,
93 "temperature": 0.0 if request_func_input.use_beam_search else 1.0,
94 "top_p": 1.0,
95 "max_tokens": request_func_input.output_len,
96 "ignore_eos": True,
97 "stream": True,
98 }
99 output = RequestFuncOutput()
100 output.prompt_len = request_func_input.prompt_len
101
102 ttft = 0
103 st = time.perf_counter()
104 try:
105 async with session.post(url=api_url, json=payload) as response:
106 if response.status == 200:
107 async for data in response.content.iter_any():
108 if ttft == 0:
109 ttft = time.perf_counter() - st
110 output.ttft = ttft
111 output.latency = time.perf_counter() - st
112
113 # When streaming, '\0' is appended to the end of the response.
114 body = data.decode("utf-8").strip("\0")
115 output.generated_text = json.loads(
116 body)["text"][0][len(request_func_input.prompt):]
117 output.success = True
118
119 else:
120 output.success = False
121 except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):
122 output.success = False
123
124 if pbar:
125 pbar.update(1)
126 return output
127
128
129 async def async_request_trt_llm(
130 request_func_input: RequestFuncInput,
131 pbar: Optional[tqdm] = None,
132 ) -> RequestFuncOutput:
133 api_url = request_func_input.api_url
134 assert api_url.endswith("generate_stream")
135
136 async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:
137 assert not request_func_input.use_beam_search
138 assert request_func_input.best_of == 1
139 payload = {
140 "accumulate_tokens": True,
141 "text_input": request_func_input.prompt,
142 "temperature": 0.0,
143 "top_p": 1.0,
144 "max_tokens": request_func_input.output_len,
145 "stream": True,
146 }
147 output = RequestFuncOutput()
148 output.prompt_len = request_func_input.prompt_len
149 ttft = 0
150
151 st = time.perf_counter()
152 try:
153 async with session.post(url=api_url, json=payload) as resp:
154 if resp.status == 200:
155 async for data in resp.content.iter_any():
156 if ttft == 0:
157 ttft = time.perf_counter() - st
158 output.ttft = ttft
159 output.latency = time.perf_counter() - st
160
161 body = data.decode("utf-8").lstrip("data:")
162 output.generated_text = json.loads(body)["text_output"]
163 output.success = True
164
165 else:
166 output.success = False
167 except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):
168 output.success = False
169
170 if pbar:
171 pbar.update(1)
172 return output
173
174
175 async def async_request_deepspeed_mii(
176 request_func_input: RequestFuncInput,
177 pbar: Optional[tqdm] = None,
178 ) -> RequestFuncOutput:
179 async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:
180 assert request_func_input.best_of == 1
181 assert not request_func_input.use_beam_search
182
183 payload = {
184 "prompts": request_func_input.prompt,
185 "max_new_tokens": request_func_input.output_len,
186 "ignore_eos": True,
187 "do_sample": True,
188 "temperature":
189 0.01, # deepspeed-mii does not accept 0.0 temperature.
190 "top_p": 1.0,
191 }
192 output = RequestFuncOutput()
193 output.prompt_len = request_func_input.prompt_len
194
195 # DeepSpeed-MII doesn't support streaming as of Jan 28 2024, will use 0 as placeholder.
196 # https://github.com/microsoft/DeepSpeed-MII/pull/311
197 output.ttft = 0
198
199 st = time.perf_counter()
200 try:
201 async with session.post(url=request_func_input.api_url,
202 json=payload) as resp:
203 if resp.status == 200:
204 parsed_resp = await resp.json()
205 output.latency = time.perf_counter() - st
206 output.generated_text = parsed_resp[0]["generated_text"]
207 output.success = True
208 else:
209 output.success = False
210 except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):
211 output.success = False
212
213 if pbar:
214 pbar.update(1)
215 return output
216
217
218 async def async_request_openai_completions(
219 request_func_input: RequestFuncInput,
220 pbar: Optional[tqdm] = None,
221 ) -> RequestFuncOutput:
222 api_url = request_func_input.api_url
223 assert api_url.endswith("v1/completions")
224
225 async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:
226 assert not request_func_input.use_beam_search
227 payload = {
228 "model": request_func_input.model,
229 "prompt": request_func_input.prompt,
230 "temperature": 0.0,
231 "best_of": request_func_input.best_of,
232 "max_tokens": request_func_input.output_len,
233 "stream": True,
234 }
235 headers = {
236 "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY')}"
237 }
238
239 output = RequestFuncOutput()
240 output.prompt_len = request_func_input.prompt_len
241
242 generated_text = ""
243 ttft = 0
244 st = time.perf_counter()
245 try:
246 async with session.post(url=api_url, json=payload,
247 headers=headers) as response:
248 if response.status == 200:
249 async for chunk in response.content:
250 if ttft == 0:
251 ttft = time.perf_counter() - st
252 output.ttft = ttft
253
254 chunk = chunk.strip()
255 if not chunk:
256 continue
257
258 chunk = chunk.decode("utf-8").lstrip("data: ")
259 if chunk == "[DONE]":
260 latency = time.perf_counter() - st
261 else:
262 body = json.loads(chunk)
263 generated_text += body["choices"][0]["text"]
264
265 output.generated_text = generated_text
266 output.success = True
267 output.latency = latency
268 else:
269 output.success = False
270 except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):
271 output.success = False
272
273 if pbar:
274 pbar.update(1)
275 return output
276
277
278 ASYNC_REQUEST_FUNCS = {
279 "tgi": async_request_tgi,
280 "vllm": async_request_vllm,
281 "deepspeed-mii": async_request_deepspeed_mii,
282 "openai": async_request_openai_completions,
283 "tensorrt-llm": async_request_trt_llm,
284 }
285
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/benchmarks/backend_request_func.py b/benchmarks/backend_request_func.py
--- a/benchmarks/backend_request_func.py
+++ b/benchmarks/backend_request_func.py
@@ -275,10 +275,80 @@
return output
+async def async_request_openai_chat_completions(
+ request_func_input: RequestFuncInput,
+ pbar: Optional[tqdm] = None,
+) -> RequestFuncOutput:
+ api_url = request_func_input.api_url
+ assert api_url.endswith(
+ "v1/chat/completions"
+ ), "OpenAI Chat API URL must end with 'v1/chat/completions'."
+
+ async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:
+ assert not request_func_input.use_beam_search
+ payload = {
+ "model": request_func_input.model,
+ "messages": [
+ {
+ "role": "user",
+ "content": request_func_input.prompt,
+ },
+ ],
+ "temperature": 0.0,
+ "max_tokens": request_func_input.output_len,
+ "stream": True,
+ }
+ headers = {
+ "Content-Type": "application/json",
+ "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY')}"
+ }
+
+ output = RequestFuncOutput()
+ output.prompt_len = request_func_input.prompt_len
+
+ generated_text = ""
+ ttft = 0
+ st = time.perf_counter()
+ try:
+ async with session.post(url=api_url, json=payload,
+ headers=headers) as response:
+ if response.status == 200:
+ async for chunk in response.content:
+ if ttft == 0:
+ ttft = time.perf_counter() - st
+ output.ttft = ttft
+
+ chunk = chunk.strip()
+ if not chunk:
+ continue
+
+ chunk = chunk.decode("utf-8").lstrip("data: ")
+ if chunk == "[DONE]":
+ latency = time.perf_counter() - st
+ else:
+ body = json.loads(chunk)
+ if "content" in body["choices"][0]["delta"]:
+ generated_text += body["choices"][0]["delta"][
+ "content"]
+
+ output.generated_text = generated_text
+ output.success = True
+ output.latency = latency
+ else:
+ output.success = False
+ except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):
+ output.success = False
+
+ if pbar:
+ pbar.update(1)
+ return output
+
+
ASYNC_REQUEST_FUNCS = {
"tgi": async_request_tgi,
"vllm": async_request_vllm,
"deepspeed-mii": async_request_deepspeed_mii,
"openai": async_request_openai_completions,
+ "openai-chat": async_request_openai_chat_completions,
"tensorrt-llm": async_request_trt_llm,
}
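The new chat request function differs from the plain-completions one mainly in payload shape — a `messages` list instead of a `prompt` string — and in reading streamed `delta` content rather than `text`. A minimal sketch of just that payload split (the helper name is illustrative):

```python
# Minimal sketch of the payload difference handled by the new backend:
# chat completions send a `messages` list; plain completions send `prompt`.
def build_payload(model, prompt, max_tokens, chat=False):
    base = {
        "model": model,
        "temperature": 0.0,
        "max_tokens": max_tokens,
        "stream": True,
    }
    if chat:
        return {**base, "messages": [{"role": "user", "content": prompt}]}
    return {**base, "prompt": prompt}
```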
| {"golden_diff": "diff --git a/benchmarks/backend_request_func.py b/benchmarks/backend_request_func.py\n--- a/benchmarks/backend_request_func.py\n+++ b/benchmarks/backend_request_func.py\n@@ -275,10 +275,80 @@\n return output\n \n \n+async def async_request_openai_chat_completions(\n+ request_func_input: RequestFuncInput,\n+ pbar: Optional[tqdm] = None,\n+) -> RequestFuncOutput:\n+ api_url = request_func_input.api_url\n+ assert api_url.endswith(\n+ \"v1/chat/completions\"\n+ ), \"OpenAI Chat API URL must end with 'v1/chat/completions'.\"\n+\n+ async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:\n+ assert not request_func_input.use_beam_search\n+ payload = {\n+ \"model\": request_func_input.model,\n+ \"messages\": [\n+ {\n+ \"role\": \"user\",\n+ \"content\": request_func_input.prompt,\n+ },\n+ ],\n+ \"temperature\": 0.0,\n+ \"max_tokens\": request_func_input.output_len,\n+ \"stream\": True,\n+ }\n+ headers = {\n+ \"Content-Type\": \"application/json\",\n+ \"Authorization\": f\"Bearer {os.environ.get('OPENAI_API_KEY')}\"\n+ }\n+\n+ output = RequestFuncOutput()\n+ output.prompt_len = request_func_input.prompt_len\n+\n+ generated_text = \"\"\n+ ttft = 0\n+ st = time.perf_counter()\n+ try:\n+ async with session.post(url=api_url, json=payload,\n+ headers=headers) as response:\n+ if response.status == 200:\n+ async for chunk in response.content:\n+ if ttft == 0:\n+ ttft = time.perf_counter() - st\n+ output.ttft = ttft\n+\n+ chunk = chunk.strip()\n+ if not chunk:\n+ continue\n+\n+ chunk = chunk.decode(\"utf-8\").lstrip(\"data: \")\n+ if chunk == \"[DONE]\":\n+ latency = time.perf_counter() - st\n+ else:\n+ body = json.loads(chunk)\n+ if \"content\" in body[\"choices\"][0][\"delta\"]:\n+ generated_text += body[\"choices\"][0][\"delta\"][\n+ \"content\"]\n+\n+ output.generated_text = generated_text\n+ output.success = True\n+ output.latency = latency\n+ else:\n+ output.success = False\n+ except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):\n+ output.success = False\n+\n+ if pbar:\n+ pbar.update(1)\n+ return output\n+\n+\n ASYNC_REQUEST_FUNCS = {\n \"tgi\": async_request_tgi,\n \"vllm\": async_request_vllm,\n \"deepspeed-mii\": async_request_deepspeed_mii,\n \"openai\": async_request_openai_completions,\n+ \"openai-chat\": async_request_openai_chat_completions,\n \"tensorrt-llm\": async_request_trt_llm,\n }\n", "issue": "Benchmarking script for openai chat completion api are not supported\nWhen running vllm with openai chat apis, the benchmarking script will fail as it asserts the backend API of `assert api_url.endswith(\"v1/completions\")`.\r\n\r\n```\r\npython benchmark_serving.py --backend openai --model mistralai/Mistral-7B-v0.1 --dataset ShareGPT_V3_unfiltered_cleaned_split.json --save-result\r\n```\r\n\r\nThe logs are as follows:\r\n```\r\nNamespace(backend='openai', version='N/A', base_url=None, host='localhost', port=8000, endpoint='/generate', dataset='ShareGPT_V3_unfiltered_cleaned_split.json', model='mistralai/Mistral-7B-v0.1', tokenizer=None, best_of=1, use_beam_search=False, num_prompts=1000, request_rate=inf, seed=0, trust_remote_code=False, disable_tqdm=False, save_result=True)\r\n 0%| | 0/1000 [00:00<?, ?it/s]Traffic request rate: inf\r\nTraceback (most recent call last):\r\n File \"/home/chenw/vllm/benchmarks/benchmark_serving.py\", line 387, in <module>\r\n main(args)\r\n File \"/home/chenw/vllm/benchmarks/benchmark_serving.py\", line 259, in main\r\n benchmark_result = asyncio.run(\r\n File 
\"/home/chenw/miniconda3/envs/myenv/lib/python3.9/asyncio/runners.py\", line 44, in run\r\n return loop.run_until_complete(main)\r\n File \"/home/chenw/miniconda3/envs/myenv/lib/python3.9/asyncio/base_events.py\", line 647, in run_until_complete\r\n return future.result()\r\n File \"/home/chenw/vllm/benchmarks/benchmark_serving.py\", line 195, in benchmark\r\n outputs = await asyncio.gather(*tasks)\r\n File \"/home/chenw/vllm/benchmarks/backend_request_func.py\", line 223, in async_request_openai_completions\r\n assert api_url.endswith(\"v1/completions\")\r\nAssertionError\r\n 0%| | 0/1000 [00:00<?, ?it/s]\r\n```\r\n\r\nThe `backend_request_func.py` should not only allow chat apis like: `assert api_url.endswith(\"v1/chat/completions\")`.\r\n\n", "before_files": [{"content": "import json\nimport os\nimport time\nfrom dataclasses import dataclass\nfrom typing import Optional\n\nimport aiohttp\nfrom tqdm.asyncio import tqdm\n\nAIOHTTP_TIMEOUT = aiohttp.ClientTimeout(total=6 * 60 * 60)\n\n\n@dataclass\nclass RequestFuncInput:\n prompt: str\n api_url: str\n prompt_len: int\n output_len: int\n model: str\n best_of: int = 1\n use_beam_search: bool = False\n\n\n@dataclass\nclass RequestFuncOutput:\n generated_text: str = \"\"\n success: bool = False\n latency: float = 0\n ttft: float = 0\n prompt_len: int = 0\n\n\nasync def async_request_tgi(\n request_func_input: RequestFuncInput,\n pbar: Optional[tqdm] = None,\n) -> RequestFuncOutput:\n api_url = request_func_input.api_url\n assert api_url.endswith(\"generate_stream\")\n\n async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:\n assert not request_func_input.use_beam_search\n params = {\n \"best_of\": request_func_input.best_of,\n \"max_new_tokens\": request_func_input.output_len,\n \"do_sample\": True,\n \"temperature\": 0.01, # TGI does not accept 0.0 temperature.\n \"top_p\": 0.99, # TGI does not accept 1.0 top_p.\n }\n payload = {\n \"inputs\": request_func_input.prompt,\n \"parameters\": params,\n }\n output = RequestFuncOutput()\n output.prompt_len = request_func_input.prompt_len\n\n ttft = 0\n st = time.perf_counter()\n try:\n async with session.post(url=api_url, json=payload) as response:\n if response.status == 200:\n async for data in response.content.iter_any():\n if ttft == 0:\n ttft = time.perf_counter() - st\n output.ttft = ttft\n output.latency = time.perf_counter() - st\n\n body = data.decode(\"utf-8\").lstrip(\"data:\")\n output.generated_text = json.loads(body)[\"generated_text\"]\n output.success = True\n else:\n output.success = False\n except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):\n output.success = False\n\n if pbar:\n pbar.update(1)\n return output\n\n\nasync def async_request_vllm(\n request_func_input: RequestFuncInput,\n pbar: Optional[tqdm] = None,\n) -> RequestFuncOutput:\n api_url = request_func_input.api_url\n assert api_url.endswith(\"generate\")\n\n async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:\n payload = {\n \"prompt\": request_func_input.prompt,\n \"n\": 1,\n \"best_of\": request_func_input.best_of,\n \"use_beam_search\": request_func_input.use_beam_search,\n \"temperature\": 0.0 if request_func_input.use_beam_search else 1.0,\n \"top_p\": 1.0,\n \"max_tokens\": request_func_input.output_len,\n \"ignore_eos\": True,\n \"stream\": True,\n }\n output = RequestFuncOutput()\n output.prompt_len = request_func_input.prompt_len\n\n ttft = 0\n st = time.perf_counter()\n try:\n async with session.post(url=api_url, json=payload) as response:\n if response.status 
== 200:\n async for data in response.content.iter_any():\n if ttft == 0:\n ttft = time.perf_counter() - st\n output.ttft = ttft\n output.latency = time.perf_counter() - st\n\n # When streaming, '\\0' is appended to the end of the response.\n body = data.decode(\"utf-8\").strip(\"\\0\")\n output.generated_text = json.loads(\n body)[\"text\"][0][len(request_func_input.prompt):]\n output.success = True\n\n else:\n output.success = False\n except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):\n output.success = False\n\n if pbar:\n pbar.update(1)\n return output\n\n\nasync def async_request_trt_llm(\n request_func_input: RequestFuncInput,\n pbar: Optional[tqdm] = None,\n) -> RequestFuncOutput:\n api_url = request_func_input.api_url\n assert api_url.endswith(\"generate_stream\")\n\n async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:\n assert not request_func_input.use_beam_search\n assert request_func_input.best_of == 1\n payload = {\n \"accumulate_tokens\": True,\n \"text_input\": request_func_input.prompt,\n \"temperature\": 0.0,\n \"top_p\": 1.0,\n \"max_tokens\": request_func_input.output_len,\n \"stream\": True,\n }\n output = RequestFuncOutput()\n output.prompt_len = request_func_input.prompt_len\n ttft = 0\n\n st = time.perf_counter()\n try:\n async with session.post(url=api_url, json=payload) as resp:\n if resp.status == 200:\n async for data in resp.content.iter_any():\n if ttft == 0:\n ttft = time.perf_counter() - st\n output.ttft = ttft\n output.latency = time.perf_counter() - st\n\n body = data.decode(\"utf-8\").lstrip(\"data:\")\n output.generated_text = json.loads(body)[\"text_output\"]\n output.success = True\n\n else:\n output.success = False\n except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):\n output.success = False\n\n if pbar:\n pbar.update(1)\n return output\n\n\nasync def async_request_deepspeed_mii(\n request_func_input: RequestFuncInput,\n pbar: Optional[tqdm] = None,\n) -> RequestFuncOutput:\n async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:\n assert request_func_input.best_of == 1\n assert not request_func_input.use_beam_search\n\n payload = {\n \"prompts\": request_func_input.prompt,\n \"max_new_tokens\": request_func_input.output_len,\n \"ignore_eos\": True,\n \"do_sample\": True,\n \"temperature\":\n 0.01, # deepspeed-mii does not accept 0.0 temperature.\n \"top_p\": 1.0,\n }\n output = RequestFuncOutput()\n output.prompt_len = request_func_input.prompt_len\n\n # DeepSpeed-MII doesn't support streaming as of Jan 28 2024, will use 0 as placeholder.\n # https://github.com/microsoft/DeepSpeed-MII/pull/311\n output.ttft = 0\n\n st = time.perf_counter()\n try:\n async with session.post(url=request_func_input.api_url,\n json=payload) as resp:\n if resp.status == 200:\n parsed_resp = await resp.json()\n output.latency = time.perf_counter() - st\n output.generated_text = parsed_resp[0][\"generated_text\"]\n output.success = True\n else:\n output.success = False\n except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):\n output.success = False\n\n if pbar:\n pbar.update(1)\n return output\n\n\nasync def async_request_openai_completions(\n request_func_input: RequestFuncInput,\n pbar: Optional[tqdm] = None,\n) -> RequestFuncOutput:\n api_url = request_func_input.api_url\n assert api_url.endswith(\"v1/completions\")\n\n async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:\n assert not request_func_input.use_beam_search\n payload = {\n \"model\": request_func_input.model,\n \"prompt\": 
request_func_input.prompt,\n \"temperature\": 0.0,\n \"best_of\": request_func_input.best_of,\n \"max_tokens\": request_func_input.output_len,\n \"stream\": True,\n }\n headers = {\n \"Authorization\": f\"Bearer {os.environ.get('OPENAI_API_KEY')}\"\n }\n\n output = RequestFuncOutput()\n output.prompt_len = request_func_input.prompt_len\n\n generated_text = \"\"\n ttft = 0\n st = time.perf_counter()\n try:\n async with session.post(url=api_url, json=payload,\n headers=headers) as response:\n if response.status == 200:\n async for chunk in response.content:\n if ttft == 0:\n ttft = time.perf_counter() - st\n output.ttft = ttft\n\n chunk = chunk.strip()\n if not chunk:\n continue\n\n chunk = chunk.decode(\"utf-8\").lstrip(\"data: \")\n if chunk == \"[DONE]\":\n latency = time.perf_counter() - st\n else:\n body = json.loads(chunk)\n generated_text += body[\"choices\"][0][\"text\"]\n\n output.generated_text = generated_text\n output.success = True\n output.latency = latency\n else:\n output.success = False\n except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):\n output.success = False\n\n if pbar:\n pbar.update(1)\n return output\n\n\nASYNC_REQUEST_FUNCS = {\n \"tgi\": async_request_tgi,\n \"vllm\": async_request_vllm,\n \"deepspeed-mii\": async_request_deepspeed_mii,\n \"openai\": async_request_openai_completions,\n \"tensorrt-llm\": async_request_trt_llm,\n}\n", "path": "benchmarks/backend_request_func.py"}], "after_files": [{"content": "import json\nimport os\nimport time\nfrom dataclasses import dataclass\nfrom typing import Optional\n\nimport aiohttp\nfrom tqdm.asyncio import tqdm\n\nAIOHTTP_TIMEOUT = aiohttp.ClientTimeout(total=6 * 60 * 60)\n\n\n@dataclass\nclass RequestFuncInput:\n prompt: str\n api_url: str\n prompt_len: int\n output_len: int\n model: str\n best_of: int = 1\n use_beam_search: bool = False\n\n\n@dataclass\nclass RequestFuncOutput:\n generated_text: str = \"\"\n success: bool = False\n latency: float = 0\n ttft: float = 0\n prompt_len: int = 0\n\n\nasync def async_request_tgi(\n request_func_input: RequestFuncInput,\n pbar: Optional[tqdm] = None,\n) -> RequestFuncOutput:\n api_url = request_func_input.api_url\n assert api_url.endswith(\"generate_stream\")\n\n async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:\n assert not request_func_input.use_beam_search\n params = {\n \"best_of\": request_func_input.best_of,\n \"max_new_tokens\": request_func_input.output_len,\n \"do_sample\": True,\n \"temperature\": 0.01, # TGI does not accept 0.0 temperature.\n \"top_p\": 0.99, # TGI does not accept 1.0 top_p.\n }\n payload = {\n \"inputs\": request_func_input.prompt,\n \"parameters\": params,\n }\n output = RequestFuncOutput()\n output.prompt_len = request_func_input.prompt_len\n\n ttft = 0\n st = time.perf_counter()\n try:\n async with session.post(url=api_url, json=payload) as response:\n if response.status == 200:\n async for data in response.content.iter_any():\n if ttft == 0:\n ttft = time.perf_counter() - st\n output.ttft = ttft\n output.latency = time.perf_counter() - st\n\n body = data.decode(\"utf-8\").lstrip(\"data:\")\n output.generated_text = json.loads(body)[\"generated_text\"]\n output.success = True\n else:\n output.success = False\n except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):\n output.success = False\n\n if pbar:\n pbar.update(1)\n return output\n\n\nasync def async_request_vllm(\n request_func_input: RequestFuncInput,\n pbar: Optional[tqdm] = None,\n) -> RequestFuncOutput:\n api_url = request_func_input.api_url\n 
assert api_url.endswith(\"generate\")\n\n async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:\n payload = {\n \"prompt\": request_func_input.prompt,\n \"n\": 1,\n \"best_of\": request_func_input.best_of,\n \"use_beam_search\": request_func_input.use_beam_search,\n \"temperature\": 0.0 if request_func_input.use_beam_search else 1.0,\n \"top_p\": 1.0,\n \"max_tokens\": request_func_input.output_len,\n \"ignore_eos\": True,\n \"stream\": True,\n }\n output = RequestFuncOutput()\n output.prompt_len = request_func_input.prompt_len\n\n ttft = 0\n st = time.perf_counter()\n try:\n async with session.post(url=api_url, json=payload) as response:\n if response.status == 200:\n async for data in response.content.iter_any():\n if ttft == 0:\n ttft = time.perf_counter() - st\n output.ttft = ttft\n output.latency = time.perf_counter() - st\n\n # When streaming, '\\0' is appended to the end of the response.\n body = data.decode(\"utf-8\").strip(\"\\0\")\n output.generated_text = json.loads(\n body)[\"text\"][0][len(request_func_input.prompt):]\n output.success = True\n\n else:\n output.success = False\n except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):\n output.success = False\n\n if pbar:\n pbar.update(1)\n return output\n\n\nasync def async_request_trt_llm(\n request_func_input: RequestFuncInput,\n pbar: Optional[tqdm] = None,\n) -> RequestFuncOutput:\n api_url = request_func_input.api_url\n assert api_url.endswith(\"generate_stream\")\n\n async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:\n assert not request_func_input.use_beam_search\n assert request_func_input.best_of == 1\n payload = {\n \"accumulate_tokens\": True,\n \"text_input\": request_func_input.prompt,\n \"temperature\": 0.0,\n \"top_p\": 1.0,\n \"max_tokens\": request_func_input.output_len,\n \"stream\": True,\n }\n output = RequestFuncOutput()\n output.prompt_len = request_func_input.prompt_len\n ttft = 0\n\n st = time.perf_counter()\n try:\n async with session.post(url=api_url, json=payload) as resp:\n if resp.status == 200:\n async for data in resp.content.iter_any():\n if ttft == 0:\n ttft = time.perf_counter() - st\n output.ttft = ttft\n output.latency = time.perf_counter() - st\n\n body = data.decode(\"utf-8\").lstrip(\"data:\")\n output.generated_text = json.loads(body)[\"text_output\"]\n output.success = True\n\n else:\n output.success = False\n except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):\n output.success = False\n\n if pbar:\n pbar.update(1)\n return output\n\n\nasync def async_request_deepspeed_mii(\n request_func_input: RequestFuncInput,\n pbar: Optional[tqdm] = None,\n) -> RequestFuncOutput:\n async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:\n assert request_func_input.best_of == 1\n assert not request_func_input.use_beam_search\n\n payload = {\n \"prompts\": request_func_input.prompt,\n \"max_new_tokens\": request_func_input.output_len,\n \"ignore_eos\": True,\n \"do_sample\": True,\n \"temperature\":\n 0.01, # deepspeed-mii does not accept 0.0 temperature.\n \"top_p\": 1.0,\n }\n output = RequestFuncOutput()\n output.prompt_len = request_func_input.prompt_len\n\n # DeepSpeed-MII doesn't support streaming as of Jan 28 2024, will use 0 as placeholder.\n # https://github.com/microsoft/DeepSpeed-MII/pull/311\n output.ttft = 0\n\n st = time.perf_counter()\n try:\n async with session.post(url=request_func_input.api_url,\n json=payload) as resp:\n if resp.status == 200:\n parsed_resp = await resp.json()\n output.latency = 
time.perf_counter() - st\n output.generated_text = parsed_resp[0][\"generated_text\"]\n output.success = True\n else:\n output.success = False\n except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):\n output.success = False\n\n if pbar:\n pbar.update(1)\n return output\n\n\nasync def async_request_openai_completions(\n request_func_input: RequestFuncInput,\n pbar: Optional[tqdm] = None,\n) -> RequestFuncOutput:\n api_url = request_func_input.api_url\n assert api_url.endswith(\"v1/completions\")\n\n async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:\n assert not request_func_input.use_beam_search\n payload = {\n \"model\": request_func_input.model,\n \"prompt\": request_func_input.prompt,\n \"temperature\": 0.0,\n \"best_of\": request_func_input.best_of,\n \"max_tokens\": request_func_input.output_len,\n \"stream\": True,\n }\n headers = {\n \"Authorization\": f\"Bearer {os.environ.get('OPENAI_API_KEY')}\"\n }\n\n output = RequestFuncOutput()\n output.prompt_len = request_func_input.prompt_len\n\n generated_text = \"\"\n ttft = 0\n st = time.perf_counter()\n try:\n async with session.post(url=api_url, json=payload,\n headers=headers) as response:\n if response.status == 200:\n async for chunk in response.content:\n if ttft == 0:\n ttft = time.perf_counter() - st\n output.ttft = ttft\n\n chunk = chunk.strip()\n if not chunk:\n continue\n\n chunk = chunk.decode(\"utf-8\").lstrip(\"data: \")\n if chunk == \"[DONE]\":\n latency = time.perf_counter() - st\n else:\n body = json.loads(chunk)\n generated_text += body[\"choices\"][0][\"text\"]\n\n output.generated_text = generated_text\n output.success = True\n output.latency = latency\n else:\n output.success = False\n except (aiohttp.ClientOSError, aiohttp.ServerDisconnectedError):\n output.success = False\n\n if pbar:\n pbar.update(1)\n return output\n\n\nasync def async_request_openai_chat_completions(\n request_func_input: RequestFuncInput,\n pbar: Optional[tqdm] = None,\n) -> RequestFuncOutput:\n api_url = request_func_input.api_url\n assert api_url.endswith(\n \"v1/chat/completions\"\n ), \"OpenAI Chat API URL must end with 'v1/chat/completions'.\"\n\n async with aiohttp.ClientSession(timeout=AIOHTTP_TIMEOUT) as session:\n assert not request_func_input.use_beam_search\n payload = {\n \"model\": request_func_input.model,\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": request_func_input.prompt,\n },\n ],\n \"temperature\": 0.0,\n \"max_tokens\": request_func_input.output_len,\n \"stream\": True,\n }\n headers = {\n \"Content-Type\": \"application/json\",\n \"Authorization\": f\"Bearer {os.environ.get('OPENAI_API_KEY')}\"\n }\n\n output = RequestFuncOutput()\n output.prompt_len = request_func_input.prompt_len\n\n generated_text = \"\"\n ttft = 0\n st = time.perf_counter()\n try:\n async with session.post(url=api_url, json=payload,\n headers=headers) as response:\n if response.status == 200:\n async for chunk in response.content:\n if ttft == 0:\n ttft = time.perf_counter() - st\n output.ttft = ttft\n\n chunk = chunk.strip()\n if not chunk:\n continue\n\n chunk = chunk.decode(\"utf-8\").lstrip(\"data: \")\n if chunk == \"[DONE]\":\n latency = time.perf_counter() - st\n else:\n body = json.loads(chunk)\n if \"content\" in body[\"choices\"][0][\"delta\"]:\n generated_text += body[\"choices\"][0][\"delta\"][\n \"content\"]\n\n output.generated_text = generated_text\n output.success = True\n output.latency = latency\n else:\n output.success = False\n except (aiohttp.ClientOSError, 
aiohttp.ServerDisconnectedError):\n output.success = False\n\n if pbar:\n pbar.update(1)\n return output\n\n\nASYNC_REQUEST_FUNCS = {\n \"tgi\": async_request_tgi,\n \"vllm\": async_request_vllm,\n \"deepspeed-mii\": async_request_deepspeed_mii,\n \"openai\": async_request_openai_completions,\n \"openai-chat\": async_request_openai_chat_completions,\n \"tensorrt-llm\": async_request_trt_llm,\n}\n", "path": "benchmarks/backend_request_func.py"}]} | 3,716 | 695 |
gh_patches_debug_24537 | rasdani/github-patches | git_diff | pytorch__text-279 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
dataset.sort_key not retained after dataset.split()
Hi, I was trying out the new `split()` functionality to split a test set into test and validation sets. When the `BucketIterator` is being set up from a newly split dataset, the sorting fails because `dataset.sort_key` is None. Looking at the `split()` function, I see that a new instance of type `Dataset` is created in which `sort_key` is None by default:
```
def split():
    ...
    if not stratified:
        train_data, test_data, val_data = rationed_split(self.examples, train_ratio,
                                                         test_ratio, val_ratio, rnd)
        return tuple(Dataset(d, self.fields)
                     for d in (train_data, val_data, test_data) if d)
```
I guess one way would be to explicitly copy the `sort_key` into the new instance. A more generic way could be to make a copy of the current instance and replace only the `examples` attribute, but I'm not sure if that is really needed.
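A minimal sketch of the first idea, copying the parent's `sort_key` onto each returned subset, could look like this (the helper name `_propagate_sort_key` is illustrative, not part of torchtext):
```
def _propagate_sort_key(parent, splits):
    # Copy the parent dataset's sort_key onto every newly created subset so
    # that BucketIterator can still sort batches after a split().
    if parent.sort_key is not None:
        for subset in splits:
            subset.sort_key = parent.sort_key
    return splits
```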
Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchtext/data/dataset.py`
Content:
```
1 import io
2 import os
3 import zipfile
4 import tarfile
5 from functools import partial
6
7 import torch.utils.data
8
9 from .utils import RandomShuffler
10 from .example import Example
11 from ..utils import download_from_url, unicode_csv_reader
12
13
14 class Dataset(torch.utils.data.Dataset):
15 """Defines a dataset composed of Examples along with its Fields.
16
17 Attributes:
18 sort_key (callable): A key to use for sorting dataset examples for batching
19 together examples with similar lengths to minimize padding.
20 examples (list(Example)): The examples in this dataset.
21 fields (dict[str, Field]): Contains the name of each column or field, together
22 with the corresponding Field object. Two fields with the same Field object
23 will have a shared vocabulary.
24 """
25 sort_key = None
26
27 def __init__(self, examples, fields, filter_pred=None):
28 """Create a dataset from a list of Examples and Fields.
29
30 Arguments:
31 examples: List of Examples.
32 fields (List(tuple(str, Field))): The Fields to use in this tuple. The
33 string is a field name, and the Field is the associated field.
34 filter_pred (callable or None): Use only examples for which
35 filter_pred(example) is True, or use all examples if None.
36 Default is None.
37 """
38 if filter_pred is not None:
39 make_list = isinstance(examples, list)
40 examples = filter(filter_pred, examples)
41 if make_list:
42 examples = list(examples)
43 self.examples = examples
44 self.fields = dict(fields)
45 # Unpack field tuples
46 for n, f in list(self.fields.items()):
47 if isinstance(n, tuple):
48 self.fields.update(zip(n, f))
49 del self.fields[n]
50
51 @classmethod
52 def splits(cls, path=None, root='.data', train=None, validation=None,
53 test=None, **kwargs):
54 """Create Dataset objects for multiple splits of a dataset.
55
56 Arguments:
57 path (str): Common prefix of the splits' file paths, or None to use
58 the result of cls.download(root).
59 root (str): Root dataset storage directory. Default is '.data'.
60 train (str): Suffix to add to path for the train set, or None for no
61 train set. Default is None.
62 validation (str): Suffix to add to path for the validation set, or None
63 for no validation set. Default is None.
64 test (str): Suffix to add to path for the test set, or None for no test
65 set. Default is None.
66 Remaining keyword arguments: Passed to the constructor of the
67 Dataset (sub)class being used.
68
69 Returns:
70 Tuple[Dataset]: Datasets for train, validation, and
71 test splits in that order, if provided.
72 """
73 if path is None:
74 path = cls.download(root)
75 train_data = None if train is None else cls(
76 os.path.join(path, train), **kwargs)
77 val_data = None if validation is None else cls(
78 os.path.join(path, validation), **kwargs)
79 test_data = None if test is None else cls(
80 os.path.join(path, test), **kwargs)
81 return tuple(d for d in (train_data, val_data, test_data)
82 if d is not None)
83
84 def split(self, split_ratio=0.7, stratified=False, strata_field='label',
85 random_state=None):
86 """Create train-test(-valid?) splits from the instance's examples.
87
88 Arguments:
89 split_ratio (float or List of floats): a number [0, 1] denoting the amount
90 of data to be used for the training split (rest is used for validation),
91 or a list of numbers denoting the relative sizes of train, test and valid
92 splits respectively. If the relative size for valid is missing, only the
93 train-test split is returned. Default is 0.7 (for th train set).
94 stratified (bool): whether the sampling should be stratified.
95 Default is False.
96 strata_field (str): name of the examples Field stratified over.
97 Default is 'label' for the conventional label field.
98 random_state (int): the random seed used for shuffling.
99
100 Returns:
101 Tuple[Dataset]: Datasets for train, validation, and
102 test splits in that order, if the splits are provided.
103 """
104 train_ratio, test_ratio, val_ratio = check_split_ratio(split_ratio)
105
106 # For the permutations
107 rnd = RandomShuffler(random_state)
108 if not stratified:
109 train_data, test_data, val_data = rationed_split(self.examples, train_ratio,
110 test_ratio, val_ratio, rnd)
111 return tuple(Dataset(d, self.fields)
112 for d in (train_data, val_data, test_data) if d)
113 else:
114 if strata_field not in self.fields:
115 raise ValueError("Invalid field name for strata_field {}"
116 .format(strata_field))
117 strata = stratify(self.examples, strata_field)
118 train_data, test_data, val_data = [], [], []
119 for group in strata:
120 # Stratify each group and add together the indices.
121 group_train, group_test, group_val = rationed_split(group, train_ratio,
122 test_ratio, val_ratio,
123 rnd)
124 train_data += group_train
125 test_data += group_test
126 val_data += group_val
127
128 return tuple(Dataset(d, self.fields)
129 for d in (train_data, val_data, test_data) if d)
130
131 def __getitem__(self, i):
132 return self.examples[i]
133
134 def __len__(self):
135 try:
136 return len(self.examples)
137 except TypeError:
138 return 2**32
139
140 def __iter__(self):
141 for x in self.examples:
142 yield x
143
144 def __getattr__(self, attr):
145 if attr in self.fields:
146 for x in self.examples:
147 yield getattr(x, attr)
148
149 @classmethod
150 def download(cls, root, check=None):
151 """Download and unzip an online archive (.zip, .gz, or .tgz).
152
153 Arguments:
154 root (str): Folder to download data to.
155 check (str or None): Folder whose existence indicates
156 that the dataset has already been downloaded, or
157 None to check the existence of root/{cls.name}.
158
159 Returns:
160 str: Path to extracted dataset.
161 """
162 path = os.path.join(root, cls.name)
163 check = path if check is None else check
164 if not os.path.isdir(check):
165 for url in cls.urls:
166 if isinstance(url, tuple):
167 url, filename = url
168 else:
169 filename = os.path.basename(url)
170 zpath = os.path.join(path, filename)
171 if not os.path.isfile(zpath):
172 if not os.path.exists(os.path.dirname(zpath)):
173 os.makedirs(os.path.dirname(zpath))
174 print('downloading {}'.format(filename))
175 download_from_url(url, zpath)
176 ext = os.path.splitext(filename)[-1]
177 if ext == '.zip':
178 with zipfile.ZipFile(zpath, 'r') as zfile:
179 print('extracting')
180 zfile.extractall(path)
181 elif ext in ['.gz', '.tgz']:
182 with tarfile.open(zpath, 'r:gz') as tar:
183 dirs = [member for member in tar.getmembers()]
184 tar.extractall(path=path, members=dirs)
185 return os.path.join(path, cls.dirname)
186
187
188 class TabularDataset(Dataset):
189 """Defines a Dataset of columns stored in CSV, TSV, or JSON format."""
190
191 def __init__(self, path, format, fields, skip_header=False, **kwargs):
192 """Create a TabularDataset given a path, file format, and field list.
193
194 Arguments:
195 path (str): Path to the data file.
196 format (str): The format of the data file. One of "CSV", "TSV", or
197 "JSON" (case-insensitive).
198 fields (list(tuple(str, Field)) or dict[str: tuple(str, Field)]:
199 If using a list, the format must be CSV or TSV, and the values of the list
200 should be tuples of (name, field).
201 The fields should be in the same order as the columns in the CSV or TSV
202 file, while tuples of (name, None) represent columns that will be ignored.
203
204 If using a dict, the keys should be a subset of the JSON keys or CSV/TSV
205 columns, and the values should be tuples of (name, field).
206 Keys not present in the input dictionary are ignored.
207 This allows the user to rename columns from their JSON/CSV/TSV key names
208 and also enables selecting a subset of columns to load.
209 skip_header (bool): Whether to skip the first line of the input file.
210 """
211 make_example = {
212 'json': Example.fromJSON, 'dict': Example.fromdict,
213 'tsv': Example.fromCSV, 'csv': Example.fromCSV}[format.lower()]
214
215 with io.open(os.path.expanduser(path), encoding="utf8") as f:
216 if format == 'csv':
217 reader = unicode_csv_reader(f)
218 elif format == 'tsv':
219 reader = unicode_csv_reader(f, delimiter='\t')
220 else:
221 reader = f
222
223 if format in ['csv', 'tsv'] and isinstance(fields, dict):
224 if skip_header:
225 raise ValueError('When using a dict to specify fields with a {} file,'
226 'skip_header must be False and'
227 'the file must have a header.'.format(format))
228 header = next(reader)
229 field_to_index = {f: header.index(f) for f in fields.keys()}
230 make_example = partial(make_example, field_to_index=field_to_index)
231
232 if skip_header:
233 next(reader)
234
235 examples = [make_example(line, fields) for line in reader]
236
237 if isinstance(fields, dict):
238 fields, field_dict = [], fields
239 for field in field_dict.values():
240 if isinstance(field, list):
241 fields.extend(field)
242 else:
243 fields.append(field)
244
245 super(TabularDataset, self).__init__(examples, fields, **kwargs)
246
247
248 def check_split_ratio(split_ratio):
249 """Check that the split ratio argument is not malformed"""
250 valid_ratio = 0.
251 if isinstance(split_ratio, float):
252 # Only the train set relative ratio is provided
253 # Assert in bounds, validation size is zero
254 assert split_ratio > 0. and split_ratio < 1., (
255 "Split ratio {} not between 0 and 1".format(split_ratio))
256
257 test_ratio = 1. - split_ratio
258 return (split_ratio, test_ratio, valid_ratio)
259 elif isinstance(split_ratio, list):
260 # A list of relative ratios is provided
261 length = len(split_ratio)
262 assert length == 2 or length == 3, (
263 "Length of split ratio list should be 2 or 3, got {}".format(split_ratio))
264
265 # Normalize if necessary
266 ratio_sum = sum(split_ratio)
267 if not ratio_sum == 1.:
268 split_ratio = [float(ratio) / ratio_sum for ratio in split_ratio]
269
270 if length == 2:
271 return tuple(split_ratio + [valid_ratio])
272 return tuple(split_ratio)
273 else:
274 raise ValueError('Split ratio must be float or a list, got {}'
275 .format(type(split_ratio)))
276
277
278 def stratify(examples, strata_field):
279 # The field has to be hashable otherwise this doesn't work
280 # There's two iterations over the whole dataset here, which can be
281 # reduced to just one if a dedicated method for stratified splitting is used
282 unique_strata = set(getattr(example, strata_field) for example in examples)
283 strata_maps = {s: [] for s in unique_strata}
284 for example in examples:
285 strata_maps[getattr(example, strata_field)].append(example)
286 return list(strata_maps.values())
287
288
289 def rationed_split(examples, train_ratio, test_ratio, val_ratio, rnd):
290 # Create a random permutation of examples, then split them
291 # by ratio x length slices for each of the train/test/dev? splits
292 N = len(examples)
293 randperm = rnd(range(N))
294 train_len = int(round(train_ratio * N))
295
296 # Due to possible rounding problems
297 if not val_ratio:
298 test_len = N - train_len
299 else:
300 test_len = int(round(test_ratio * N))
301
302 indices = (randperm[:train_len], # Train
303 randperm[train_len:train_len + test_len], # Test
304 randperm[train_len + test_len:]) # Validation
305
306 # There's a possibly empty list for the validation set
307 data = tuple([examples[i] for i in index] for index in indices)
308
309 return data
310
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchtext/data/dataset.py b/torchtext/data/dataset.py
--- a/torchtext/data/dataset.py
+++ b/torchtext/data/dataset.py
@@ -108,8 +108,6 @@
if not stratified:
train_data, test_data, val_data = rationed_split(self.examples, train_ratio,
test_ratio, val_ratio, rnd)
- return tuple(Dataset(d, self.fields)
- for d in (train_data, val_data, test_data) if d)
else:
if strata_field not in self.fields:
raise ValueError("Invalid field name for strata_field {}"
@@ -125,8 +123,14 @@
test_data += group_test
val_data += group_val
- return tuple(Dataset(d, self.fields)
- for d in (train_data, val_data, test_data) if d)
+ splits = tuple(Dataset(d, self.fields)
+ for d in (train_data, val_data, test_data) if d)
+
+ # In case the parent sort key isn't none
+ if self.sort_key:
+ for subset in splits:
+ subset.sort_key = self.sort_key
+ return splits
def __getitem__(self, i):
return self.examples[i]
| {"golden_diff": "diff --git a/torchtext/data/dataset.py b/torchtext/data/dataset.py\n--- a/torchtext/data/dataset.py\n+++ b/torchtext/data/dataset.py\n@@ -108,8 +108,6 @@\n if not stratified:\n train_data, test_data, val_data = rationed_split(self.examples, train_ratio,\n test_ratio, val_ratio, rnd)\n- return tuple(Dataset(d, self.fields)\n- for d in (train_data, val_data, test_data) if d)\n else:\n if strata_field not in self.fields:\n raise ValueError(\"Invalid field name for strata_field {}\"\n@@ -125,8 +123,14 @@\n test_data += group_test\n val_data += group_val\n \n- return tuple(Dataset(d, self.fields)\n- for d in (train_data, val_data, test_data) if d)\n+ splits = tuple(Dataset(d, self.fields)\n+ for d in (train_data, val_data, test_data) if d)\n+\n+ # In case the parent sort key isn't none\n+ if self.sort_key:\n+ for subset in splits:\n+ subset.sort_key = self.sort_key\n+ return splits\n \n def __getitem__(self, i):\n return self.examples[i]\n", "issue": "dataset.sort_key not retained after dataset.split()\nHi I was trying out the new `split()` functionality to split a test set into test and validation set. When the `BucketIterator` is being set up from a newly split dataset the sorting fails because `dataset.sort_key` is None. Looking at the `split()` function I see that a new instance of type `Dataset` is created in which `sort_key` is None by default:\r\n```\r\ndef split():\r\n...\r\nif not stratified:\r\n train_data, test_data, val_data = rationed_split(self.examples, train_ratio,\r\n test_ratio, val_ratio, rnd)\r\n return tuple(Dataset(d, self.fields)\r\n for d in (train_data, val_data, test_data) if d)\r\n```\r\n\r\nI guess one way would be to explicitly copy the `sort_key` into the new instance. A more generic way could be to make a copy of the current instance and replace only the `examples` attribute but I'm not sure if that is really needed.\r\n\r\nThanks!\n", "before_files": [{"content": "import io\nimport os\nimport zipfile\nimport tarfile\nfrom functools import partial\n\nimport torch.utils.data\n\nfrom .utils import RandomShuffler\nfrom .example import Example\nfrom ..utils import download_from_url, unicode_csv_reader\n\n\nclass Dataset(torch.utils.data.Dataset):\n \"\"\"Defines a dataset composed of Examples along with its Fields.\n\n Attributes:\n sort_key (callable): A key to use for sorting dataset examples for batching\n together examples with similar lengths to minimize padding.\n examples (list(Example)): The examples in this dataset.\n fields (dict[str, Field]): Contains the name of each column or field, together\n with the corresponding Field object. Two fields with the same Field object\n will have a shared vocabulary.\n \"\"\"\n sort_key = None\n\n def __init__(self, examples, fields, filter_pred=None):\n \"\"\"Create a dataset from a list of Examples and Fields.\n\n Arguments:\n examples: List of Examples.\n fields (List(tuple(str, Field))): The Fields to use in this tuple. 
The\n string is a field name, and the Field is the associated field.\n filter_pred (callable or None): Use only examples for which\n filter_pred(example) is True, or use all examples if None.\n Default is None.\n \"\"\"\n if filter_pred is not None:\n make_list = isinstance(examples, list)\n examples = filter(filter_pred, examples)\n if make_list:\n examples = list(examples)\n self.examples = examples\n self.fields = dict(fields)\n # Unpack field tuples\n for n, f in list(self.fields.items()):\n if isinstance(n, tuple):\n self.fields.update(zip(n, f))\n del self.fields[n]\n\n @classmethod\n def splits(cls, path=None, root='.data', train=None, validation=None,\n test=None, **kwargs):\n \"\"\"Create Dataset objects for multiple splits of a dataset.\n\n Arguments:\n path (str): Common prefix of the splits' file paths, or None to use\n the result of cls.download(root).\n root (str): Root dataset storage directory. Default is '.data'.\n train (str): Suffix to add to path for the train set, or None for no\n train set. Default is None.\n validation (str): Suffix to add to path for the validation set, or None\n for no validation set. Default is None.\n test (str): Suffix to add to path for the test set, or None for no test\n set. Default is None.\n Remaining keyword arguments: Passed to the constructor of the\n Dataset (sub)class being used.\n\n Returns:\n Tuple[Dataset]: Datasets for train, validation, and\n test splits in that order, if provided.\n \"\"\"\n if path is None:\n path = cls.download(root)\n train_data = None if train is None else cls(\n os.path.join(path, train), **kwargs)\n val_data = None if validation is None else cls(\n os.path.join(path, validation), **kwargs)\n test_data = None if test is None else cls(\n os.path.join(path, test), **kwargs)\n return tuple(d for d in (train_data, val_data, test_data)\n if d is not None)\n\n def split(self, split_ratio=0.7, stratified=False, strata_field='label',\n random_state=None):\n \"\"\"Create train-test(-valid?) splits from the instance's examples.\n\n Arguments:\n split_ratio (float or List of floats): a number [0, 1] denoting the amount\n of data to be used for the training split (rest is used for validation),\n or a list of numbers denoting the relative sizes of train, test and valid\n splits respectively. If the relative size for valid is missing, only the\n train-test split is returned. 
Default is 0.7 (for th train set).\n stratified (bool): whether the sampling should be stratified.\n Default is False.\n strata_field (str): name of the examples Field stratified over.\n Default is 'label' for the conventional label field.\n random_state (int): the random seed used for shuffling.\n\n Returns:\n Tuple[Dataset]: Datasets for train, validation, and\n test splits in that order, if the splits are provided.\n \"\"\"\n train_ratio, test_ratio, val_ratio = check_split_ratio(split_ratio)\n\n # For the permutations\n rnd = RandomShuffler(random_state)\n if not stratified:\n train_data, test_data, val_data = rationed_split(self.examples, train_ratio,\n test_ratio, val_ratio, rnd)\n return tuple(Dataset(d, self.fields)\n for d in (train_data, val_data, test_data) if d)\n else:\n if strata_field not in self.fields:\n raise ValueError(\"Invalid field name for strata_field {}\"\n .format(strata_field))\n strata = stratify(self.examples, strata_field)\n train_data, test_data, val_data = [], [], []\n for group in strata:\n # Stratify each group and add together the indices.\n group_train, group_test, group_val = rationed_split(group, train_ratio,\n test_ratio, val_ratio,\n rnd)\n train_data += group_train\n test_data += group_test\n val_data += group_val\n\n return tuple(Dataset(d, self.fields)\n for d in (train_data, val_data, test_data) if d)\n\n def __getitem__(self, i):\n return self.examples[i]\n\n def __len__(self):\n try:\n return len(self.examples)\n except TypeError:\n return 2**32\n\n def __iter__(self):\n for x in self.examples:\n yield x\n\n def __getattr__(self, attr):\n if attr in self.fields:\n for x in self.examples:\n yield getattr(x, attr)\n\n @classmethod\n def download(cls, root, check=None):\n \"\"\"Download and unzip an online archive (.zip, .gz, or .tgz).\n\n Arguments:\n root (str): Folder to download data to.\n check (str or None): Folder whose existence indicates\n that the dataset has already been downloaded, or\n None to check the existence of root/{cls.name}.\n\n Returns:\n str: Path to extracted dataset.\n \"\"\"\n path = os.path.join(root, cls.name)\n check = path if check is None else check\n if not os.path.isdir(check):\n for url in cls.urls:\n if isinstance(url, tuple):\n url, filename = url\n else:\n filename = os.path.basename(url)\n zpath = os.path.join(path, filename)\n if not os.path.isfile(zpath):\n if not os.path.exists(os.path.dirname(zpath)):\n os.makedirs(os.path.dirname(zpath))\n print('downloading {}'.format(filename))\n download_from_url(url, zpath)\n ext = os.path.splitext(filename)[-1]\n if ext == '.zip':\n with zipfile.ZipFile(zpath, 'r') as zfile:\n print('extracting')\n zfile.extractall(path)\n elif ext in ['.gz', '.tgz']:\n with tarfile.open(zpath, 'r:gz') as tar:\n dirs = [member for member in tar.getmembers()]\n tar.extractall(path=path, members=dirs)\n return os.path.join(path, cls.dirname)\n\n\nclass TabularDataset(Dataset):\n \"\"\"Defines a Dataset of columns stored in CSV, TSV, or JSON format.\"\"\"\n\n def __init__(self, path, format, fields, skip_header=False, **kwargs):\n \"\"\"Create a TabularDataset given a path, file format, and field list.\n\n Arguments:\n path (str): Path to the data file.\n format (str): The format of the data file. 
One of \"CSV\", \"TSV\", or\n \"JSON\" (case-insensitive).\n fields (list(tuple(str, Field)) or dict[str: tuple(str, Field)]:\n If using a list, the format must be CSV or TSV, and the values of the list\n should be tuples of (name, field).\n The fields should be in the same order as the columns in the CSV or TSV\n file, while tuples of (name, None) represent columns that will be ignored.\n\n If using a dict, the keys should be a subset of the JSON keys or CSV/TSV\n columns, and the values should be tuples of (name, field).\n Keys not present in the input dictionary are ignored.\n This allows the user to rename columns from their JSON/CSV/TSV key names\n and also enables selecting a subset of columns to load.\n skip_header (bool): Whether to skip the first line of the input file.\n \"\"\"\n make_example = {\n 'json': Example.fromJSON, 'dict': Example.fromdict,\n 'tsv': Example.fromCSV, 'csv': Example.fromCSV}[format.lower()]\n\n with io.open(os.path.expanduser(path), encoding=\"utf8\") as f:\n if format == 'csv':\n reader = unicode_csv_reader(f)\n elif format == 'tsv':\n reader = unicode_csv_reader(f, delimiter='\\t')\n else:\n reader = f\n\n if format in ['csv', 'tsv'] and isinstance(fields, dict):\n if skip_header:\n raise ValueError('When using a dict to specify fields with a {} file,'\n 'skip_header must be False and'\n 'the file must have a header.'.format(format))\n header = next(reader)\n field_to_index = {f: header.index(f) for f in fields.keys()}\n make_example = partial(make_example, field_to_index=field_to_index)\n\n if skip_header:\n next(reader)\n\n examples = [make_example(line, fields) for line in reader]\n\n if isinstance(fields, dict):\n fields, field_dict = [], fields\n for field in field_dict.values():\n if isinstance(field, list):\n fields.extend(field)\n else:\n fields.append(field)\n\n super(TabularDataset, self).__init__(examples, fields, **kwargs)\n\n\ndef check_split_ratio(split_ratio):\n \"\"\"Check that the split ratio argument is not malformed\"\"\"\n valid_ratio = 0.\n if isinstance(split_ratio, float):\n # Only the train set relative ratio is provided\n # Assert in bounds, validation size is zero\n assert split_ratio > 0. and split_ratio < 1., (\n \"Split ratio {} not between 0 and 1\".format(split_ratio))\n\n test_ratio = 1. 
- split_ratio\n return (split_ratio, test_ratio, valid_ratio)\n elif isinstance(split_ratio, list):\n # A list of relative ratios is provided\n length = len(split_ratio)\n assert length == 2 or length == 3, (\n \"Length of split ratio list should be 2 or 3, got {}\".format(split_ratio))\n\n # Normalize if necessary\n ratio_sum = sum(split_ratio)\n if not ratio_sum == 1.:\n split_ratio = [float(ratio) / ratio_sum for ratio in split_ratio]\n\n if length == 2:\n return tuple(split_ratio + [valid_ratio])\n return tuple(split_ratio)\n else:\n raise ValueError('Split ratio must be float or a list, got {}'\n .format(type(split_ratio)))\n\n\ndef stratify(examples, strata_field):\n # The field has to be hashable otherwise this doesn't work\n # There's two iterations over the whole dataset here, which can be\n # reduced to just one if a dedicated method for stratified splitting is used\n unique_strata = set(getattr(example, strata_field) for example in examples)\n strata_maps = {s: [] for s in unique_strata}\n for example in examples:\n strata_maps[getattr(example, strata_field)].append(example)\n return list(strata_maps.values())\n\n\ndef rationed_split(examples, train_ratio, test_ratio, val_ratio, rnd):\n # Create a random permutation of examples, then split them\n # by ratio x length slices for each of the train/test/dev? splits\n N = len(examples)\n randperm = rnd(range(N))\n train_len = int(round(train_ratio * N))\n\n # Due to possible rounding problems\n if not val_ratio:\n test_len = N - train_len\n else:\n test_len = int(round(test_ratio * N))\n\n indices = (randperm[:train_len], # Train\n randperm[train_len:train_len + test_len], # Test\n randperm[train_len + test_len:]) # Validation\n\n # There's a possibly empty list for the validation set\n data = tuple([examples[i] for i in index] for index in indices)\n\n return data\n", "path": "torchtext/data/dataset.py"}], "after_files": [{"content": "import io\nimport os\nimport zipfile\nimport tarfile\nfrom functools import partial\n\nimport torch.utils.data\n\nfrom .utils import RandomShuffler\nfrom .example import Example\nfrom ..utils import download_from_url, unicode_csv_reader\n\n\nclass Dataset(torch.utils.data.Dataset):\n \"\"\"Defines a dataset composed of Examples along with its Fields.\n\n Attributes:\n sort_key (callable): A key to use for sorting dataset examples for batching\n together examples with similar lengths to minimize padding.\n examples (list(Example)): The examples in this dataset.\n fields (dict[str, Field]): Contains the name of each column or field, together\n with the corresponding Field object. Two fields with the same Field object\n will have a shared vocabulary.\n \"\"\"\n sort_key = None\n\n def __init__(self, examples, fields, filter_pred=None):\n \"\"\"Create a dataset from a list of Examples and Fields.\n\n Arguments:\n examples: List of Examples.\n fields (List(tuple(str, Field))): The Fields to use in this tuple. 
The\n string is a field name, and the Field is the associated field.\n filter_pred (callable or None): Use only examples for which\n filter_pred(example) is True, or use all examples if None.\n Default is None.\n \"\"\"\n if filter_pred is not None:\n make_list = isinstance(examples, list)\n examples = filter(filter_pred, examples)\n if make_list:\n examples = list(examples)\n self.examples = examples\n self.fields = dict(fields)\n # Unpack field tuples\n for n, f in list(self.fields.items()):\n if isinstance(n, tuple):\n self.fields.update(zip(n, f))\n del self.fields[n]\n\n @classmethod\n def splits(cls, path=None, root='.data', train=None, validation=None,\n test=None, **kwargs):\n \"\"\"Create Dataset objects for multiple splits of a dataset.\n\n Arguments:\n path (str): Common prefix of the splits' file paths, or None to use\n the result of cls.download(root).\n root (str): Root dataset storage directory. Default is '.data'.\n train (str): Suffix to add to path for the train set, or None for no\n train set. Default is None.\n validation (str): Suffix to add to path for the validation set, or None\n for no validation set. Default is None.\n test (str): Suffix to add to path for the test set, or None for no test\n set. Default is None.\n Remaining keyword arguments: Passed to the constructor of the\n Dataset (sub)class being used.\n\n Returns:\n Tuple[Dataset]: Datasets for train, validation, and\n test splits in that order, if provided.\n \"\"\"\n if path is None:\n path = cls.download(root)\n train_data = None if train is None else cls(\n os.path.join(path, train), **kwargs)\n val_data = None if validation is None else cls(\n os.path.join(path, validation), **kwargs)\n test_data = None if test is None else cls(\n os.path.join(path, test), **kwargs)\n return tuple(d for d in (train_data, val_data, test_data)\n if d is not None)\n\n def split(self, split_ratio=0.7, stratified=False, strata_field='label',\n random_state=None):\n \"\"\"Create train-test(-valid?) splits from the instance's examples.\n\n Arguments:\n split_ratio (float or List of floats): a number [0, 1] denoting the amount\n of data to be used for the training split (rest is used for validation),\n or a list of numbers denoting the relative sizes of train, test and valid\n splits respectively. If the relative size for valid is missing, only the\n train-test split is returned. 
Default is 0.7 (for th train set).\n stratified (bool): whether the sampling should be stratified.\n Default is False.\n strata_field (str): name of the examples Field stratified over.\n Default is 'label' for the conventional label field.\n random_state (int): the random seed used for shuffling.\n\n Returns:\n Tuple[Dataset]: Datasets for train, validation, and\n test splits in that order, if the splits are provided.\n \"\"\"\n train_ratio, test_ratio, val_ratio = check_split_ratio(split_ratio)\n\n # For the permutations\n rnd = RandomShuffler(random_state)\n if not stratified:\n train_data, test_data, val_data = rationed_split(self.examples, train_ratio,\n test_ratio, val_ratio, rnd)\n else:\n if strata_field not in self.fields:\n raise ValueError(\"Invalid field name for strata_field {}\"\n .format(strata_field))\n strata = stratify(self.examples, strata_field)\n train_data, test_data, val_data = [], [], []\n for group in strata:\n # Stratify each group and add together the indices.\n group_train, group_test, group_val = rationed_split(group, train_ratio,\n test_ratio, val_ratio,\n rnd)\n train_data += group_train\n test_data += group_test\n val_data += group_val\n\n splits = tuple(Dataset(d, self.fields)\n for d in (train_data, val_data, test_data) if d)\n\n # In case the parent sort key isn't none\n if self.sort_key:\n for subset in splits:\n subset.sort_key = self.sort_key\n return splits\n\n def __getitem__(self, i):\n return self.examples[i]\n\n def __len__(self):\n try:\n return len(self.examples)\n except TypeError:\n return 2**32\n\n def __iter__(self):\n for x in self.examples:\n yield x\n\n def __getattr__(self, attr):\n if attr in self.fields:\n for x in self.examples:\n yield getattr(x, attr)\n\n @classmethod\n def download(cls, root, check=None):\n \"\"\"Download and unzip an online archive (.zip, .gz, or .tgz).\n\n Arguments:\n root (str): Folder to download data to.\n check (str or None): Folder whose existence indicates\n that the dataset has already been downloaded, or\n None to check the existence of root/{cls.name}.\n\n Returns:\n str: Path to extracted dataset.\n \"\"\"\n path = os.path.join(root, cls.name)\n check = path if check is None else check\n if not os.path.isdir(check):\n for url in cls.urls:\n if isinstance(url, tuple):\n url, filename = url\n else:\n filename = os.path.basename(url)\n zpath = os.path.join(path, filename)\n if not os.path.isfile(zpath):\n if not os.path.exists(os.path.dirname(zpath)):\n os.makedirs(os.path.dirname(zpath))\n print('downloading {}'.format(filename))\n download_from_url(url, zpath)\n ext = os.path.splitext(filename)[-1]\n if ext == '.zip':\n with zipfile.ZipFile(zpath, 'r') as zfile:\n print('extracting')\n zfile.extractall(path)\n elif ext in ['.gz', '.tgz']:\n with tarfile.open(zpath, 'r:gz') as tar:\n dirs = [member for member in tar.getmembers()]\n tar.extractall(path=path, members=dirs)\n return os.path.join(path, cls.dirname)\n\n\nclass TabularDataset(Dataset):\n \"\"\"Defines a Dataset of columns stored in CSV, TSV, or JSON format.\"\"\"\n\n def __init__(self, path, format, fields, skip_header=False, **kwargs):\n \"\"\"Create a TabularDataset given a path, file format, and field list.\n\n Arguments:\n path (str): Path to the data file.\n format (str): The format of the data file. 
One of \"CSV\", \"TSV\", or\n \"JSON\" (case-insensitive).\n fields (list(tuple(str, Field)) or dict[str: tuple(str, Field)]:\n If using a list, the format must be CSV or TSV, and the values of the list\n should be tuples of (name, field).\n The fields should be in the same order as the columns in the CSV or TSV\n file, while tuples of (name, None) represent columns that will be ignored.\n\n If using a dict, the keys should be a subset of the JSON keys or CSV/TSV\n columns, and the values should be tuples of (name, field).\n Keys not present in the input dictionary are ignored.\n This allows the user to rename columns from their JSON/CSV/TSV key names\n and also enables selecting a subset of columns to load.\n skip_header (bool): Whether to skip the first line of the input file.\n \"\"\"\n make_example = {\n 'json': Example.fromJSON, 'dict': Example.fromdict,\n 'tsv': Example.fromCSV, 'csv': Example.fromCSV}[format.lower()]\n\n with io.open(os.path.expanduser(path), encoding=\"utf8\") as f:\n if format == 'csv':\n reader = unicode_csv_reader(f)\n elif format == 'tsv':\n reader = unicode_csv_reader(f, delimiter='\\t')\n else:\n reader = f\n\n if format in ['csv', 'tsv'] and isinstance(fields, dict):\n if skip_header:\n raise ValueError('When using a dict to specify fields with a {} file,'\n 'skip_header must be False and'\n 'the file must have a header.'.format(format))\n header = next(reader)\n field_to_index = {f: header.index(f) for f in fields.keys()}\n make_example = partial(make_example, field_to_index=field_to_index)\n\n if skip_header:\n next(reader)\n\n examples = [make_example(line, fields) for line in reader]\n\n if isinstance(fields, dict):\n fields, field_dict = [], fields\n for field in field_dict.values():\n if isinstance(field, list):\n fields.extend(field)\n else:\n fields.append(field)\n\n super(TabularDataset, self).__init__(examples, fields, **kwargs)\n\n\ndef check_split_ratio(split_ratio):\n \"\"\"Check that the split ratio argument is not malformed\"\"\"\n valid_ratio = 0.\n if isinstance(split_ratio, float):\n # Only the train set relative ratio is provided\n # Assert in bounds, validation size is zero\n assert split_ratio > 0. and split_ratio < 1., (\n \"Split ratio {} not between 0 and 1\".format(split_ratio))\n\n test_ratio = 1. 
- split_ratio\n return (split_ratio, test_ratio, valid_ratio)\n elif isinstance(split_ratio, list):\n # A list of relative ratios is provided\n length = len(split_ratio)\n assert length == 2 or length == 3, (\n \"Length of split ratio list should be 2 or 3, got {}\".format(split_ratio))\n\n # Normalize if necessary\n ratio_sum = sum(split_ratio)\n if not ratio_sum == 1.:\n split_ratio = [float(ratio) / ratio_sum for ratio in split_ratio]\n\n if length == 2:\n return tuple(split_ratio + [valid_ratio])\n return tuple(split_ratio)\n else:\n raise ValueError('Split ratio must be float or a list, got {}'\n .format(type(split_ratio)))\n\n\ndef stratify(examples, strata_field):\n # The field has to be hashable otherwise this doesn't work\n # There's two iterations over the whole dataset here, which can be\n # reduced to just one if a dedicated method for stratified splitting is used\n unique_strata = set(getattr(example, strata_field) for example in examples)\n strata_maps = {s: [] for s in unique_strata}\n for example in examples:\n strata_maps[getattr(example, strata_field)].append(example)\n return list(strata_maps.values())\n\n\ndef rationed_split(examples, train_ratio, test_ratio, val_ratio, rnd):\n # Create a random permutation of examples, then split them\n # by ratio x length slices for each of the train/test/dev? splits\n N = len(examples)\n randperm = rnd(range(N))\n train_len = int(round(train_ratio * N))\n\n # Due to possible rounding problems\n if not val_ratio:\n test_len = N - train_len\n else:\n test_len = int(round(test_ratio * N))\n\n indices = (randperm[:train_len], # Train\n randperm[train_len:train_len + test_len], # Test\n randperm[train_len + test_len:]) # Validation\n\n # There's a possibly empty list for the validation set\n data = tuple([examples[i] for i in index] for index in indices)\n\n return data\n", "path": "torchtext/data/dataset.py"}]} | 4,076 | 289 |
gh_patches_debug_39804 | rasdani/github-patches | git_diff | conan-io__conan-center-index-3024 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[package] proj/7.1.1: Fails to build on iOS
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **proj/7.1.1**
* Operating System+version: **iOS 11.0**
* Compiler+version: **apple-clang 11.0**
* Conan version: **conan 1.29.2**
* Python version: **Python 3.8.5**
### Conan profile
```
[settings]
arch=armv8
arch_build=x86_64
build_type=Release
compiler=apple-clang
compiler.libcxx=libc++
compiler.version=11.0
os=iOS
os.version=11.0
os_build=Macos
[options]
*:build_executable=False
proj:with_curl=False
proj:with_tiff=False
[build_requires]
*: darwin-toolchain/1.0.8@theodelrieu/stable
[env]
```
### Steps to reproduce (Include if Applicable)
`conan install proj/7.1.0@ --profile ios11-arm64 -o '*:build_executable=False' -o 'proj:with_tiff=False' -o 'proj:with_curl=False' --build=missing`
### Logs (Include/Attach if Applicable)
<details><summary>Click to expand log</summary>
```
CMake Error at source_subfolder/src/bin_cct.cmake:14 (install):
install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable
target "cct".
Call Stack (most recent call first):
source_subfolder/src/CMakeLists.txt:45 (include)
CMake Error at source_subfolder/src/bin_cs2cs.cmake:13 (install):
install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable
target "cs2cs".
Call Stack (most recent call first):
source_subfolder/src/CMakeLists.txt:50 (include)
CMake Error at source_subfolder/src/bin_geod.cmake:15 (install):
install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable
target "geod".
Call Stack (most recent call first):
source_subfolder/src/CMakeLists.txt:55 (include)
CMake Error at source_subfolder/src/bin_proj.cmake:16 (install):
install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable
target "binproj".
Call Stack (most recent call first):
source_subfolder/src/CMakeLists.txt:63 (include)
CMake Error at source_subfolder/src/bin_projinfo.cmake:12 (install):
install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable
target "binprojinfo".
Call Stack (most recent call first):
source_subfolder/src/CMakeLists.txt:68 (include)
CMake Error at source_subfolder/src/bin_gie.cmake:14 (install):
install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable
target "gie".
Call Stack (most recent call first):
source_subfolder/src/CMakeLists.txt:73 (include)
```
</details>
I would suggest adding an option that disables all the executable targets from being generated and built, like `build_executables`. Alternatively, the recipe could define individual `build_` options for each executable, but I don't know how worthwhile that level of granularity is since these are not large applications. (I personally prefer the single `build_executables` option.)
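A rough sketch of how a single switch could gate every executable target in the recipe is shown below; it reuses the existing CMake `BUILD_*` definitions and is only an illustration, not the final recipe:
```
from conans import ConanFile, CMake

class ProjConan(ConanFile):
    # Illustrative excerpt: one option controls all executable targets.
    settings = "os", "arch", "compiler", "build_type"
    options = {"shared": [True, False], "build_executables": [True, False]}
    default_options = {"shared": False, "build_executables": True}

    def _configure_cmake(self):
        cmake = CMake(self)
        # Each BUILD_* definition maps to one of the PROJ command-line tools.
        for target in ("CCT", "CS2CS", "GEOD", "GIE", "PROJ", "PROJINFO"):
            cmake.definitions["BUILD_" + target] = self.options.build_executables
        cmake.configure()
        return cmake
```
Consumers could then build the library only (for example on iOS) by setting `proj:build_executables=False`.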
For reference, `glslang` provides this `build_executables` option to enable/disable its binaries, while `sqlite3`, `bzip2`, and `spirv-cross` provide the similarly named `build_executable` option for their (single) binary executable.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/proj/all/conanfile.py`
Content:
```
1 import os
2
3 from conans import ConanFile, CMake, tools, RunEnvironment
4
5 required_conan_version = ">=1.28.0"
6
7 class ProjConan(ConanFile):
8 name = "proj"
9 description = "Cartographic Projections and Coordinate Transformations Library."
10 license = "MIT"
11 topics = ("conan", "dsp", "proj", "proj4", "projections", "gis", "geospatial")
12 homepage = "https://proj.org"
13 url = "https://github.com/conan-io/conan-center-index"
14 exports_sources = ["CMakeLists.txt", "patches/**"]
15 generators = "cmake", "cmake_find_package"
16 settings = "os", "arch", "compiler", "build_type"
17 options = {
18 "shared": [True, False],
19 "fPIC": [True, False],
20 "threadsafe": [True, False],
21 "with_tiff": [True, False],
22 "with_curl": [True, False]
23 }
24 default_options = {
25 "shared": False,
26 "fPIC": True,
27 "threadsafe": True,
28 "with_tiff": True,
29 "with_curl": True
30 }
31
32 _cmake = None
33
34 @property
35 def _source_subfolder(self):
36 return "source_subfolder"
37
38 def config_options(self):
39 if self.settings.os == "Windows":
40 del self.options.fPIC
41 if tools.Version(self.version) < "7.0.0":
42 del self.options.with_tiff
43 del self.options.with_curl
44
45 def configure(self):
46 if self.options.shared:
47 del self.options.fPIC
48
49 def requirements(self):
50 self.requires("sqlite3/3.32.3")
51 if self.options.get_safe("with_tiff"):
52 self.requires("libtiff/4.1.0")
53 if self.options.get_safe("with_curl"):
54 self.requires("libcurl/7.72.0")
55
56 def build_requirements(self):
57 self.build_requires("sqlite3/3.32.3")
58
59 def source(self):
60 tools.get(**self.conan_data["sources"][self.version])
61 os.rename(self.name + "-" + self.version, self._source_subfolder)
62
63 def build(self):
64 self._patch_sources()
65 with tools.environment_append(RunEnvironment(self).vars):
66 cmake = self._configure_cmake()
67 cmake.build()
68
69 def _patch_sources(self):
70 for patch in self.conan_data.get("patches", {}).get(self.version, []):
71 tools.patch(**patch)
72 tools.replace_in_file(os.path.join(self._source_subfolder, "CMakeLists.txt"), "/W4", "")
73
74 def _configure_cmake(self):
75 if self._cmake:
76 return self._cmake
77 self._cmake = CMake(self)
78 self._cmake.definitions["USE_THREAD"] = self.options.threadsafe
79 self._cmake.definitions["BUILD_CCT"] = True
80 self._cmake.definitions["BUILD_CS2CS"] = True
81 self._cmake.definitions["BUILD_GEOD"] = True
82 self._cmake.definitions["BUILD_GIE"] = True
83 self._cmake.definitions["BUILD_PROJ"] = True
84 self._cmake.definitions["BUILD_PROJINFO"] = True
85 self._cmake.definitions["PROJ_DATA_SUBDIR"] = "res"
86 if tools.Version(self.version) < "7.0.0":
87 self._cmake.definitions["PROJ_TESTS"] = False
88 self._cmake.definitions["BUILD_LIBPROJ_SHARED"] = self.options.shared
89 self._cmake.definitions["ENABLE_LTO"] = False
90 self._cmake.definitions["JNI_SUPPORT"] = False
91 else:
92 self._cmake.definitions["ENABLE_TIFF"] = self.options.with_tiff
93 self._cmake.definitions["ENABLE_CURL"] = self.options.with_curl
94 self._cmake.definitions["BUILD_TESTING"] = False
95 self._cmake.definitions["ENABLE_IPO"] = False
96 self._cmake.definitions["BUILD_PROJSYNC"] = self.options.with_curl
97 self._cmake.configure()
98 return self._cmake
99
100 def package(self):
101 self.copy("COPYING", dst="licenses", src=self._source_subfolder)
102 cmake = self._configure_cmake()
103 cmake.install()
104 tools.rmdir(os.path.join(self.package_folder, "share"))
105 tools.rmdir(os.path.join(self.package_folder, "lib", "cmake"))
106
107 def package_info(self):
108 proj_version = tools.Version(self.version)
109 cmake_config_filename = "proj" if proj_version >= "7.0.0" else "proj4"
110 cmake_namespace = "PROJ" if proj_version >= "7.0.0" else "PROJ4"
111 self.cpp_info.filenames["cmake_find_package"] = cmake_config_filename
112 self.cpp_info.filenames["cmake_find_package_multi"] = cmake_config_filename
113 self.cpp_info.names["cmake_find_package"] = cmake_namespace
114 self.cpp_info.names["cmake_find_package_multi"] = cmake_namespace
115 self.cpp_info.components["projlib"].names["cmake_find_package"] = "proj"
116 self.cpp_info.components["projlib"].names["cmake_find_package_multi"] = "proj"
117 self.cpp_info.components["projlib"].libs = tools.collect_libs(self)
118 if self.settings.os == "Linux":
119 self.cpp_info.components["projlib"].system_libs.append("m")
120 if self.options.threadsafe:
121 self.cpp_info.components["projlib"].system_libs.append("pthread")
122 elif self.settings.os == "Windows":
123 if proj_version >= "7.0.0":
124 self.cpp_info.components["projlib"].system_libs.append("shell32")
125 if proj_version >= "7.1.0":
126 self.cpp_info.components["projlib"].system_libs.append("Ole32")
127 if not self.options.shared and tools.stdcpp_library(self):
128 self.cpp_info.components["projlib"].system_libs.append(tools.stdcpp_library(self))
129 self.cpp_info.components["projlib"].requires.append("sqlite3::sqlite3")
130 if self.options.get_safe("with_tiff"):
131 self.cpp_info.components["projlib"].requires.append("libtiff::libtiff")
132 if self.options.get_safe("with_curl"):
133 self.cpp_info.components["projlib"].requires.append("libcurl::libcurl")
134 if self.options.shared and self.settings.compiler == "Visual Studio":
135 self.cpp_info.components["projlib"].defines.append("PROJ_MSVC_DLL_IMPORT")
136
137 res_path = os.path.join(self.package_folder, "res")
138 self.output.info("Appending PROJ_LIB environment variable: {}".format(res_path))
139 self.env_info.PROJ_LIB.append(res_path)
140 bin_path = os.path.join(self.package_folder, "bin")
141 self.output.info("Appending PATH environment variable: {}".format(bin_path))
142 self.env_info.PATH.append(bin_path)
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/recipes/proj/all/conanfile.py b/recipes/proj/all/conanfile.py
--- a/recipes/proj/all/conanfile.py
+++ b/recipes/proj/all/conanfile.py
@@ -19,14 +19,16 @@
"fPIC": [True, False],
"threadsafe": [True, False],
"with_tiff": [True, False],
- "with_curl": [True, False]
+ "with_curl": [True, False],
+ "build_executables": [True, False]
}
default_options = {
"shared": False,
"fPIC": True,
"threadsafe": True,
"with_tiff": True,
- "with_curl": True
+ "with_curl": True,
+ "build_executables": True
}
_cmake = None
@@ -76,12 +78,12 @@
return self._cmake
self._cmake = CMake(self)
self._cmake.definitions["USE_THREAD"] = self.options.threadsafe
- self._cmake.definitions["BUILD_CCT"] = True
- self._cmake.definitions["BUILD_CS2CS"] = True
- self._cmake.definitions["BUILD_GEOD"] = True
- self._cmake.definitions["BUILD_GIE"] = True
- self._cmake.definitions["BUILD_PROJ"] = True
- self._cmake.definitions["BUILD_PROJINFO"] = True
+ self._cmake.definitions["BUILD_CCT"] = self.options.build_executables
+ self._cmake.definitions["BUILD_CS2CS"] = self.options.build_executables
+ self._cmake.definitions["BUILD_GEOD"] = self.options.build_executables
+ self._cmake.definitions["BUILD_GIE"] = self.options.build_executables
+ self._cmake.definitions["BUILD_PROJ"] = self.options.build_executables
+ self._cmake.definitions["BUILD_PROJINFO"] = self.options.build_executables
self._cmake.definitions["PROJ_DATA_SUBDIR"] = "res"
if tools.Version(self.version) < "7.0.0":
self._cmake.definitions["PROJ_TESTS"] = False
@@ -93,7 +95,7 @@
self._cmake.definitions["ENABLE_CURL"] = self.options.with_curl
self._cmake.definitions["BUILD_TESTING"] = False
self._cmake.definitions["ENABLE_IPO"] = False
- self._cmake.definitions["BUILD_PROJSYNC"] = self.options.with_curl
+ self._cmake.definitions["BUILD_PROJSYNC"] = self.options.build_executables and self.options.with_curl
self._cmake.configure()
return self._cmake
@@ -137,6 +139,7 @@
res_path = os.path.join(self.package_folder, "res")
self.output.info("Appending PROJ_LIB environment variable: {}".format(res_path))
self.env_info.PROJ_LIB.append(res_path)
- bin_path = os.path.join(self.package_folder, "bin")
- self.output.info("Appending PATH environment variable: {}".format(bin_path))
- self.env_info.PATH.append(bin_path)
+ if self.options.build_executables:
+ bin_path = os.path.join(self.package_folder, "bin")
+ self.output.info("Appending PATH environment variable: {}".format(bin_path))
+ self.env_info.PATH.append(bin_path)
| {"golden_diff": "diff --git a/recipes/proj/all/conanfile.py b/recipes/proj/all/conanfile.py\n--- a/recipes/proj/all/conanfile.py\n+++ b/recipes/proj/all/conanfile.py\n@@ -19,14 +19,16 @@\n \"fPIC\": [True, False],\n \"threadsafe\": [True, False],\n \"with_tiff\": [True, False],\n- \"with_curl\": [True, False]\n+ \"with_curl\": [True, False],\n+ \"build_executables\": [True, False]\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"threadsafe\": True,\n \"with_tiff\": True,\n- \"with_curl\": True\n+ \"with_curl\": True,\n+ \"build_executables\": True\n }\n \n _cmake = None\n@@ -76,12 +78,12 @@\n return self._cmake\n self._cmake = CMake(self)\n self._cmake.definitions[\"USE_THREAD\"] = self.options.threadsafe\n- self._cmake.definitions[\"BUILD_CCT\"] = True\n- self._cmake.definitions[\"BUILD_CS2CS\"] = True\n- self._cmake.definitions[\"BUILD_GEOD\"] = True\n- self._cmake.definitions[\"BUILD_GIE\"] = True\n- self._cmake.definitions[\"BUILD_PROJ\"] = True\n- self._cmake.definitions[\"BUILD_PROJINFO\"] = True\n+ self._cmake.definitions[\"BUILD_CCT\"] = self.options.build_executables\n+ self._cmake.definitions[\"BUILD_CS2CS\"] = self.options.build_executables\n+ self._cmake.definitions[\"BUILD_GEOD\"] = self.options.build_executables\n+ self._cmake.definitions[\"BUILD_GIE\"] = self.options.build_executables\n+ self._cmake.definitions[\"BUILD_PROJ\"] = self.options.build_executables\n+ self._cmake.definitions[\"BUILD_PROJINFO\"] = self.options.build_executables\n self._cmake.definitions[\"PROJ_DATA_SUBDIR\"] = \"res\"\n if tools.Version(self.version) < \"7.0.0\":\n self._cmake.definitions[\"PROJ_TESTS\"] = False\n@@ -93,7 +95,7 @@\n self._cmake.definitions[\"ENABLE_CURL\"] = self.options.with_curl\n self._cmake.definitions[\"BUILD_TESTING\"] = False\n self._cmake.definitions[\"ENABLE_IPO\"] = False\n- self._cmake.definitions[\"BUILD_PROJSYNC\"] = self.options.with_curl\n+ self._cmake.definitions[\"BUILD_PROJSYNC\"] = self.options.build_executables and self.options.with_curl\n self._cmake.configure()\n return self._cmake\n \n@@ -137,6 +139,7 @@\n res_path = os.path.join(self.package_folder, \"res\")\n self.output.info(\"Appending PROJ_LIB environment variable: {}\".format(res_path))\n self.env_info.PROJ_LIB.append(res_path)\n- bin_path = os.path.join(self.package_folder, \"bin\")\n- self.output.info(\"Appending PATH environment variable: {}\".format(bin_path))\n- self.env_info.PATH.append(bin_path)\n+ if self.options.build_executables:\n+ bin_path = os.path.join(self.package_folder, \"bin\")\n+ self.output.info(\"Appending PATH environment variable: {}\".format(bin_path))\n+ self.env_info.PATH.append(bin_path)\n", "issue": "[package] proj/7.1.1: Fails to build on iOS\n<!-- \r\n Please don't forget to update the issue title.\r\n Include all applicable information to help us reproduce your problem.\r\n-->\r\n\r\n### Package and Environment Details (include every applicable attribute)\r\n * Package Name/Version: **proj/7.1.1**\r\n * Operating System+version: **iOS 11.0**\r\n * Compiler+version: **apple-clang 11.0**\r\n * Conan version: **conan 1.29.2**\r\n * Python version: **Python 3.8.5**\r\n\r\n### Conan profile\r\n```\r\n[settings]\r\narch=armv8\r\narch_build=x86_64\r\nbuild_type=Release\r\ncompiler=apple-clang\r\ncompiler.libcxx=libc++\r\ncompiler.version=11.0\r\nos=iOS\r\nos.version=11.0\r\nos_build=Macos\r\n[options]\r\n*:build_executable=False\r\nproj:with_curl=False\r\nproj:with_tiff=False\r\n[build_requires]\r\n*: 
darwin-toolchain/1.0.8@theodelrieu/stable\r\n[env]\r\n```\r\n\r\n\r\n### Steps to reproduce (Include if Applicable)\r\n\r\n`conan install proj/7.1.0@ --profile ios11-arm64 -o '*:build_executable=False' -o 'proj:with_tiff=False' -o 'proj:with_curl=False' --build=missing`\r\n\r\n### Logs (Include/Attach if Applicable)\r\n<details><summary>Click to expand log</summary>\r\n\r\n```\r\nCMake Error at source_subfolder/src/bin_cct.cmake:14 (install):\r\n install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable\r\n target \"cct\".\r\nCall Stack (most recent call first):\r\n source_subfolder/src/CMakeLists.txt:45 (include)\r\n\r\n\r\nCMake Error at source_subfolder/src/bin_cs2cs.cmake:13 (install):\r\n install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable\r\n target \"cs2cs\".\r\nCall Stack (most recent call first):\r\n source_subfolder/src/CMakeLists.txt:50 (include)\r\n\r\n\r\nCMake Error at source_subfolder/src/bin_geod.cmake:15 (install):\r\n install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable\r\n target \"geod\".\r\nCall Stack (most recent call first):\r\n source_subfolder/src/CMakeLists.txt:55 (include)\r\n\r\n\r\nCMake Error at source_subfolder/src/bin_proj.cmake:16 (install):\r\n install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable\r\n target \"binproj\".\r\nCall Stack (most recent call first):\r\n source_subfolder/src/CMakeLists.txt:63 (include)\r\n\r\n\r\nCMake Error at source_subfolder/src/bin_projinfo.cmake:12 (install):\r\n install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable\r\n target \"binprojinfo\".\r\nCall Stack (most recent call first):\r\n source_subfolder/src/CMakeLists.txt:68 (include)\r\n\r\n\r\nCMake Error at source_subfolder/src/bin_gie.cmake:14 (install):\r\n install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable\r\n target \"gie\".\r\nCall Stack (most recent call first):\r\n source_subfolder/src/CMakeLists.txt:73 (include)\r\n```\r\n\r\n</details>\r\n\r\nI would suggest adding an option that disables all the executable targets from being generated and built, like `build_executables`. Alternatively, the recipe could define individual `build_` options for each executable, but I don't know how worthwhile that level of granularity is since these are not large applications. 
(I personally prefer the single `build_executables` option.)\r\n\r\nFor reference, `glslang` provides this `build_executables` option to enable/disable its binaries, while `sqlite3`, `bzip2`, and `spirv-cross` provide the similarly named `build_executable` option for their (single) binary executable.\n", "before_files": [{"content": "import os\n\nfrom conans import ConanFile, CMake, tools, RunEnvironment\n\nrequired_conan_version = \">=1.28.0\"\n\nclass ProjConan(ConanFile):\n name = \"proj\"\n description = \"Cartographic Projections and Coordinate Transformations Library.\"\n license = \"MIT\"\n topics = (\"conan\", \"dsp\", \"proj\", \"proj4\", \"projections\", \"gis\", \"geospatial\")\n homepage = \"https://proj.org\"\n url = \"https://github.com/conan-io/conan-center-index\"\n exports_sources = [\"CMakeLists.txt\", \"patches/**\"]\n generators = \"cmake\", \"cmake_find_package\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"threadsafe\": [True, False],\n \"with_tiff\": [True, False],\n \"with_curl\": [True, False]\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"threadsafe\": True,\n \"with_tiff\": True,\n \"with_curl\": True\n }\n\n _cmake = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n if tools.Version(self.version) < \"7.0.0\":\n del self.options.with_tiff\n del self.options.with_curl\n\n def configure(self):\n if self.options.shared:\n del self.options.fPIC\n\n def requirements(self):\n self.requires(\"sqlite3/3.32.3\")\n if self.options.get_safe(\"with_tiff\"):\n self.requires(\"libtiff/4.1.0\")\n if self.options.get_safe(\"with_curl\"):\n self.requires(\"libcurl/7.72.0\")\n\n def build_requirements(self):\n self.build_requires(\"sqlite3/3.32.3\")\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n os.rename(self.name + \"-\" + self.version, self._source_subfolder)\n\n def build(self):\n self._patch_sources()\n with tools.environment_append(RunEnvironment(self).vars):\n cmake = self._configure_cmake()\n cmake.build()\n\n def _patch_sources(self):\n for patch in self.conan_data.get(\"patches\", {}).get(self.version, []):\n tools.patch(**patch)\n tools.replace_in_file(os.path.join(self._source_subfolder, \"CMakeLists.txt\"), \"/W4\", \"\")\n\n def _configure_cmake(self):\n if self._cmake:\n return self._cmake\n self._cmake = CMake(self)\n self._cmake.definitions[\"USE_THREAD\"] = self.options.threadsafe\n self._cmake.definitions[\"BUILD_CCT\"] = True\n self._cmake.definitions[\"BUILD_CS2CS\"] = True\n self._cmake.definitions[\"BUILD_GEOD\"] = True\n self._cmake.definitions[\"BUILD_GIE\"] = True\n self._cmake.definitions[\"BUILD_PROJ\"] = True\n self._cmake.definitions[\"BUILD_PROJINFO\"] = True\n self._cmake.definitions[\"PROJ_DATA_SUBDIR\"] = \"res\"\n if tools.Version(self.version) < \"7.0.0\":\n self._cmake.definitions[\"PROJ_TESTS\"] = False\n self._cmake.definitions[\"BUILD_LIBPROJ_SHARED\"] = self.options.shared\n self._cmake.definitions[\"ENABLE_LTO\"] = False\n self._cmake.definitions[\"JNI_SUPPORT\"] = False\n else:\n self._cmake.definitions[\"ENABLE_TIFF\"] = self.options.with_tiff\n self._cmake.definitions[\"ENABLE_CURL\"] = self.options.with_curl\n self._cmake.definitions[\"BUILD_TESTING\"] = False\n self._cmake.definitions[\"ENABLE_IPO\"] = False\n self._cmake.definitions[\"BUILD_PROJSYNC\"] = 
self.options.with_curl\n self._cmake.configure()\n return self._cmake\n\n def package(self):\n self.copy(\"COPYING\", dst=\"licenses\", src=self._source_subfolder)\n cmake = self._configure_cmake()\n cmake.install()\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n\n def package_info(self):\n proj_version = tools.Version(self.version)\n cmake_config_filename = \"proj\" if proj_version >= \"7.0.0\" else \"proj4\"\n cmake_namespace = \"PROJ\" if proj_version >= \"7.0.0\" else \"PROJ4\"\n self.cpp_info.filenames[\"cmake_find_package\"] = cmake_config_filename\n self.cpp_info.filenames[\"cmake_find_package_multi\"] = cmake_config_filename\n self.cpp_info.names[\"cmake_find_package\"] = cmake_namespace\n self.cpp_info.names[\"cmake_find_package_multi\"] = cmake_namespace\n self.cpp_info.components[\"projlib\"].names[\"cmake_find_package\"] = \"proj\"\n self.cpp_info.components[\"projlib\"].names[\"cmake_find_package_multi\"] = \"proj\"\n self.cpp_info.components[\"projlib\"].libs = tools.collect_libs(self)\n if self.settings.os == \"Linux\":\n self.cpp_info.components[\"projlib\"].system_libs.append(\"m\")\n if self.options.threadsafe:\n self.cpp_info.components[\"projlib\"].system_libs.append(\"pthread\")\n elif self.settings.os == \"Windows\":\n if proj_version >= \"7.0.0\":\n self.cpp_info.components[\"projlib\"].system_libs.append(\"shell32\")\n if proj_version >= \"7.1.0\":\n self.cpp_info.components[\"projlib\"].system_libs.append(\"Ole32\")\n if not self.options.shared and tools.stdcpp_library(self):\n self.cpp_info.components[\"projlib\"].system_libs.append(tools.stdcpp_library(self))\n self.cpp_info.components[\"projlib\"].requires.append(\"sqlite3::sqlite3\")\n if self.options.get_safe(\"with_tiff\"):\n self.cpp_info.components[\"projlib\"].requires.append(\"libtiff::libtiff\")\n if self.options.get_safe(\"with_curl\"):\n self.cpp_info.components[\"projlib\"].requires.append(\"libcurl::libcurl\")\n if self.options.shared and self.settings.compiler == \"Visual Studio\":\n self.cpp_info.components[\"projlib\"].defines.append(\"PROJ_MSVC_DLL_IMPORT\")\n\n res_path = os.path.join(self.package_folder, \"res\")\n self.output.info(\"Appending PROJ_LIB environment variable: {}\".format(res_path))\n self.env_info.PROJ_LIB.append(res_path)\n bin_path = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH environment variable: {}\".format(bin_path))\n self.env_info.PATH.append(bin_path)\n", "path": "recipes/proj/all/conanfile.py"}], "after_files": [{"content": "import os\n\nfrom conans import ConanFile, CMake, tools, RunEnvironment\n\nrequired_conan_version = \">=1.28.0\"\n\nclass ProjConan(ConanFile):\n name = \"proj\"\n description = \"Cartographic Projections and Coordinate Transformations Library.\"\n license = \"MIT\"\n topics = (\"conan\", \"dsp\", \"proj\", \"proj4\", \"projections\", \"gis\", \"geospatial\")\n homepage = \"https://proj.org\"\n url = \"https://github.com/conan-io/conan-center-index\"\n exports_sources = [\"CMakeLists.txt\", \"patches/**\"]\n generators = \"cmake\", \"cmake_find_package\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"threadsafe\": [True, False],\n \"with_tiff\": [True, False],\n \"with_curl\": [True, False],\n \"build_executables\": [True, False]\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"threadsafe\": True,\n \"with_tiff\": True,\n 
\"with_curl\": True,\n \"build_executables\": True\n }\n\n _cmake = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n if tools.Version(self.version) < \"7.0.0\":\n del self.options.with_tiff\n del self.options.with_curl\n\n def configure(self):\n if self.options.shared:\n del self.options.fPIC\n\n def requirements(self):\n self.requires(\"sqlite3/3.32.3\")\n if self.options.get_safe(\"with_tiff\"):\n self.requires(\"libtiff/4.1.0\")\n if self.options.get_safe(\"with_curl\"):\n self.requires(\"libcurl/7.72.0\")\n\n def build_requirements(self):\n self.build_requires(\"sqlite3/3.32.3\")\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n os.rename(self.name + \"-\" + self.version, self._source_subfolder)\n\n def build(self):\n self._patch_sources()\n with tools.environment_append(RunEnvironment(self).vars):\n cmake = self._configure_cmake()\n cmake.build()\n\n def _patch_sources(self):\n for patch in self.conan_data.get(\"patches\", {}).get(self.version, []):\n tools.patch(**patch)\n tools.replace_in_file(os.path.join(self._source_subfolder, \"CMakeLists.txt\"), \"/W4\", \"\")\n\n def _configure_cmake(self):\n if self._cmake:\n return self._cmake\n self._cmake = CMake(self)\n self._cmake.definitions[\"USE_THREAD\"] = self.options.threadsafe\n self._cmake.definitions[\"BUILD_CCT\"] = self.options.build_executables\n self._cmake.definitions[\"BUILD_CS2CS\"] = self.options.build_executables\n self._cmake.definitions[\"BUILD_GEOD\"] = self.options.build_executables\n self._cmake.definitions[\"BUILD_GIE\"] = self.options.build_executables\n self._cmake.definitions[\"BUILD_PROJ\"] = self.options.build_executables\n self._cmake.definitions[\"BUILD_PROJINFO\"] = self.options.build_executables\n self._cmake.definitions[\"PROJ_DATA_SUBDIR\"] = \"res\"\n if tools.Version(self.version) < \"7.0.0\":\n self._cmake.definitions[\"PROJ_TESTS\"] = False\n self._cmake.definitions[\"BUILD_LIBPROJ_SHARED\"] = self.options.shared\n self._cmake.definitions[\"ENABLE_LTO\"] = False\n self._cmake.definitions[\"JNI_SUPPORT\"] = False\n else:\n self._cmake.definitions[\"ENABLE_TIFF\"] = self.options.with_tiff\n self._cmake.definitions[\"ENABLE_CURL\"] = self.options.with_curl\n self._cmake.definitions[\"BUILD_TESTING\"] = False\n self._cmake.definitions[\"ENABLE_IPO\"] = False\n self._cmake.definitions[\"BUILD_PROJSYNC\"] = self.options.build_executables and self.options.with_curl\n self._cmake.configure()\n return self._cmake\n\n def package(self):\n self.copy(\"COPYING\", dst=\"licenses\", src=self._source_subfolder)\n cmake = self._configure_cmake()\n cmake.install()\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n\n def package_info(self):\n proj_version = tools.Version(self.version)\n cmake_config_filename = \"proj\" if proj_version >= \"7.0.0\" else \"proj4\"\n cmake_namespace = \"PROJ\" if proj_version >= \"7.0.0\" else \"PROJ4\"\n self.cpp_info.filenames[\"cmake_find_package\"] = cmake_config_filename\n self.cpp_info.filenames[\"cmake_find_package_multi\"] = cmake_config_filename\n self.cpp_info.names[\"cmake_find_package\"] = cmake_namespace\n self.cpp_info.names[\"cmake_find_package_multi\"] = cmake_namespace\n self.cpp_info.components[\"projlib\"].names[\"cmake_find_package\"] = \"proj\"\n self.cpp_info.components[\"projlib\"].names[\"cmake_find_package_multi\"] = \"proj\"\n 
self.cpp_info.components[\"projlib\"].libs = tools.collect_libs(self)\n if self.settings.os == \"Linux\":\n self.cpp_info.components[\"projlib\"].system_libs.append(\"m\")\n if self.options.threadsafe:\n self.cpp_info.components[\"projlib\"].system_libs.append(\"pthread\")\n elif self.settings.os == \"Windows\":\n if proj_version >= \"7.0.0\":\n self.cpp_info.components[\"projlib\"].system_libs.append(\"shell32\")\n if proj_version >= \"7.1.0\":\n self.cpp_info.components[\"projlib\"].system_libs.append(\"Ole32\")\n if not self.options.shared and tools.stdcpp_library(self):\n self.cpp_info.components[\"projlib\"].system_libs.append(tools.stdcpp_library(self))\n self.cpp_info.components[\"projlib\"].requires.append(\"sqlite3::sqlite3\")\n if self.options.get_safe(\"with_tiff\"):\n self.cpp_info.components[\"projlib\"].requires.append(\"libtiff::libtiff\")\n if self.options.get_safe(\"with_curl\"):\n self.cpp_info.components[\"projlib\"].requires.append(\"libcurl::libcurl\")\n if self.options.shared and self.settings.compiler == \"Visual Studio\":\n self.cpp_info.components[\"projlib\"].defines.append(\"PROJ_MSVC_DLL_IMPORT\")\n\n res_path = os.path.join(self.package_folder, \"res\")\n self.output.info(\"Appending PROJ_LIB environment variable: {}\".format(res_path))\n self.env_info.PROJ_LIB.append(res_path)\n if self.options.build_executables:\n bin_path = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH environment variable: {}\".format(bin_path))\n self.env_info.PATH.append(bin_path)\n", "path": "recipes/proj/all/conanfile.py"}]} | 3,004 | 799 |
gh_patches_debug_38914 | rasdani/github-patches | git_diff | akvo__akvo-rsr-1783 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't add locations to update through REST API
## Test plan
GIVEN the Up app
WHEN the user tries to add an update
THEN this should not give a 400 error
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rest/serializers/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """Akvo RSR is covered by the GNU Affero General Public License.
4 See more details in the license.txt file located at the root folder of the
5 Akvo RSR module. For additional details on the GNU license please
6 see < http://www.gnu.org/licenses/agpl.html >.
7 """
8
9
10 from .benchmark import BenchmarkSerializer
11 from .benchmark_name import BenchmarknameSerializer
12 from .budget_item import BudgetItemSerializer, CountryBudgetItemSerializer
13 from .budget_item_label import BudgetItemLabelSerializer
14 from .category import CategorySerializer
15 from .country import CountrySerializer
16 from .custom_field import OrganisationCustomFieldSerializer, ProjectCustomFieldSerializer
17 from .employment import EmploymentSerializer
18 from .focus_area import FocusAreaSerializer
19 from .goal import GoalSerializer
20 from .indicator import IndicatorPeriodSerializer, IndicatorSerializer
21 from .internal_organisation_id import InternalOrganisationIDSerializer
22 from .invoice import InvoiceSerializer
23 from .keyword import KeywordSerializer
24 from .legacy_data import LegacyDataSerializer
25 from .link import LinkSerializer
26 from .organisation import OrganisationSerializer
27 from .organisation_location import (OrganisationLocationSerializer,
28 MapOrganisationLocationSerializer)
29 from .partner_site import PartnerSiteSerializer
30 from .partnership import PartnershipSerializer
31 from .planned_disbursement import PlannedDisbursementSerializer
32 from .policy_marker import PolicyMarkerSerializer
33 from .project import ProjectSerializer, ProjectExtraSerializer, ProjectUpSerializer
34 from .project_comment import ProjectCommentSerializer
35 from .project_condition import ProjectConditionSerializer
36 from .project_contact import ProjectContactSerializer
37 from .project_document import ProjectDocumentSerializer
38 from .project_location import (ProjectLocationSerializer, AdministrativeLocationSerializer,
39 MapProjectLocationSerializer)
40 from .project_update import (ProjectUpdateSerializer,
41 ProjectUpdateExtraSerializer)
42 from .project_update_location import (ProjectUpdateLocationSerializer,
43 MapProjectUpdateLocationSerializer)
44 from .publishing_status import PublishingStatusSerializer
45 from .recipient_country import RecipientCountrySerializer
46 from .region import RecipientRegionSerializer
47 from .related_project import RelatedProjectSerializer
48 from .result import ResultSerializer
49 from .sector import SectorSerializer
50 from .transaction import TransactionSerializer, TransactionSectorSerializer
51 from .typeahead import (TypeaheadCountrySerializer,
52 TypeaheadOrganisationSerializer,
53 TypeaheadProjectSerializer,
54 TypeaheadProjectUpdateSerializer)
55 from .user import UserSerializer, UserDetailsSerializer, UserPasswordSerializer
56
57 __all__ = [
58 'AdministrativeLocationSerializer',
59 'BenchmarknameSerializer',
60 'BenchmarkSerializer',
61 'BudgetItemLabelSerializer',
62 'BudgetItemSerializer',
63 'CategorySerializer',
64 'CountrySerializer',
65 'CountryBudgetItemSerializer',
66 'EmploymentSerializer',
67 'FocusAreaSerializer',
68 'GoalSerializer',
69 'IndicatorPeriodSerializer',
70 'IndicatorSerializer',
71 'InternalOrganisationIDSerializer',
72 'InvoiceSerializer',
73 'KeywordSerializer',
74 'LegacyDataSerializer',
75 'LinkSerializer',
76 'MapOrganisationLocationSerializer',
77 'MapProjectLocationSerializer',
78 'MapProjectUpdateLocationSerializer',
79 'OrganisationSerializer',
80 'OrganisationCustomFieldSerializer',
81 'OrganisationLocationSerializer',
82 'PartnershipSerializer',
83 'PartnerSiteSerializer',
84 'PlannedDisbursementSerializer',
85 'PolicyMarkerSerializer',
86 'ProjectCommentSerializer',
87 'ProjectConditionSerializer',
88 'ProjectContactSerializer',
89 'ProjectCustomFieldSerializer',
90 'ProjectDocumentSerializer',
91 'ProjectExtraSerializer',
92 'ProjectLocationSerializer',
93 'ProjectSerializer',
94 'ProjectUpdateExtraSerializer',
95 'ProjectUpdateLocationSerializer',
96 'ProjectUpdateSerializer',
97 'ProjectUpSerializer',
98 'PublishingStatusSerializer',
99 'RecipientCountrySerializer',
100 'RecipientRegionSerializer',
101 'RelatedProjectSerializer',
102 'ResultSerializer',
103 'SectorSerializer',
104 'TransactionSerializer',
105 'TransactionSectorSerializer',
106 'TypeaheadCountrySerializer',
107 'TypeaheadOrganisationSerializer',
108 'TypeaheadProjectSerializer',
109 'TypeaheadProjectUpdateSerializer',
110 'UserDetailsSerializer',
111 'UserPasswordSerializer',
112 'UserSerializer',
113 ]
114
```
Path: `akvo/rest/serializers/project_update.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Akvo RSR is covered by the GNU Affero General Public License.
3
4 See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6 """
7
8 from rest_framework import serializers
9 from akvo.rsr.models import ProjectUpdate
10 from ..fields import Base64ImageField
11 from .project_update_location import (ProjectUpdateLocationSerializer,
12 ProjectUpdateLocationExtraSerializer)
13 from .rsr_serializer import BaseRSRSerializer
14 from .user import UserSerializer
15
16
17 class ProjectUpdateSerializer(BaseRSRSerializer):
18
19 """Serializer for project updates."""
20
21 locations = ProjectUpdateLocationSerializer(source='locations', many=True, required=False,
22 allow_add_remove=True)
23 photo = Base64ImageField(required=False, allow_empty_file=True)
24
25 class Meta:
26 model = ProjectUpdate
27
28
29 class ProjectUpdateExtraSerializer(BaseRSRSerializer):
30
31 """This serializer includes data about user and connected organisation."""
32
33 photo = Base64ImageField(required=False, allow_empty_file=True)
34 primary_location = ProjectUpdateLocationExtraSerializer()
35 # Limit project data to its PK, this is needed because of Meta.depth = 2
36 project = serializers.Field(source='project.pk')
37 user = UserSerializer()
38
39 class Meta:
40 model = ProjectUpdate
41 depth = 2
42
```
Path: `akvo/rest/serializers/project_update_location.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Akvo RSR is covered by the GNU Affero General Public License.
3 See more details in the license.txt file located at the root folder of the Akvo RSR module.
4 For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
5 """
6
7 from rest_framework import serializers
8 from akvo.rsr.models import ProjectUpdateLocation
9 from ..fields import Base64ImageField
10 from .rsr_serializer import BaseRSRSerializer
11
12
13 class ProjectUpdateLocationSerializer(BaseRSRSerializer):
14
15 class Meta:
16 model = ProjectUpdateLocation
17
18
19 class ProjectUpdateLocationExtraSerializer(ProjectUpdateLocationSerializer):
20
21 # Limit update data to its PK, this is needed because of Meta.depth = 2
22 location_target = serializers.Field(source='location_target.pk')
23
24 class Meta(ProjectUpdateLocationSerializer.Meta):
25 depth = 2
26
27
28 class MapProjectUpdateSerializer(serializers.Serializer):
29
30 """To serialize the update field of the update map resource."""
31
32 id = serializers.IntegerField()
33 title = serializers.CharField()
34 url = serializers.URLField(source='get_absolute_url')
35 photo = Base64ImageField(required=False, allow_empty_file=True)
36 video = serializers.CharField(required=False)
37
38
39 class MapProjectUpdateLocationSerializer(serializers.Serializer):
40
41 """To serialize the update map resource."""
42
43 id = serializers.IntegerField()
44 latitude = serializers.FloatField()
45 longitude = serializers.FloatField()
46 update = MapProjectUpdateSerializer(source='location_target')
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/akvo/rest/serializers/__init__.py b/akvo/rest/serializers/__init__.py
--- a/akvo/rest/serializers/__init__.py
+++ b/akvo/rest/serializers/__init__.py
@@ -40,6 +40,7 @@
from .project_update import (ProjectUpdateSerializer,
ProjectUpdateExtraSerializer)
from .project_update_location import (ProjectUpdateLocationSerializer,
+ ProjectUpdateLocationNestedSerializer,
MapProjectUpdateLocationSerializer)
from .publishing_status import PublishingStatusSerializer
from .recipient_country import RecipientCountrySerializer
@@ -93,6 +94,7 @@
'ProjectSerializer',
'ProjectUpdateExtraSerializer',
'ProjectUpdateLocationSerializer',
+ 'ProjectUpdateLocationNestedSerializer',
'ProjectUpdateSerializer',
'ProjectUpSerializer',
'PublishingStatusSerializer',
diff --git a/akvo/rest/serializers/project_update.py b/akvo/rest/serializers/project_update.py
--- a/akvo/rest/serializers/project_update.py
+++ b/akvo/rest/serializers/project_update.py
@@ -8,7 +8,7 @@
from rest_framework import serializers
from akvo.rsr.models import ProjectUpdate
from ..fields import Base64ImageField
-from .project_update_location import (ProjectUpdateLocationSerializer,
+from .project_update_location import (ProjectUpdateLocationNestedSerializer,
ProjectUpdateLocationExtraSerializer)
from .rsr_serializer import BaseRSRSerializer
from .user import UserSerializer
@@ -18,8 +18,8 @@
"""Serializer for project updates."""
- locations = ProjectUpdateLocationSerializer(source='locations', many=True, required=False,
- allow_add_remove=True)
+ locations = ProjectUpdateLocationNestedSerializer(source='locations', many=True, required=False,
+ allow_add_remove=True)
photo = Base64ImageField(required=False, allow_empty_file=True)
class Meta:
diff --git a/akvo/rest/serializers/project_update_location.py b/akvo/rest/serializers/project_update_location.py
--- a/akvo/rest/serializers/project_update_location.py
+++ b/akvo/rest/serializers/project_update_location.py
@@ -16,6 +16,14 @@
model = ProjectUpdateLocation
+class ProjectUpdateLocationNestedSerializer(ProjectUpdateLocationSerializer):
+
+ class Meta(ProjectUpdateLocationSerializer.Meta):
+ # Exclude the mandatory 'location_target' field, so that it is possible to create a
+ # project update location at the same time as the project update.
+ exclude = ('location_target',)
+
+
class ProjectUpdateLocationExtraSerializer(ProjectUpdateLocationSerializer):
# Limit update data to its PK, this is needed because of Meta.depth = 2
| {"golden_diff": "diff --git a/akvo/rest/serializers/__init__.py b/akvo/rest/serializers/__init__.py\n--- a/akvo/rest/serializers/__init__.py\n+++ b/akvo/rest/serializers/__init__.py\n@@ -40,6 +40,7 @@\n from .project_update import (ProjectUpdateSerializer,\n ProjectUpdateExtraSerializer)\n from .project_update_location import (ProjectUpdateLocationSerializer,\n+ ProjectUpdateLocationNestedSerializer,\n MapProjectUpdateLocationSerializer)\n from .publishing_status import PublishingStatusSerializer\n from .recipient_country import RecipientCountrySerializer\n@@ -93,6 +94,7 @@\n 'ProjectSerializer',\n 'ProjectUpdateExtraSerializer',\n 'ProjectUpdateLocationSerializer',\n+ 'ProjectUpdateLocationNestedSerializer',\n 'ProjectUpdateSerializer',\n 'ProjectUpSerializer',\n 'PublishingStatusSerializer',\ndiff --git a/akvo/rest/serializers/project_update.py b/akvo/rest/serializers/project_update.py\n--- a/akvo/rest/serializers/project_update.py\n+++ b/akvo/rest/serializers/project_update.py\n@@ -8,7 +8,7 @@\n from rest_framework import serializers\n from akvo.rsr.models import ProjectUpdate\n from ..fields import Base64ImageField\n-from .project_update_location import (ProjectUpdateLocationSerializer,\n+from .project_update_location import (ProjectUpdateLocationNestedSerializer,\n ProjectUpdateLocationExtraSerializer)\n from .rsr_serializer import BaseRSRSerializer\n from .user import UserSerializer\n@@ -18,8 +18,8 @@\n \n \"\"\"Serializer for project updates.\"\"\"\n \n- locations = ProjectUpdateLocationSerializer(source='locations', many=True, required=False,\n- allow_add_remove=True)\n+ locations = ProjectUpdateLocationNestedSerializer(source='locations', many=True, required=False,\n+ allow_add_remove=True)\n photo = Base64ImageField(required=False, allow_empty_file=True)\n \n class Meta:\ndiff --git a/akvo/rest/serializers/project_update_location.py b/akvo/rest/serializers/project_update_location.py\n--- a/akvo/rest/serializers/project_update_location.py\n+++ b/akvo/rest/serializers/project_update_location.py\n@@ -16,6 +16,14 @@\n model = ProjectUpdateLocation\n \n \n+class ProjectUpdateLocationNestedSerializer(ProjectUpdateLocationSerializer):\n+\n+ class Meta(ProjectUpdateLocationSerializer.Meta):\n+ # Exclude the mandatory 'location_target' field, so that it is possible to create a\n+ # project update location at the same time as the project update.\n+ exclude = ('location_target',)\n+\n+\n class ProjectUpdateLocationExtraSerializer(ProjectUpdateLocationSerializer):\n \n # Limit update data to its PK, this is needed because of Meta.depth = 2\n", "issue": "Can't add locations to update through REST API\n## Test plan\n\nGIVEN the Up app\nWHEN the user tries to add an update\nTHEN this should not give a 400 error\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. 
For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\n\nfrom .benchmark import BenchmarkSerializer\nfrom .benchmark_name import BenchmarknameSerializer\nfrom .budget_item import BudgetItemSerializer, CountryBudgetItemSerializer\nfrom .budget_item_label import BudgetItemLabelSerializer\nfrom .category import CategorySerializer\nfrom .country import CountrySerializer\nfrom .custom_field import OrganisationCustomFieldSerializer, ProjectCustomFieldSerializer\nfrom .employment import EmploymentSerializer\nfrom .focus_area import FocusAreaSerializer\nfrom .goal import GoalSerializer\nfrom .indicator import IndicatorPeriodSerializer, IndicatorSerializer\nfrom .internal_organisation_id import InternalOrganisationIDSerializer\nfrom .invoice import InvoiceSerializer\nfrom .keyword import KeywordSerializer\nfrom .legacy_data import LegacyDataSerializer\nfrom .link import LinkSerializer\nfrom .organisation import OrganisationSerializer\nfrom .organisation_location import (OrganisationLocationSerializer,\n MapOrganisationLocationSerializer)\nfrom .partner_site import PartnerSiteSerializer\nfrom .partnership import PartnershipSerializer\nfrom .planned_disbursement import PlannedDisbursementSerializer\nfrom .policy_marker import PolicyMarkerSerializer\nfrom .project import ProjectSerializer, ProjectExtraSerializer, ProjectUpSerializer\nfrom .project_comment import ProjectCommentSerializer\nfrom .project_condition import ProjectConditionSerializer\nfrom .project_contact import ProjectContactSerializer\nfrom .project_document import ProjectDocumentSerializer\nfrom .project_location import (ProjectLocationSerializer, AdministrativeLocationSerializer,\n MapProjectLocationSerializer)\nfrom .project_update import (ProjectUpdateSerializer,\n ProjectUpdateExtraSerializer)\nfrom .project_update_location import (ProjectUpdateLocationSerializer,\n MapProjectUpdateLocationSerializer)\nfrom .publishing_status import PublishingStatusSerializer\nfrom .recipient_country import RecipientCountrySerializer\nfrom .region import RecipientRegionSerializer\nfrom .related_project import RelatedProjectSerializer\nfrom .result import ResultSerializer\nfrom .sector import SectorSerializer\nfrom .transaction import TransactionSerializer, TransactionSectorSerializer\nfrom .typeahead import (TypeaheadCountrySerializer,\n TypeaheadOrganisationSerializer,\n TypeaheadProjectSerializer,\n TypeaheadProjectUpdateSerializer)\nfrom .user import UserSerializer, UserDetailsSerializer, UserPasswordSerializer\n\n__all__ = [\n 'AdministrativeLocationSerializer',\n 'BenchmarknameSerializer',\n 'BenchmarkSerializer',\n 'BudgetItemLabelSerializer',\n 'BudgetItemSerializer',\n 'CategorySerializer',\n 'CountrySerializer',\n 'CountryBudgetItemSerializer',\n 'EmploymentSerializer',\n 'FocusAreaSerializer',\n 'GoalSerializer',\n 'IndicatorPeriodSerializer',\n 'IndicatorSerializer',\n 'InternalOrganisationIDSerializer',\n 'InvoiceSerializer',\n 'KeywordSerializer',\n 'LegacyDataSerializer',\n 'LinkSerializer',\n 'MapOrganisationLocationSerializer',\n 'MapProjectLocationSerializer',\n 'MapProjectUpdateLocationSerializer',\n 'OrganisationSerializer',\n 'OrganisationCustomFieldSerializer',\n 'OrganisationLocationSerializer',\n 'PartnershipSerializer',\n 'PartnerSiteSerializer',\n 'PlannedDisbursementSerializer',\n 'PolicyMarkerSerializer',\n 'ProjectCommentSerializer',\n 'ProjectConditionSerializer',\n 'ProjectContactSerializer',\n 'ProjectCustomFieldSerializer',\n 'ProjectDocumentSerializer',\n 
'ProjectExtraSerializer',\n 'ProjectLocationSerializer',\n 'ProjectSerializer',\n 'ProjectUpdateExtraSerializer',\n 'ProjectUpdateLocationSerializer',\n 'ProjectUpdateSerializer',\n 'ProjectUpSerializer',\n 'PublishingStatusSerializer',\n 'RecipientCountrySerializer',\n 'RecipientRegionSerializer',\n 'RelatedProjectSerializer',\n 'ResultSerializer',\n 'SectorSerializer',\n 'TransactionSerializer',\n 'TransactionSectorSerializer',\n 'TypeaheadCountrySerializer',\n 'TypeaheadOrganisationSerializer',\n 'TypeaheadProjectSerializer',\n 'TypeaheadProjectUpdateSerializer',\n 'UserDetailsSerializer',\n 'UserPasswordSerializer',\n 'UserSerializer',\n]\n", "path": "akvo/rest/serializers/__init__.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the Akvo RSR module.\nFor additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom rest_framework import serializers\nfrom akvo.rsr.models import ProjectUpdate\nfrom ..fields import Base64ImageField\nfrom .project_update_location import (ProjectUpdateLocationSerializer,\n ProjectUpdateLocationExtraSerializer)\nfrom .rsr_serializer import BaseRSRSerializer\nfrom .user import UserSerializer\n\n\nclass ProjectUpdateSerializer(BaseRSRSerializer):\n\n \"\"\"Serializer for project updates.\"\"\"\n\n locations = ProjectUpdateLocationSerializer(source='locations', many=True, required=False,\n allow_add_remove=True)\n photo = Base64ImageField(required=False, allow_empty_file=True)\n\n class Meta:\n model = ProjectUpdate\n\n\nclass ProjectUpdateExtraSerializer(BaseRSRSerializer):\n\n \"\"\"This serializer includes data about user and connected organisation.\"\"\"\n\n photo = Base64ImageField(required=False, allow_empty_file=True)\n primary_location = ProjectUpdateLocationExtraSerializer()\n # Limit project data to its PK, this is needed because of Meta.depth = 2\n project = serializers.Field(source='project.pk')\n user = UserSerializer()\n\n class Meta:\n model = ProjectUpdate\n depth = 2\n", "path": "akvo/rest/serializers/project_update.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\nSee more details in the license.txt file located at the root folder of the Akvo RSR module.\nFor additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom rest_framework import serializers\nfrom akvo.rsr.models import ProjectUpdateLocation\nfrom ..fields import Base64ImageField\nfrom .rsr_serializer import BaseRSRSerializer\n\n\nclass ProjectUpdateLocationSerializer(BaseRSRSerializer):\n\n class Meta:\n model = ProjectUpdateLocation\n\n\nclass ProjectUpdateLocationExtraSerializer(ProjectUpdateLocationSerializer):\n\n # Limit update data to its PK, this is needed because of Meta.depth = 2\n location_target = serializers.Field(source='location_target.pk')\n\n class Meta(ProjectUpdateLocationSerializer.Meta):\n depth = 2\n\n\nclass MapProjectUpdateSerializer(serializers.Serializer):\n\n \"\"\"To serialize the update field of the update map resource.\"\"\"\n\n id = serializers.IntegerField()\n title = serializers.CharField()\n url = serializers.URLField(source='get_absolute_url')\n photo = Base64ImageField(required=False, allow_empty_file=True)\n video = serializers.CharField(required=False)\n\n\nclass MapProjectUpdateLocationSerializer(serializers.Serializer):\n\n \"\"\"To serialize the update 
map resource.\"\"\"\n\n id = serializers.IntegerField()\n latitude = serializers.FloatField()\n longitude = serializers.FloatField()\n update = MapProjectUpdateSerializer(source='location_target')\n", "path": "akvo/rest/serializers/project_update_location.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\n\nfrom .benchmark import BenchmarkSerializer\nfrom .benchmark_name import BenchmarknameSerializer\nfrom .budget_item import BudgetItemSerializer, CountryBudgetItemSerializer\nfrom .budget_item_label import BudgetItemLabelSerializer\nfrom .category import CategorySerializer\nfrom .country import CountrySerializer\nfrom .custom_field import OrganisationCustomFieldSerializer, ProjectCustomFieldSerializer\nfrom .employment import EmploymentSerializer\nfrom .focus_area import FocusAreaSerializer\nfrom .goal import GoalSerializer\nfrom .indicator import IndicatorPeriodSerializer, IndicatorSerializer\nfrom .internal_organisation_id import InternalOrganisationIDSerializer\nfrom .invoice import InvoiceSerializer\nfrom .keyword import KeywordSerializer\nfrom .legacy_data import LegacyDataSerializer\nfrom .link import LinkSerializer\nfrom .organisation import OrganisationSerializer\nfrom .organisation_location import (OrganisationLocationSerializer,\n MapOrganisationLocationSerializer)\nfrom .partner_site import PartnerSiteSerializer\nfrom .partnership import PartnershipSerializer\nfrom .planned_disbursement import PlannedDisbursementSerializer\nfrom .policy_marker import PolicyMarkerSerializer\nfrom .project import ProjectSerializer, ProjectExtraSerializer, ProjectUpSerializer\nfrom .project_comment import ProjectCommentSerializer\nfrom .project_condition import ProjectConditionSerializer\nfrom .project_contact import ProjectContactSerializer\nfrom .project_document import ProjectDocumentSerializer\nfrom .project_location import (ProjectLocationSerializer, AdministrativeLocationSerializer,\n MapProjectLocationSerializer)\nfrom .project_update import (ProjectUpdateSerializer,\n ProjectUpdateExtraSerializer)\nfrom .project_update_location import (ProjectUpdateLocationSerializer,\n ProjectUpdateLocationNestedSerializer,\n MapProjectUpdateLocationSerializer)\nfrom .publishing_status import PublishingStatusSerializer\nfrom .recipient_country import RecipientCountrySerializer\nfrom .region import RecipientRegionSerializer\nfrom .related_project import RelatedProjectSerializer\nfrom .result import ResultSerializer\nfrom .sector import SectorSerializer\nfrom .transaction import TransactionSerializer, TransactionSectorSerializer\nfrom .typeahead import (TypeaheadCountrySerializer,\n TypeaheadOrganisationSerializer,\n TypeaheadProjectSerializer,\n TypeaheadProjectUpdateSerializer)\nfrom .user import UserSerializer, UserDetailsSerializer, UserPasswordSerializer\n\n__all__ = [\n 'AdministrativeLocationSerializer',\n 'BenchmarknameSerializer',\n 'BenchmarkSerializer',\n 'BudgetItemLabelSerializer',\n 'BudgetItemSerializer',\n 'CategorySerializer',\n 'CountrySerializer',\n 'CountryBudgetItemSerializer',\n 'EmploymentSerializer',\n 'FocusAreaSerializer',\n 'GoalSerializer',\n 'IndicatorPeriodSerializer',\n 'IndicatorSerializer',\n 'InternalOrganisationIDSerializer',\n 'InvoiceSerializer',\n 'KeywordSerializer',\n 
'LegacyDataSerializer',\n 'LinkSerializer',\n 'MapOrganisationLocationSerializer',\n 'MapProjectLocationSerializer',\n 'MapProjectUpdateLocationSerializer',\n 'OrganisationSerializer',\n 'OrganisationCustomFieldSerializer',\n 'OrganisationLocationSerializer',\n 'PartnershipSerializer',\n 'PartnerSiteSerializer',\n 'PlannedDisbursementSerializer',\n 'PolicyMarkerSerializer',\n 'ProjectCommentSerializer',\n 'ProjectConditionSerializer',\n 'ProjectContactSerializer',\n 'ProjectCustomFieldSerializer',\n 'ProjectDocumentSerializer',\n 'ProjectExtraSerializer',\n 'ProjectLocationSerializer',\n 'ProjectSerializer',\n 'ProjectUpdateExtraSerializer',\n 'ProjectUpdateLocationSerializer',\n 'ProjectUpdateLocationNestedSerializer',\n 'ProjectUpdateSerializer',\n 'ProjectUpSerializer',\n 'PublishingStatusSerializer',\n 'RecipientCountrySerializer',\n 'RecipientRegionSerializer',\n 'RelatedProjectSerializer',\n 'ResultSerializer',\n 'SectorSerializer',\n 'TransactionSerializer',\n 'TransactionSectorSerializer',\n 'TypeaheadCountrySerializer',\n 'TypeaheadOrganisationSerializer',\n 'TypeaheadProjectSerializer',\n 'TypeaheadProjectUpdateSerializer',\n 'UserDetailsSerializer',\n 'UserPasswordSerializer',\n 'UserSerializer',\n]\n", "path": "akvo/rest/serializers/__init__.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the Akvo RSR module.\nFor additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom rest_framework import serializers\nfrom akvo.rsr.models import ProjectUpdate\nfrom ..fields import Base64ImageField\nfrom .project_update_location import (ProjectUpdateLocationNestedSerializer,\n ProjectUpdateLocationExtraSerializer)\nfrom .rsr_serializer import BaseRSRSerializer\nfrom .user import UserSerializer\n\n\nclass ProjectUpdateSerializer(BaseRSRSerializer):\n\n \"\"\"Serializer for project updates.\"\"\"\n\n locations = ProjectUpdateLocationNestedSerializer(source='locations', many=True, required=False,\n allow_add_remove=True)\n photo = Base64ImageField(required=False, allow_empty_file=True)\n\n class Meta:\n model = ProjectUpdate\n\n\nclass ProjectUpdateExtraSerializer(BaseRSRSerializer):\n\n \"\"\"This serializer includes data about user and connected organisation.\"\"\"\n\n photo = Base64ImageField(required=False, allow_empty_file=True)\n primary_location = ProjectUpdateLocationExtraSerializer()\n # Limit project data to its PK, this is needed because of Meta.depth = 2\n project = serializers.Field(source='project.pk')\n user = UserSerializer()\n\n class Meta:\n model = ProjectUpdate\n depth = 2\n", "path": "akvo/rest/serializers/project_update.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\nSee more details in the license.txt file located at the root folder of the Akvo RSR module.\nFor additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom rest_framework import serializers\nfrom akvo.rsr.models import ProjectUpdateLocation\nfrom ..fields import Base64ImageField\nfrom .rsr_serializer import BaseRSRSerializer\n\n\nclass ProjectUpdateLocationSerializer(BaseRSRSerializer):\n\n class Meta:\n model = ProjectUpdateLocation\n\n\nclass ProjectUpdateLocationNestedSerializer(ProjectUpdateLocationSerializer):\n\n class Meta(ProjectUpdateLocationSerializer.Meta):\n # Exclude the mandatory 'location_target' 
field, so that it is possible to create a\n # project update location at the same time as the project update.\n exclude = ('location_target',)\n\n\nclass ProjectUpdateLocationExtraSerializer(ProjectUpdateLocationSerializer):\n\n # Limit update data to its PK, this is needed because of Meta.depth = 2\n location_target = serializers.Field(source='location_target.pk')\n\n class Meta(ProjectUpdateLocationSerializer.Meta):\n depth = 2\n\n\nclass MapProjectUpdateSerializer(serializers.Serializer):\n\n \"\"\"To serialize the update field of the update map resource.\"\"\"\n\n id = serializers.IntegerField()\n title = serializers.CharField()\n url = serializers.URLField(source='get_absolute_url')\n photo = Base64ImageField(required=False, allow_empty_file=True)\n video = serializers.CharField(required=False)\n\n\nclass MapProjectUpdateLocationSerializer(serializers.Serializer):\n\n \"\"\"To serialize the update map resource.\"\"\"\n\n id = serializers.IntegerField()\n latitude = serializers.FloatField()\n longitude = serializers.FloatField()\n update = MapProjectUpdateSerializer(source='location_target')\n", "path": "akvo/rest/serializers/project_update_location.py"}]} | 2,208 | 600 |
gh_patches_debug_2290 | rasdani/github-patches | git_diff | TheAlgorithms__Python-4779 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug with union in disjoint_set
https://github.com/TheAlgorithms/Python/blob/master/data_structures/disjoint_set/disjoint_set.py
```python
def union_set(x, y):
"""
union two sets.
set with bigger rank should be parent, so that the
disjoint set tree will be more flat.
"""
x, y = find_set(x), find_set(y)
if x.rank > y.rank:
y.parent = x
else:
x.parent = y
if x.rank == y.rank:
y.rank += 1
```
here need check if x==y
Bug with union in disjoint_set
https://github.com/TheAlgorithms/Python/blob/master/data_structures/disjoint_set/disjoint_set.py
```python
def union_set(x, y):
"""
union two sets.
set with bigger rank should be parent, so that the
disjoint set tree will be more flat.
"""
x, y = find_set(x), find_set(y)
if x.rank > y.rank:
y.parent = x
else:
x.parent = y
if x.rank == y.rank:
y.rank += 1
```
here need check if x==y
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `data_structures/disjoint_set/disjoint_set.py`
Content:
```
1 """
2 disjoint set
3 Reference: https://en.wikipedia.org/wiki/Disjoint-set_data_structure
4 """
5
6
7 class Node:
8 def __init__(self, data):
9 self.data = data
10
11
12 def make_set(x):
13 """
14 make x as a set.
15 """
16 # rank is the distance from x to its' parent
17 # root's rank is 0
18 x.rank = 0
19 x.parent = x
20
21
22 def union_set(x, y):
23 """
24 union two sets.
25 set with bigger rank should be parent, so that the
26 disjoint set tree will be more flat.
27 """
28 x, y = find_set(x), find_set(y)
29 if x.rank > y.rank:
30 y.parent = x
31 else:
32 x.parent = y
33 if x.rank == y.rank:
34 y.rank += 1
35
36
37 def find_set(x):
38 """
39 return the parent of x
40 """
41 if x != x.parent:
42 x.parent = find_set(x.parent)
43 return x.parent
44
45
46 def find_python_set(node: Node) -> set:
47 """
48 Return a Python Standard Library set that contains i.
49 """
50 sets = ({0, 1, 2}, {3, 4, 5})
51 for s in sets:
52 if node.data in s:
53 return s
54 raise ValueError(f"{node.data} is not in {sets}")
55
56
57 def test_disjoint_set():
58 """
59 >>> test_disjoint_set()
60 """
61 vertex = [Node(i) for i in range(6)]
62 for v in vertex:
63 make_set(v)
64
65 union_set(vertex[0], vertex[1])
66 union_set(vertex[1], vertex[2])
67 union_set(vertex[3], vertex[4])
68 union_set(vertex[3], vertex[5])
69
70 for node0 in vertex:
71 for node1 in vertex:
72 if find_python_set(node0).isdisjoint(find_python_set(node1)):
73 assert find_set(node0) != find_set(node1)
74 else:
75 assert find_set(node0) == find_set(node1)
76
77
78 if __name__ == "__main__":
79 test_disjoint_set()
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/data_structures/disjoint_set/disjoint_set.py b/data_structures/disjoint_set/disjoint_set.py
--- a/data_structures/disjoint_set/disjoint_set.py
+++ b/data_structures/disjoint_set/disjoint_set.py
@@ -26,7 +26,10 @@
disjoint set tree will be more flat.
"""
x, y = find_set(x), find_set(y)
- if x.rank > y.rank:
+ if x == y:
+ return
+
+ elif x.rank > y.rank:
y.parent = x
else:
x.parent = y
| {"golden_diff": "diff --git a/data_structures/disjoint_set/disjoint_set.py b/data_structures/disjoint_set/disjoint_set.py\n--- a/data_structures/disjoint_set/disjoint_set.py\n+++ b/data_structures/disjoint_set/disjoint_set.py\n@@ -26,7 +26,10 @@\n disjoint set tree will be more flat.\r\n \"\"\"\r\n x, y = find_set(x), find_set(y)\r\n- if x.rank > y.rank:\r\n+ if x == y:\r\n+ return\r\n+\r\n+ elif x.rank > y.rank:\r\n y.parent = x\r\n else:\r\n x.parent = y\n", "issue": "Bug with union in disjoint_set\nhttps://github.com/TheAlgorithms/Python/blob/master/data_structures/disjoint_set/disjoint_set.py\r\n```python\r\ndef union_set(x, y):\r\n \"\"\"\r\n union two sets.\r\n set with bigger rank should be parent, so that the\r\n disjoint set tree will be more flat.\r\n \"\"\"\r\n x, y = find_set(x), find_set(y)\r\n if x.rank > y.rank:\r\n y.parent = x\r\n else:\r\n x.parent = y\r\n if x.rank == y.rank:\r\n y.rank += 1\r\n```\r\n\r\nhere need check if x==y\r\n\nBug with union in disjoint_set\nhttps://github.com/TheAlgorithms/Python/blob/master/data_structures/disjoint_set/disjoint_set.py\r\n```python\r\ndef union_set(x, y):\r\n \"\"\"\r\n union two sets.\r\n set with bigger rank should be parent, so that the\r\n disjoint set tree will be more flat.\r\n \"\"\"\r\n x, y = find_set(x), find_set(y)\r\n if x.rank > y.rank:\r\n y.parent = x\r\n else:\r\n x.parent = y\r\n if x.rank == y.rank:\r\n y.rank += 1\r\n```\r\n\r\nhere need check if x==y\r\n\n", "before_files": [{"content": "\"\"\"\r\n disjoint set\r\n Reference: https://en.wikipedia.org/wiki/Disjoint-set_data_structure\r\n\"\"\"\r\n\r\n\r\nclass Node:\r\n def __init__(self, data):\r\n self.data = data\r\n\r\n\r\ndef make_set(x):\r\n \"\"\"\r\n make x as a set.\r\n \"\"\"\r\n # rank is the distance from x to its' parent\r\n # root's rank is 0\r\n x.rank = 0\r\n x.parent = x\r\n\r\n\r\ndef union_set(x, y):\r\n \"\"\"\r\n union two sets.\r\n set with bigger rank should be parent, so that the\r\n disjoint set tree will be more flat.\r\n \"\"\"\r\n x, y = find_set(x), find_set(y)\r\n if x.rank > y.rank:\r\n y.parent = x\r\n else:\r\n x.parent = y\r\n if x.rank == y.rank:\r\n y.rank += 1\r\n\r\n\r\ndef find_set(x):\r\n \"\"\"\r\n return the parent of x\r\n \"\"\"\r\n if x != x.parent:\r\n x.parent = find_set(x.parent)\r\n return x.parent\r\n\r\n\r\ndef find_python_set(node: Node) -> set:\r\n \"\"\"\r\n Return a Python Standard Library set that contains i.\r\n \"\"\"\r\n sets = ({0, 1, 2}, {3, 4, 5})\r\n for s in sets:\r\n if node.data in s:\r\n return s\r\n raise ValueError(f\"{node.data} is not in {sets}\")\r\n\r\n\r\ndef test_disjoint_set():\r\n \"\"\"\r\n >>> test_disjoint_set()\r\n \"\"\"\r\n vertex = [Node(i) for i in range(6)]\r\n for v in vertex:\r\n make_set(v)\r\n\r\n union_set(vertex[0], vertex[1])\r\n union_set(vertex[1], vertex[2])\r\n union_set(vertex[3], vertex[4])\r\n union_set(vertex[3], vertex[5])\r\n\r\n for node0 in vertex:\r\n for node1 in vertex:\r\n if find_python_set(node0).isdisjoint(find_python_set(node1)):\r\n assert find_set(node0) != find_set(node1)\r\n else:\r\n assert find_set(node0) == find_set(node1)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n test_disjoint_set()\r\n", "path": "data_structures/disjoint_set/disjoint_set.py"}], "after_files": [{"content": "\"\"\"\r\n disjoint set\r\n Reference: https://en.wikipedia.org/wiki/Disjoint-set_data_structure\r\n\"\"\"\r\n\r\n\r\nclass Node:\r\n def __init__(self, data):\r\n self.data = data\r\n\r\n\r\ndef make_set(x):\r\n \"\"\"\r\n make x as a set.\r\n \"\"\"\r\n # rank is the 
distance from x to its' parent\r\n # root's rank is 0\r\n x.rank = 0\r\n x.parent = x\r\n\r\n\r\ndef union_set(x, y):\r\n \"\"\"\r\n union two sets.\r\n set with bigger rank should be parent, so that the\r\n disjoint set tree will be more flat.\r\n \"\"\"\r\n x, y = find_set(x), find_set(y)\r\n if x == y:\r\n return\r\n\r\n elif x.rank > y.rank:\r\n y.parent = x\r\n else:\r\n x.parent = y\r\n if x.rank == y.rank:\r\n y.rank += 1\r\n\r\n\r\ndef find_set(x):\r\n \"\"\"\r\n return the parent of x\r\n \"\"\"\r\n if x != x.parent:\r\n x.parent = find_set(x.parent)\r\n return x.parent\r\n\r\n\r\ndef find_python_set(node: Node) -> set:\r\n \"\"\"\r\n Return a Python Standard Library set that contains i.\r\n \"\"\"\r\n sets = ({0, 1, 2}, {3, 4, 5})\r\n for s in sets:\r\n if node.data in s:\r\n return s\r\n raise ValueError(f\"{node.data} is not in {sets}\")\r\n\r\n\r\ndef test_disjoint_set():\r\n \"\"\"\r\n >>> test_disjoint_set()\r\n \"\"\"\r\n vertex = [Node(i) for i in range(6)]\r\n for v in vertex:\r\n make_set(v)\r\n\r\n union_set(vertex[0], vertex[1])\r\n union_set(vertex[1], vertex[2])\r\n union_set(vertex[3], vertex[4])\r\n union_set(vertex[3], vertex[5])\r\n\r\n for node0 in vertex:\r\n for node1 in vertex:\r\n if find_python_set(node0).isdisjoint(find_python_set(node1)):\r\n assert find_set(node0) != find_set(node1)\r\n else:\r\n assert find_set(node0) == find_set(node1)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n test_disjoint_set()\r\n", "path": "data_structures/disjoint_set/disjoint_set.py"}]} | 1,150 | 135 |
gh_patches_debug_25461 | rasdani/github-patches | git_diff | cornellius-gp__gpytorch-785 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] get_fantasy_model does not work for SGPR with InducingPointKernel
# 🐛 Bug
Not sure if this should be considered a bug or a feature request, but gpytorch's implementation of SGPR using InducingPointKernel does not appear to support get_fantasy_model.
## To reproduce
I am including the smallest mwe (or should I say mnwe) here. Note that I get the same behaviour by taking the [example tutorial for SGPR](https://gpytorch.readthedocs.io/en/latest/examples/05_Scalable_GP_Regression_Multidimensional/SGPR_Example_CUDA.html) and adding a get_fantasy_model call at the end. I can post that too if required, but it is longer and might clutter the ticket.
**Code snippet to reproduce**
```python
import gpytorch
import torch
from gpytorch.kernels import ScaleKernel, RBFKernel, InducingPointKernel
from gpytorch.distributions import MultivariateNormal
from gpytorch.likelihoods import GaussianLikelihood
from gpytorch.means import ConstantMean
class GPRegressionModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(GPRegressionModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = ConstantMean()
self.base_covar_module = ScaleKernel(RBFKernel())
self.covar_module = InducingPointKernel(self.base_covar_module, inducing_points=train_x[:500, :], likelihood=likelihood)
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return MultivariateNormal(mean_x, covar_x)
train_X = torch.randn((100,5)).to("cpu")
train_y = torch.randn((100)).to("cpu")
likelihood = GaussianLikelihood()
model = GPRegressionModel(train_X, train_y, likelihood)
model.train()
model.eval()
test_pred = model(torch.randn((1,5)).to("cpu"))
model = model.get_fantasy_model(torch.randn((1,5)).to("cpu"), torch.randn((1)).to("cpu"))
```
**Stack trace/error message**
```
Traceback (most recent call last):
File "mwe_sgpr_fantasy.py", line 31, in <module>
model = model.get_fantasy_model(torch.randn((1,5)).to("cpu"), torch.randn((1)).to("cpu"))
File "/home/user/miniconda3/lib/python3.7/site-packages/gpytorch/models/exact_gp.py", line 173, in get_fantasy_model
new_model = deepcopy(self)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 306, in _reconstruct
value = deepcopy(value, memo)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 280, in _reconstruct
state = deepcopy(state, memo)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 150, in deepcopy
y = copier(x, memo)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 240, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/user/miniconda3/lib/python3.7/copy.py", line 161, in deepcopy
y = copier(memo)
File "/home/user/miniconda3/lib/python3.7/site-packages/torch/tensor.py", line 23, in __deepcopy__
raise RuntimeError("Only Tensors created explicitly by the user "
RuntimeError: Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment
```
## Expected Behavior
I would expect a fantasized model to be returned efficiently.
## System information
**Please complete the following information:**
- GPyTorch Version 0.3.3
- PyTorch Version 1.1.0
- Ubuntu 18.04
## Additional context
It seems that during the update, the `new_model = deepcopy(self)` tries to copy `self._inducing_inv_root` but detects that it is trainable by autograd and balks. I guess gpytorch made this design choice because of the goal of optimizing the inducing points as a hyperparameter, but as a tradeoff it does not allow for efficient updates.
So far I have tried replacing the inducing points with a non-trainable version by setting `requires_grad` to `False`, but it does not seem to help. I would guess that [any of these tensor multiplications](https://github.com/cornellius-gp/gpytorch/blob/master/gpytorch/kernels/inducing_point_kernel.py#L45-L47) in the implementation of `_inducing_inv_root` could end up reactivating autograd, and I am afraid that without more knowledge of gpytorch's internals, patching them one-by-one might turn into a long game of whack-a-mole.
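
To make the failure mode concrete, here is a minimal, self-contained sketch of the caching pattern that trips `deepcopy`, together with one possible workaround: a custom `__deepcopy__` that temporarily drops the cached, non-leaf tensor and reattaches it afterwards. `CachedKernel` and its attribute names are illustrative stand-ins, not the real gpytorch classes:

```python
import copy
import torch

class CachedKernel:
    """Toy stand-in for a kernel that caches a tensor derived from a parameter."""

    def __init__(self):
        self.inducing_points = torch.nn.Parameter(torch.randn(5, 2))
        # Derived from a parameter, so it sits inside the autograd graph;
        # torch refuses to deepcopy such non-leaf tensors.
        self._cached_kernel_inv_root = self.inducing_points * 2.0

    def __deepcopy__(self, memo):
        # Detach the problematic cache from the instance dict ...
        cached = self.__dict__.pop("_cached_kernel_inv_root", None)
        deepcopy_method = self.__deepcopy__
        self.__deepcopy__ = None          # fall back to the default copy path
        cp = copy.deepcopy(self, memo)
        self.__deepcopy__ = deepcopy_method
        cp.__deepcopy__ = deepcopy_method
        # ... and share it between the original and the copy afterwards.
        if cached is not None:
            self._cached_kernel_inv_root = cached
            cp._cached_kernel_inv_root = cached
        return cp

copy.deepcopy(CachedKernel())  # succeeds; without __deepcopy__ it raises the RuntimeError above
```

The point of the sketch is only that the copy problem can be handled at copy time instead of by toggling `requires_grad` on the inducing points.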
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gpytorch/kernels/inducing_point_kernel.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import math
4 import torch
5 from .kernel import Kernel
6 from ..lazy import delazify, DiagLazyTensor, MatmulLazyTensor, RootLazyTensor, PsdSumLazyTensor
7 from ..distributions import MultivariateNormal
8 from ..mlls import InducingPointKernelAddedLossTerm
9 from ..utils.cholesky import psd_safe_cholesky
10
11
12 class InducingPointKernel(Kernel):
13 def __init__(self, base_kernel, inducing_points, likelihood, active_dims=None):
14 super(InducingPointKernel, self).__init__(active_dims=active_dims)
15 self.base_kernel = base_kernel
16 self.likelihood = likelihood
17
18 if inducing_points.ndimension() == 1:
19 inducing_points = inducing_points.unsqueeze(-1)
20 if inducing_points.ndimension() != 2:
21 raise RuntimeError("Inducing points should be 2 dimensional")
22 self.register_parameter(name="inducing_points", parameter=torch.nn.Parameter(inducing_points))
23 self.register_added_loss_term("inducing_point_loss_term")
24
25 def train(self, mode=True):
26 if hasattr(self, "_cached_kernel_mat"):
27 del self._cached_kernel_mat
28 return super(InducingPointKernel, self).train(mode)
29
30 @property
31 def _inducing_mat(self):
32 if not self.training and hasattr(self, "_cached_kernel_mat"):
33 return self._cached_kernel_mat
34 else:
35 res = delazify(self.base_kernel(self.inducing_points, self.inducing_points))
36 if not self.training:
37 self._cached_kernel_mat = res
38 return res
39
40 @property
41 def _inducing_inv_root(self):
42 if not self.training and hasattr(self, "_cached_kernel_inv_root"):
43 return self._cached_kernel_inv_root
44 else:
45 chol = psd_safe_cholesky(self._inducing_mat, upper=True)
46 eye = torch.eye(chol.size(-1), device=chol.device, dtype=chol.dtype)
47 inv_root = torch.triangular_solve(eye, chol)[0]
48
49 res = inv_root
50 if not self.training:
51 self._cached_kernel_inv_root = res
52 return res
53
54 def _get_covariance(self, x1, x2):
55 k_ux1 = delazify(self.base_kernel(x1, self.inducing_points))
56 if torch.equal(x1, x2):
57 covar = RootLazyTensor(k_ux1.matmul(self._inducing_inv_root))
58
59 # Diagonal correction for predictive posterior
60 correction = (self.base_kernel(x1, x2, diag=True) - covar.diag()).clamp(0, math.inf)
61 covar = PsdSumLazyTensor(covar, DiagLazyTensor(correction))
62 else:
63 k_ux2 = delazify(self.base_kernel(x2, self.inducing_points))
64 covar = MatmulLazyTensor(
65 k_ux1.matmul(self._inducing_inv_root), k_ux2.matmul(self._inducing_inv_root).transpose(-1, -2)
66 )
67
68 return covar
69
70 def _covar_diag(self, inputs):
71 if inputs.ndimension() == 1:
72 inputs = inputs.unsqueeze(1)
73
74 # Get diagonal of covar
75 covar_diag = delazify(self.base_kernel(inputs, diag=True))
76 return DiagLazyTensor(covar_diag)
77
78 def forward(self, x1, x2, diag=False, **kwargs):
79 covar = self._get_covariance(x1, x2)
80
81 if self.training:
82 if not torch.equal(x1, x2):
83 raise RuntimeError("x1 should equal x2 in training mode")
84 zero_mean = torch.zeros_like(x1.select(-1, 0))
85 new_added_loss_term = InducingPointKernelAddedLossTerm(
86 MultivariateNormal(zero_mean, self._covar_diag(x1)),
87 MultivariateNormal(zero_mean, covar),
88 self.likelihood,
89 )
90 self.update_added_loss_term("inducing_point_loss_term", new_added_loss_term)
91
92 if diag:
93 return covar.diag()
94 else:
95 return covar
96
97 def num_outputs_per_input(self, x1, x2):
98 return self.base_kernel.num_outputs_per_input(x1, x2)
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gpytorch/kernels/inducing_point_kernel.py b/gpytorch/kernels/inducing_point_kernel.py
--- a/gpytorch/kernels/inducing_point_kernel.py
+++ b/gpytorch/kernels/inducing_point_kernel.py
@@ -2,6 +2,7 @@
import math
import torch
+import copy
from .kernel import Kernel
from ..lazy import delazify, DiagLazyTensor, MatmulLazyTensor, RootLazyTensor, PsdSumLazyTensor
from ..distributions import MultivariateNormal
@@ -96,3 +97,33 @@
def num_outputs_per_input(self, x1, x2):
return self.base_kernel.num_outputs_per_input(x1, x2)
+
+ def __deepcopy__(self, memo):
+ replace_inv_root = False
+ replace_kernel_mat = False
+
+ if hasattr(self, "_cached_kernel_inv_root"):
+ replace_inv_root = True
+ kernel_inv_root = self._cached_kernel_inv_root
+ self._cached_kernel_inv_root = None
+ if hasattr(self, "_cached_kernel_mat"):
+ replace_kernel_mat = True
+ kernel_mat = self._cached_kernel_mat
+ self._cached_kernel_mat = None
+
+ deepcopy_method = self.__deepcopy__
+ self.__deepcopy__ = None
+ cp = copy.deepcopy(self, memo)
+
+ self.__deepcopy__ = deepcopy_method
+ cp.__deepcopy__ = deepcopy_method
+
+ if replace_inv_root:
+ self._cached_kernel_inv_root = kernel_inv_root
+ cp._cached_kernel_inv_root = kernel_inv_root
+
+ if replace_kernel_mat:
+ self._cached_kernel_mat = kernel_mat
+ cp._cached_kernel_mat = kernel_mat
+
+ return cp
| {"golden_diff": "diff --git a/gpytorch/kernels/inducing_point_kernel.py b/gpytorch/kernels/inducing_point_kernel.py\n--- a/gpytorch/kernels/inducing_point_kernel.py\n+++ b/gpytorch/kernels/inducing_point_kernel.py\n@@ -2,6 +2,7 @@\n \n import math\n import torch\n+import copy\n from .kernel import Kernel\n from ..lazy import delazify, DiagLazyTensor, MatmulLazyTensor, RootLazyTensor, PsdSumLazyTensor\n from ..distributions import MultivariateNormal\n@@ -96,3 +97,33 @@\n \n def num_outputs_per_input(self, x1, x2):\n return self.base_kernel.num_outputs_per_input(x1, x2)\n+\n+ def __deepcopy__(self, memo):\n+ replace_inv_root = False\n+ replace_kernel_mat = False\n+\n+ if hasattr(self, \"_cached_kernel_inv_root\"):\n+ replace_inv_root = True\n+ kernel_inv_root = self._cached_kernel_inv_root\n+ self._cached_kernel_inv_root = None\n+ if hasattr(self, \"_cached_kernel_mat\"):\n+ replace_kernel_mat = True\n+ kernel_mat = self._cached_kernel_mat\n+ self._cached_kernel_mat = None\n+\n+ deepcopy_method = self.__deepcopy__\n+ self.__deepcopy__ = None\n+ cp = copy.deepcopy(self, memo)\n+\n+ self.__deepcopy__ = deepcopy_method\n+ cp.__deepcopy__ = deepcopy_method\n+\n+ if replace_inv_root:\n+ self._cached_kernel_inv_root = kernel_inv_root\n+ cp._cached_kernel_inv_root = kernel_inv_root\n+\n+ if replace_kernel_mat:\n+ self._cached_kernel_mat = kernel_mat\n+ cp._cached_kernel_mat = kernel_mat\n+\n+ return cp\n", "issue": "[Bug] get_fantasy_model does not work for SGPR with InducingPointKernel\n# \ud83d\udc1b Bug\r\n\r\nNot sure if this should be considered a bug or a feature request, but gpytorch's implementation of SGPR using the InducingPointKernel kernel seems to not support get_fantasy_model.\r\n\r\n## To reproduce\r\nI am including the smallest mwe (or should I say mnwe) here. Note that I get the same behaviour by taking the [example tutorial for SGPR](https://gpytorch.readthedocs.io/en/latest/examples/05_Scalable_GP_Regression_Multidimensional/SGPR_Example_CUDA.html) and add a get_fantasy_model added at the end. 
I can post that too if required, but it is longer and might clutter the ticket.\r\n\r\n**Code snippet to reproduce**\r\n```python\r\nimport gpytorch\r\nimport torch\r\nfrom gpytorch.kernels import ScaleKernel, RBFKernel, InducingPointKernel\r\nfrom gpytorch.distributions import MultivariateNormal\r\nfrom gpytorch.likelihoods import GaussianLikelihood\r\nfrom gpytorch.means import ConstantMean\r\n\r\nclass GPRegressionModel(gpytorch.models.ExactGP):\r\n def __init__(self, train_x, train_y, likelihood):\r\n super(GPRegressionModel, self).__init__(train_x, train_y, likelihood)\r\n self.mean_module = ConstantMean()\r\n self.base_covar_module = ScaleKernel(RBFKernel())\r\n self.covar_module = InducingPointKernel(self.base_covar_module, inducing_points=train_x[:500, :], likelihood=likelihood)\r\n\r\n def forward(self, x):\r\n mean_x = self.mean_module(x)\r\n covar_x = self.covar_module(x)\r\n return MultivariateNormal(mean_x, covar_x)\r\n\r\ntrain_X = torch.randn((100,5)).to(\"cpu\")\r\ntrain_y = torch.randn((100)).to(\"cpu\")\r\n\r\nlikelihood = GaussianLikelihood()\r\n\r\nmodel = GPRegressionModel(train_X, train_y, likelihood)\r\nmodel.train()\r\nmodel.eval()\r\n\r\ntest_pred = model(torch.randn((1,5)).to(\"cpu\"))\r\n\r\nmodel = model.get_fantasy_model(torch.randn((1,5)).to(\"cpu\"), torch.randn((1)).to(\"cpu\"))\r\n```\r\n\r\n**Stack trace/error message**\r\n```\r\nTraceback (most recent call last):\r\n File \"mwe_sgpr_fantasy.py\", line 31, in <module>\r\n model = model.get_fantasy_model(torch.randn((1,5)).to(\"cpu\"), torch.randn((1)).to(\"cpu\"))\r\n File \"/home/user/miniconda3/lib/python3.7/site-packages/gpytorch/models/exact_gp.py\", line 173, in get_fantasy_model\r\n new_model = deepcopy(self)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 180, in deepcopy\r\n y = _reconstruct(x, memo, *rv)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 280, in _reconstruct\r\n state = deepcopy(state, memo)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 150, in deepcopy\r\n y = copier(x, memo)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 240, in _deepcopy_dict\r\n y[deepcopy(key, memo)] = deepcopy(value, memo)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 180, in deepcopy\r\n y = _reconstruct(x, memo, *rv)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 306, in _reconstruct\r\n value = deepcopy(value, memo)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 180, in deepcopy\r\n y = _reconstruct(x, memo, *rv)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 280, in _reconstruct\r\n state = deepcopy(state, memo)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 150, in deepcopy\r\n y = copier(x, memo)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 240, in _deepcopy_dict\r\n y[deepcopy(key, memo)] = deepcopy(value, memo)\r\n File \"/home/user/miniconda3/lib/python3.7/copy.py\", line 161, in deepcopy\r\n y = copier(memo)\r\n File \"/home/user/miniconda3/lib/python3.7/site-packages/torch/tensor.py\", line 23, in __deepcopy__\r\n raise RuntimeError(\"Only Tensors created explicitly by the user \"\r\nRuntimeError: Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment\r\n```\r\n\r\n## Expected Behavior\r\n\r\nI would expect a fantasized model to be returned efficiently.\r\n\r\n## System information\r\n\r\n**Please complete the following information:**\r\n- GPyTorch Version 0.3.3\r\n\r\n\r\n- PyTorch Version 
1.1.0\r\n- Ubuntu 18.04\r\n\r\n## Additional context\r\nIt seems that during the update, the `new_model = deepcopy(self)` tries to copy `self._inducing_inv_root` but detects that it is trainable by autograd and balks. I guess gpytorch made this design choice because of the goal of optimizing the inducing points as a hyperparameter, but as a tradeoff it does not allow for efficient updates.\r\n\r\nSo far I tried to replace the inducing points with a non-trainable version by setting `requires_grad` to `False`, but it seems to not help. I would guess that [any of these tensors multiplications](https://github.com/cornellius-gp/gpytorch/blob/master/gpytorch/kernels/inducing_point_kernel.py#L45-L47) in the implementation of `_inducing_inv_root` could end up reactivating autograd, and I am afraid that without more knowledge of gpytorch's internals patching them one-by-one might end up in a long whack-a-mole.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport math\nimport torch\nfrom .kernel import Kernel\nfrom ..lazy import delazify, DiagLazyTensor, MatmulLazyTensor, RootLazyTensor, PsdSumLazyTensor\nfrom ..distributions import MultivariateNormal\nfrom ..mlls import InducingPointKernelAddedLossTerm\nfrom ..utils.cholesky import psd_safe_cholesky\n\n\nclass InducingPointKernel(Kernel):\n def __init__(self, base_kernel, inducing_points, likelihood, active_dims=None):\n super(InducingPointKernel, self).__init__(active_dims=active_dims)\n self.base_kernel = base_kernel\n self.likelihood = likelihood\n\n if inducing_points.ndimension() == 1:\n inducing_points = inducing_points.unsqueeze(-1)\n if inducing_points.ndimension() != 2:\n raise RuntimeError(\"Inducing points should be 2 dimensional\")\n self.register_parameter(name=\"inducing_points\", parameter=torch.nn.Parameter(inducing_points))\n self.register_added_loss_term(\"inducing_point_loss_term\")\n\n def train(self, mode=True):\n if hasattr(self, \"_cached_kernel_mat\"):\n del self._cached_kernel_mat\n return super(InducingPointKernel, self).train(mode)\n\n @property\n def _inducing_mat(self):\n if not self.training and hasattr(self, \"_cached_kernel_mat\"):\n return self._cached_kernel_mat\n else:\n res = delazify(self.base_kernel(self.inducing_points, self.inducing_points))\n if not self.training:\n self._cached_kernel_mat = res\n return res\n\n @property\n def _inducing_inv_root(self):\n if not self.training and hasattr(self, \"_cached_kernel_inv_root\"):\n return self._cached_kernel_inv_root\n else:\n chol = psd_safe_cholesky(self._inducing_mat, upper=True)\n eye = torch.eye(chol.size(-1), device=chol.device, dtype=chol.dtype)\n inv_root = torch.triangular_solve(eye, chol)[0]\n\n res = inv_root\n if not self.training:\n self._cached_kernel_inv_root = res\n return res\n\n def _get_covariance(self, x1, x2):\n k_ux1 = delazify(self.base_kernel(x1, self.inducing_points))\n if torch.equal(x1, x2):\n covar = RootLazyTensor(k_ux1.matmul(self._inducing_inv_root))\n\n # Diagonal correction for predictive posterior\n correction = (self.base_kernel(x1, x2, diag=True) - covar.diag()).clamp(0, math.inf)\n covar = PsdSumLazyTensor(covar, DiagLazyTensor(correction))\n else:\n k_ux2 = delazify(self.base_kernel(x2, self.inducing_points))\n covar = MatmulLazyTensor(\n k_ux1.matmul(self._inducing_inv_root), k_ux2.matmul(self._inducing_inv_root).transpose(-1, -2)\n )\n\n return covar\n\n def _covar_diag(self, inputs):\n if inputs.ndimension() == 1:\n inputs = inputs.unsqueeze(1)\n\n # Get diagonal of covar\n covar_diag = 
delazify(self.base_kernel(inputs, diag=True))\n return DiagLazyTensor(covar_diag)\n\n def forward(self, x1, x2, diag=False, **kwargs):\n covar = self._get_covariance(x1, x2)\n\n if self.training:\n if not torch.equal(x1, x2):\n raise RuntimeError(\"x1 should equal x2 in training mode\")\n zero_mean = torch.zeros_like(x1.select(-1, 0))\n new_added_loss_term = InducingPointKernelAddedLossTerm(\n MultivariateNormal(zero_mean, self._covar_diag(x1)),\n MultivariateNormal(zero_mean, covar),\n self.likelihood,\n )\n self.update_added_loss_term(\"inducing_point_loss_term\", new_added_loss_term)\n\n if diag:\n return covar.diag()\n else:\n return covar\n\n def num_outputs_per_input(self, x1, x2):\n return self.base_kernel.num_outputs_per_input(x1, x2)\n", "path": "gpytorch/kernels/inducing_point_kernel.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport math\nimport torch\nimport copy\nfrom .kernel import Kernel\nfrom ..lazy import delazify, DiagLazyTensor, MatmulLazyTensor, RootLazyTensor, PsdSumLazyTensor\nfrom ..distributions import MultivariateNormal\nfrom ..mlls import InducingPointKernelAddedLossTerm\nfrom ..utils.cholesky import psd_safe_cholesky\n\n\nclass InducingPointKernel(Kernel):\n def __init__(self, base_kernel, inducing_points, likelihood, active_dims=None):\n super(InducingPointKernel, self).__init__(active_dims=active_dims)\n self.base_kernel = base_kernel\n self.likelihood = likelihood\n\n if inducing_points.ndimension() == 1:\n inducing_points = inducing_points.unsqueeze(-1)\n if inducing_points.ndimension() != 2:\n raise RuntimeError(\"Inducing points should be 2 dimensional\")\n self.register_parameter(name=\"inducing_points\", parameter=torch.nn.Parameter(inducing_points))\n self.register_added_loss_term(\"inducing_point_loss_term\")\n\n def train(self, mode=True):\n if hasattr(self, \"_cached_kernel_mat\"):\n del self._cached_kernel_mat\n return super(InducingPointKernel, self).train(mode)\n\n @property\n def _inducing_mat(self):\n if not self.training and hasattr(self, \"_cached_kernel_mat\"):\n return self._cached_kernel_mat\n else:\n res = delazify(self.base_kernel(self.inducing_points, self.inducing_points))\n if not self.training:\n self._cached_kernel_mat = res\n return res\n\n @property\n def _inducing_inv_root(self):\n if not self.training and hasattr(self, \"_cached_kernel_inv_root\"):\n return self._cached_kernel_inv_root\n else:\n chol = psd_safe_cholesky(self._inducing_mat, upper=True)\n eye = torch.eye(chol.size(-1), device=chol.device, dtype=chol.dtype)\n inv_root = torch.triangular_solve(eye, chol)[0]\n\n res = inv_root\n if not self.training:\n self._cached_kernel_inv_root = res\n return res\n\n def _get_covariance(self, x1, x2):\n k_ux1 = delazify(self.base_kernel(x1, self.inducing_points))\n if torch.equal(x1, x2):\n covar = RootLazyTensor(k_ux1.matmul(self._inducing_inv_root))\n\n # Diagonal correction for predictive posterior\n correction = (self.base_kernel(x1, x2, diag=True) - covar.diag()).clamp(0, math.inf)\n covar = PsdSumLazyTensor(covar, DiagLazyTensor(correction))\n else:\n k_ux2 = delazify(self.base_kernel(x2, self.inducing_points))\n covar = MatmulLazyTensor(\n k_ux1.matmul(self._inducing_inv_root), k_ux2.matmul(self._inducing_inv_root).transpose(-1, -2)\n )\n\n return covar\n\n def _covar_diag(self, inputs):\n if inputs.ndimension() == 1:\n inputs = inputs.unsqueeze(1)\n\n # Get diagonal of covar\n covar_diag = delazify(self.base_kernel(inputs, diag=True))\n return DiagLazyTensor(covar_diag)\n\n def forward(self, x1, x2, 
diag=False, **kwargs):\n covar = self._get_covariance(x1, x2)\n\n if self.training:\n if not torch.equal(x1, x2):\n raise RuntimeError(\"x1 should equal x2 in training mode\")\n zero_mean = torch.zeros_like(x1.select(-1, 0))\n new_added_loss_term = InducingPointKernelAddedLossTerm(\n MultivariateNormal(zero_mean, self._covar_diag(x1)),\n MultivariateNormal(zero_mean, covar),\n self.likelihood,\n )\n self.update_added_loss_term(\"inducing_point_loss_term\", new_added_loss_term)\n\n if diag:\n return covar.diag()\n else:\n return covar\n\n def num_outputs_per_input(self, x1, x2):\n return self.base_kernel.num_outputs_per_input(x1, x2)\n\n def __deepcopy__(self, memo):\n replace_inv_root = False\n replace_kernel_mat = False\n\n if hasattr(self, \"_cached_kernel_inv_root\"):\n replace_inv_root = True\n kernel_inv_root = self._cached_kernel_inv_root\n self._cached_kernel_inv_root = None\n if hasattr(self, \"_cached_kernel_mat\"):\n replace_kernel_mat = True\n kernel_mat = self._cached_kernel_mat\n self._cached_kernel_mat = None\n\n deepcopy_method = self.__deepcopy__\n self.__deepcopy__ = None\n cp = copy.deepcopy(self, memo)\n\n self.__deepcopy__ = deepcopy_method\n cp.__deepcopy__ = deepcopy_method\n\n if replace_inv_root:\n self._cached_kernel_inv_root = kernel_inv_root\n cp._cached_kernel_inv_root = kernel_inv_root\n\n if replace_kernel_mat:\n self._cached_kernel_mat = kernel_mat\n cp._cached_kernel_mat = kernel_mat\n\n return cp\n", "path": "gpytorch/kernels/inducing_point_kernel.py"}]} | 2,729 | 400 |
gh_patches_debug_36193 | rasdani/github-patches | git_diff | microsoft__ptvsd-552 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Args passed to the user script on 'start without debugging' contain ptvsd args
## Environment data
- PTVSD version: Master
- OS and version: Windows 10
- Python version (& distribution if applicable, e.g. Anaconda): Any
- Using VS Code or Visual Studio: VSC
## Actual behavior
```
['c:\\Users\\kanadig\\.vscode\\extensions\\ms-python.python-2018.6.0\\pythonFiles\\experimental\\ptvsd\\ptvsd\\__main__.py', '--nodebug', '--host', 'localhost', '--port', '51225', 'c:\\scratch\\test.py', '--one', '--two', '--three']
```
## Expected behavior
```
['c:\\scratch\\test.py', '--one', '--two', '--three']
```
## Steps to reproduce:
1. Create a script file with this content:
```python
import sys
print(sys.argv)
```
2. Add `args` to python experimental launch configuration:
```json
{
"name": "PyExp: Current File",
"type": "pythonExperimental",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"args": ["--one", "--two", "--three"]
}
```
3. Run using **F5** and **Ctrl+F5**; the output should be the same.
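
For reference, a rough sketch of the argv rewriting the no-debug launcher would need so the user script only sees its own arguments; the helper name is made up, but the `'--'` separator matches the one `parse_args` appends in `ptvsd/__main__.py` below:

```python
import sys

def strip_launcher_args(filename, extra):
    """Keep only the user script's arguments, dropping the pydevd/ptvsd
    options that precede the '--' separator."""
    extra = list(extra)
    if '--' in extra:
        extra = extra[extra.index('--') + 1:]
    return [filename] + extra

# What sys.argv should look like by the time the user script runs:
sys.argv[:] = strip_launcher_args(
    'c:\\scratch\\test.py',
    ['--qt-support=auto', '--', '--one', '--two', '--three'],
)
print(sys.argv)  # ['c:\\scratch\\test.py', '--one', '--two', '--three']
```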
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ptvsd/__main__.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License. See LICENSE in the project root
3 # for license information.
4
5 import argparse
6 import os.path
7 import sys
8
9 from ptvsd._local import debug_main, run_main
10 from ptvsd.socket import Address
11 from ptvsd.version import __version__, __author__ # noqa
12
13
14 ##################################
15 # the script
16
17 """
18 For the PyDevd CLI handling see:
19
20 https://github.com/fabioz/PyDev.Debugger/blob/master/_pydevd_bundle/pydevd_command_line_handling.py
21 https://github.com/fabioz/PyDev.Debugger/blob/master/pydevd.py#L1450 (main func)
22 """ # noqa
23
24 PYDEVD_OPTS = {
25 '--file',
26 '--client',
27 #'--port',
28 '--vm_type',
29 }
30
31 PYDEVD_FLAGS = {
32 '--DEBUG',
33 '--DEBUG_RECORD_SOCKET_READS',
34 '--cmd-line',
35 '--module',
36 '--multiproc',
37 '--multiprocess',
38 '--print-in-debugger-startup',
39 '--save-signatures',
40 '--save-threading',
41 '--save-asyncio',
42 '--server',
43 '--qt-support=auto',
44 }
45
46 USAGE = """
47 {0} [-h] [-V] [--nodebug] [--host HOST | --server-host HOST] --port PORT -m MODULE [arg ...]
48 {0} [-h] [-V] [--nodebug] [--host HOST | --server-host HOST] --port PORT FILENAME [arg ...]
49 """ # noqa
50
51
52 PYDEVD_DEFAULTS = {
53 '--qt-support=auto',
54 }
55
56
57 def _set_pydevd_defaults(pydevd_args):
58 args_to_append = []
59 for arg in PYDEVD_DEFAULTS:
60 if arg not in pydevd_args:
61 args_to_append.append(arg)
62 return pydevd_args + args_to_append
63
64
65 def parse_args(argv=None):
66 """Return the parsed args to use in main()."""
67 if argv is None:
68 argv = sys.argv
69 prog = argv[0]
70 if prog == __file__:
71 prog = '{} -m ptvsd'.format(os.path.basename(sys.executable))
72 else:
73 prog = argv[0]
74 argv = argv[1:]
75
76 supported, pydevd, script = _group_args(argv)
77 args = _parse_args(prog, supported)
78 pydevd = _set_pydevd_defaults(pydevd)
79 extra = pydevd + ['--']
80 if script:
81 extra += script
82 return args, extra
83
84
85 def _group_args(argv):
86 supported = []
87 pydevd = []
88 script = []
89
90 try:
91 pos = argv.index('--')
92 except ValueError:
93 script = []
94 else:
95 script = argv[pos + 1:]
96 argv = argv[:pos]
97
98 for arg in argv:
99 if arg == '-h' or arg == '--help':
100 return argv, [], script
101
102 gottarget = False
103 skip = 0
104 for i in range(len(argv)):
105 if skip:
106 skip -= 1
107 continue
108
109 arg = argv[i]
110 try:
111 nextarg = argv[i + 1]
112 except IndexError:
113 nextarg = None
114
115 # TODO: Deprecate the PyDevd arg support.
116 # PyDevd support
117 if gottarget:
118 script = argv[i:] + script
119 break
120 if arg == '--client':
121 arg = '--host'
122 elif arg == '--file':
123 if nextarg is None: # The filename is missing...
124 pydevd.append(arg)
125 continue # This will get handled later.
126 if nextarg.endswith(':') and '--module' in pydevd:
127 pydevd.remove('--module')
128 arg = '-m'
129 argv[i + 1] = nextarg = nextarg[:-1]
130 else:
131 arg = nextarg
132 skip += 1
133
134 if arg in PYDEVD_OPTS:
135 pydevd.append(arg)
136 if nextarg is not None:
137 pydevd.append(nextarg)
138 skip += 1
139 elif arg in PYDEVD_FLAGS:
140 pydevd.append(arg)
141 elif arg == '--nodebug':
142 supported.append(arg)
143
144 # ptvsd support
145 elif arg in ('--host', '--server-host', '--port', '-m'):
146 if arg == '-m':
147 gottarget = True
148 supported.append(arg)
149 if nextarg is not None:
150 supported.append(nextarg)
151 skip += 1
152 elif arg in ('--single-session',):
153 supported.append(arg)
154 elif not arg.startswith('-'):
155 supported.append(arg)
156 gottarget = True
157
158 # unsupported arg
159 else:
160 supported.append(arg)
161 break
162
163 return supported, pydevd, script
164
165
166 def _parse_args(prog, argv):
167 parser = argparse.ArgumentParser(
168 prog=prog,
169 usage=USAGE.format(prog),
170 )
171 parser.add_argument('--nodebug', action='store_true')
172 host = parser.add_mutually_exclusive_group()
173 host.add_argument('--host')
174 host.add_argument('--server-host')
175 parser.add_argument('--port', type=int, required=True)
176
177 target = parser.add_mutually_exclusive_group(required=True)
178 target.add_argument('-m', dest='module')
179 target.add_argument('filename', nargs='?')
180
181 parser.add_argument('--single-session', action='store_true')
182 parser.add_argument('-V', '--version', action='version')
183 parser.version = __version__
184
185 args = parser.parse_args(argv)
186 ns = vars(args)
187
188 serverhost = ns.pop('server_host', None)
189 clienthost = ns.pop('host', None)
190 if serverhost:
191 args.address = Address.as_server(serverhost, ns.pop('port'))
192 elif not clienthost:
193 if args.nodebug:
194 args.address = Address.as_client(clienthost, ns.pop('port'))
195 else:
196 args.address = Address.as_server(clienthost, ns.pop('port'))
197 else:
198 args.address = Address.as_client(clienthost, ns.pop('port'))
199
200 module = ns.pop('module')
201 filename = ns.pop('filename')
202 if module is None:
203 args.name = filename
204 args.kind = 'script'
205 else:
206 args.name = module
207 args.kind = 'module'
208 #if argv[-1] != args.name or (module and argv[-1] != '-m'):
209 # parser.error('script/module must be last arg')
210
211 return args
212
213
214 def main(addr, name, kind, extra=(), nodebug=False, **kwargs):
215 if nodebug:
216 run_main(addr, name, kind, *extra, **kwargs)
217 else:
218 debug_main(addr, name, kind, *extra, **kwargs)
219
220
221 if __name__ == '__main__':
222 args, extra = parse_args()
223 main(args.address, args.name, args.kind, extra, nodebug=args.nodebug,
224 singlesession=args.single_session)
225
```
Path: `ptvsd/_local.py`
Content:
```
1 import sys
2
3 import pydevd
4
5 from ptvsd.pydevd_hooks import install
6 from ptvsd.runner import run as no_debug_runner
7 from ptvsd.socket import Address
8
9
10 ########################
11 # high-level functions
12
13 def debug_main(address, name, kind, *extra, **kwargs):
14 if kind == 'module':
15 run_module(address, name, *extra, **kwargs)
16 else:
17 run_file(address, name, *extra, **kwargs)
18
19
20 def run_main(address, name, kind, *extra, **kwargs):
21 no_debug_runner(address, name, kind == 'module', *extra, **kwargs)
22
23
24 ########################
25 # low-level functions
26
27 def run_module(address, modname, *extra, **kwargs):
28 """Run pydevd for the given module."""
29 addr = Address.from_raw(address)
30 if not addr.isserver:
31 kwargs['singlesession'] = True
32 run = kwargs.pop('_run', _run)
33 prog = kwargs.pop('_prog', sys.argv[0])
34 filename = modname + ':'
35 argv = _run_argv(addr, filename, extra, _prog=prog)
36 argv.insert(argv.index('--file'), '--module')
37 run(argv, addr, **kwargs)
38
39
40 def run_file(address, filename, *extra, **kwargs):
41 """Run pydevd for the given Python file."""
42 addr = Address.from_raw(address)
43 if not addr.isserver:
44 kwargs['singlesession'] = True
45 run = kwargs.pop('_run', _run)
46 prog = kwargs.pop('_prog', sys.argv[0])
47 argv = _run_argv(addr, filename, extra, _prog=prog)
48 run(argv, addr, **kwargs)
49
50
51 def _run_argv(address, filename, extra, _prog=sys.argv[0]):
52 """Convert the given values to an argv that pydevd.main() supports."""
53 if '--' in extra:
54 pydevd = list(extra[:extra.index('--')])
55 extra = list(extra[len(pydevd) + 1:])
56 else:
57 pydevd = []
58 extra = list(extra)
59
60 host, port = address
61 argv = [
62 _prog,
63 '--port', str(port),
64 ]
65 if not address.isserver:
66 argv.extend([
67 '--client', host or 'localhost',
68 ])
69 return argv + pydevd + [
70 '--file', filename,
71 ] + extra
72
73
74 def _run(argv, addr, _pydevd=pydevd, _install=install, **kwargs):
75 """Start pydevd with the given commandline args."""
76 #print(' '.join(argv))
77
78 # Pydevd assumes that the "__main__" module is the "pydevd" module
79 # and does some tricky stuff under that assumption. For example,
80 # when the debugger starts up it calls save_main_module()
81 # (in pydevd_bundle/pydevd_utils.py). That function explicitly sets
82 # sys.modules["pydevd"] to sys.modules["__main__"] and then sets
83 # the __main__ module to a new one. This makes some sense since
84 # it gives the debugged script a fresh __main__ module.
85 #
86 # This complicates things for us since we are running a different
87 # file (i.e. this one) as the __main__ module. Consequently,
88 # sys.modules["pydevd"] gets set to ptvsd/__main__.py. Subsequent
89 # imports of the "pydevd" module then return the wrong module. We
90 # work around this by avoiding lazy imports of the "pydevd" module.
91 # We also replace the __main__ module with the "pydevd" module here.
92 if sys.modules['__main__'].__file__ != _pydevd.__file__:
93 sys.modules['__main___orig'] = sys.modules['__main__']
94 sys.modules['__main__'] = _pydevd
95
96 daemon = _install(_pydevd, addr, **kwargs)
97 sys.argv[:] = argv
98 try:
99 _pydevd.main()
100 except SystemExit as ex:
101 daemon.exitcode = int(ex.code)
102 raise
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ptvsd/__main__.py b/ptvsd/__main__.py
--- a/ptvsd/__main__.py
+++ b/ptvsd/__main__.py
@@ -49,19 +49,6 @@
""" # noqa
-PYDEVD_DEFAULTS = {
- '--qt-support=auto',
-}
-
-
-def _set_pydevd_defaults(pydevd_args):
- args_to_append = []
- for arg in PYDEVD_DEFAULTS:
- if arg not in pydevd_args:
- args_to_append.append(arg)
- return pydevd_args + args_to_append
-
-
def parse_args(argv=None):
"""Return the parsed args to use in main()."""
if argv is None:
@@ -75,7 +62,7 @@
supported, pydevd, script = _group_args(argv)
args = _parse_args(prog, supported)
- pydevd = _set_pydevd_defaults(pydevd)
+ # '--' is used in _run_args to extract pydevd specific args
extra = pydevd + ['--']
if script:
extra += script
diff --git a/ptvsd/_local.py b/ptvsd/_local.py
--- a/ptvsd/_local.py
+++ b/ptvsd/_local.py
@@ -7,6 +7,19 @@
from ptvsd.socket import Address
+PYDEVD_DEFAULTS = {
+ '--qt-support=auto',
+}
+
+
+def _set_pydevd_defaults(pydevd_args):
+ args_to_append = []
+ for arg in PYDEVD_DEFAULTS:
+ if arg not in pydevd_args:
+ args_to_append.append(arg)
+ return pydevd_args + args_to_append
+
+
########################
# high-level functions
@@ -18,7 +31,10 @@
def run_main(address, name, kind, *extra, **kwargs):
- no_debug_runner(address, name, kind == 'module', *extra, **kwargs)
+ addr = Address.from_raw(address)
+ sys.argv[:] = _run_main_argv(name, extra)
+ runner = kwargs.pop('_runner', no_debug_runner)
+ runner(addr, name, kind == 'module', *extra, **kwargs)
########################
@@ -57,6 +73,7 @@
pydevd = []
extra = list(extra)
+ pydevd = _set_pydevd_defaults(pydevd)
host, port = address
argv = [
_prog,
@@ -71,6 +88,15 @@
] + extra
+def _run_main_argv(filename, extra):
+ if '--' in extra:
+ pydevd = list(extra[:extra.index('--')])
+ extra = list(extra[len(pydevd) + 1:])
+ else:
+ extra = list(extra)
+ return [filename] + extra
+
+
def _run(argv, addr, _pydevd=pydevd, _install=install, **kwargs):
"""Start pydevd with the given commandline args."""
#print(' '.join(argv))
| {"golden_diff": "diff --git a/ptvsd/__main__.py b/ptvsd/__main__.py\n--- a/ptvsd/__main__.py\n+++ b/ptvsd/__main__.py\n@@ -49,19 +49,6 @@\n \"\"\" # noqa\n \n \n-PYDEVD_DEFAULTS = {\n- '--qt-support=auto',\n-}\n-\n-\n-def _set_pydevd_defaults(pydevd_args):\n- args_to_append = []\n- for arg in PYDEVD_DEFAULTS:\n- if arg not in pydevd_args:\n- args_to_append.append(arg)\n- return pydevd_args + args_to_append\n-\n-\n def parse_args(argv=None):\n \"\"\"Return the parsed args to use in main().\"\"\"\n if argv is None:\n@@ -75,7 +62,7 @@\n \n supported, pydevd, script = _group_args(argv)\n args = _parse_args(prog, supported)\n- pydevd = _set_pydevd_defaults(pydevd)\n+ # '--' is used in _run_args to extract pydevd specific args\n extra = pydevd + ['--']\n if script:\n extra += script\ndiff --git a/ptvsd/_local.py b/ptvsd/_local.py\n--- a/ptvsd/_local.py\n+++ b/ptvsd/_local.py\n@@ -7,6 +7,19 @@\n from ptvsd.socket import Address\n \n \n+PYDEVD_DEFAULTS = {\n+ '--qt-support=auto',\n+}\n+\n+\n+def _set_pydevd_defaults(pydevd_args):\n+ args_to_append = []\n+ for arg in PYDEVD_DEFAULTS:\n+ if arg not in pydevd_args:\n+ args_to_append.append(arg)\n+ return pydevd_args + args_to_append\n+\n+\n ########################\n # high-level functions\n \n@@ -18,7 +31,10 @@\n \n \n def run_main(address, name, kind, *extra, **kwargs):\n- no_debug_runner(address, name, kind == 'module', *extra, **kwargs)\n+ addr = Address.from_raw(address)\n+ sys.argv[:] = _run_main_argv(name, extra)\n+ runner = kwargs.pop('_runner', no_debug_runner)\n+ runner(addr, name, kind == 'module', *extra, **kwargs)\n \n \n ########################\n@@ -57,6 +73,7 @@\n pydevd = []\n extra = list(extra)\n \n+ pydevd = _set_pydevd_defaults(pydevd)\n host, port = address\n argv = [\n _prog,\n@@ -71,6 +88,15 @@\n ] + extra\n \n \n+def _run_main_argv(filename, extra):\n+ if '--' in extra:\n+ pydevd = list(extra[:extra.index('--')])\n+ extra = list(extra[len(pydevd) + 1:])\n+ else:\n+ extra = list(extra)\n+ return [filename] + extra\n+\n+\n def _run(argv, addr, _pydevd=pydevd, _install=install, **kwargs):\n \"\"\"Start pydevd with the given commandline args.\"\"\"\n #print(' '.join(argv))\n", "issue": "Args passed to user script on 'start without debugging' contains ptvsd args\n## Environment data\r\n\r\n- PTVSD version: Master\r\n- OS and version: Windows 10\r\n- Python version (& distribution if applicable, e.g. Anaconda): Any\r\n- Using VS Code or Visual Studio: VSC\r\n\r\n## Actual behavior\r\n\r\n```\r\n['c:\\\\Users\\\\kanadig\\\\.vscode\\\\extensions\\\\ms-python.python-2018.6.0\\\\pythonFiles\\\\experimental\\\\ptvsd\\\\ptvsd\\\\__main__.py', '--nodebug', '--host', 'localhost', '--port', '51225', 'c:\\\\scratch\\\\test.py', '--one', '--two', '--three']\r\n```\r\n\r\n## Expected behavior\r\n\r\n```\r\n['c:\\\\scratch\\\\test.py', '--one', '--two', '--three']\r\n```\r\n\r\n## Steps to reproduce:\r\n1. Create a script file with this content:\r\n```python\r\nimport sys\r\nprint(sys.argv)\r\n```\r\n2. Add `args` to python experimental launch configuration:\r\n```json\r\n{\r\n \"name\": \"PyExp: Current File\",\r\n \"type\": \"pythonExperimental\",\r\n \"request\": \"launch\",\r\n \"program\": \"${file}\",\r\n \"console\": \"integratedTerminal\",\r\n \"args\": [\"--one\", \"--two\", \"--three\"]\r\n}\r\n```\r\n2. Run using **F5** and **Ctrl+F5**, the output should be same.\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See LICENSE in the project root\n# for license information.\n\nimport argparse\nimport os.path\nimport sys\n\nfrom ptvsd._local import debug_main, run_main\nfrom ptvsd.socket import Address\nfrom ptvsd.version import __version__, __author__ # noqa\n\n\n##################################\n# the script\n\n\"\"\"\nFor the PyDevd CLI handling see:\n\n https://github.com/fabioz/PyDev.Debugger/blob/master/_pydevd_bundle/pydevd_command_line_handling.py\n https://github.com/fabioz/PyDev.Debugger/blob/master/pydevd.py#L1450 (main func)\n\"\"\" # noqa\n\nPYDEVD_OPTS = {\n '--file',\n '--client',\n #'--port',\n '--vm_type',\n}\n\nPYDEVD_FLAGS = {\n '--DEBUG',\n '--DEBUG_RECORD_SOCKET_READS',\n '--cmd-line',\n '--module',\n '--multiproc',\n '--multiprocess',\n '--print-in-debugger-startup',\n '--save-signatures',\n '--save-threading',\n '--save-asyncio',\n '--server',\n '--qt-support=auto',\n}\n\nUSAGE = \"\"\"\n {0} [-h] [-V] [--nodebug] [--host HOST | --server-host HOST] --port PORT -m MODULE [arg ...]\n {0} [-h] [-V] [--nodebug] [--host HOST | --server-host HOST] --port PORT FILENAME [arg ...]\n\"\"\" # noqa\n\n\nPYDEVD_DEFAULTS = {\n '--qt-support=auto',\n}\n\n\ndef _set_pydevd_defaults(pydevd_args):\n args_to_append = []\n for arg in PYDEVD_DEFAULTS:\n if arg not in pydevd_args:\n args_to_append.append(arg)\n return pydevd_args + args_to_append\n\n\ndef parse_args(argv=None):\n \"\"\"Return the parsed args to use in main().\"\"\"\n if argv is None:\n argv = sys.argv\n prog = argv[0]\n if prog == __file__:\n prog = '{} -m ptvsd'.format(os.path.basename(sys.executable))\n else:\n prog = argv[0]\n argv = argv[1:]\n\n supported, pydevd, script = _group_args(argv)\n args = _parse_args(prog, supported)\n pydevd = _set_pydevd_defaults(pydevd)\n extra = pydevd + ['--']\n if script:\n extra += script\n return args, extra\n\n\ndef _group_args(argv):\n supported = []\n pydevd = []\n script = []\n\n try:\n pos = argv.index('--')\n except ValueError:\n script = []\n else:\n script = argv[pos + 1:]\n argv = argv[:pos]\n\n for arg in argv:\n if arg == '-h' or arg == '--help':\n return argv, [], script\n\n gottarget = False\n skip = 0\n for i in range(len(argv)):\n if skip:\n skip -= 1\n continue\n\n arg = argv[i]\n try:\n nextarg = argv[i + 1]\n except IndexError:\n nextarg = None\n\n # TODO: Deprecate the PyDevd arg support.\n # PyDevd support\n if gottarget:\n script = argv[i:] + script\n break\n if arg == '--client':\n arg = '--host'\n elif arg == '--file':\n if nextarg is None: # The filename is missing...\n pydevd.append(arg)\n continue # This will get handled later.\n if nextarg.endswith(':') and '--module' in pydevd:\n pydevd.remove('--module')\n arg = '-m'\n argv[i + 1] = nextarg = nextarg[:-1]\n else:\n arg = nextarg\n skip += 1\n\n if arg in PYDEVD_OPTS:\n pydevd.append(arg)\n if nextarg is not None:\n pydevd.append(nextarg)\n skip += 1\n elif arg in PYDEVD_FLAGS:\n pydevd.append(arg)\n elif arg == '--nodebug':\n supported.append(arg)\n\n # ptvsd support\n elif arg in ('--host', '--server-host', '--port', '-m'):\n if arg == '-m':\n gottarget = True\n supported.append(arg)\n if nextarg is not None:\n supported.append(nextarg)\n skip += 1\n elif arg in ('--single-session',):\n supported.append(arg)\n elif not arg.startswith('-'):\n supported.append(arg)\n gottarget = True\n\n # unsupported arg\n else:\n supported.append(arg)\n break\n\n return supported, pydevd, script\n\n\ndef _parse_args(prog, argv):\n parser = argparse.ArgumentParser(\n prog=prog,\n usage=USAGE.format(prog),\n )\n 
parser.add_argument('--nodebug', action='store_true')\n host = parser.add_mutually_exclusive_group()\n host.add_argument('--host')\n host.add_argument('--server-host')\n parser.add_argument('--port', type=int, required=True)\n\n target = parser.add_mutually_exclusive_group(required=True)\n target.add_argument('-m', dest='module')\n target.add_argument('filename', nargs='?')\n\n parser.add_argument('--single-session', action='store_true')\n parser.add_argument('-V', '--version', action='version')\n parser.version = __version__\n\n args = parser.parse_args(argv)\n ns = vars(args)\n\n serverhost = ns.pop('server_host', None)\n clienthost = ns.pop('host', None)\n if serverhost:\n args.address = Address.as_server(serverhost, ns.pop('port'))\n elif not clienthost:\n if args.nodebug:\n args.address = Address.as_client(clienthost, ns.pop('port'))\n else:\n args.address = Address.as_server(clienthost, ns.pop('port'))\n else:\n args.address = Address.as_client(clienthost, ns.pop('port'))\n\n module = ns.pop('module')\n filename = ns.pop('filename')\n if module is None:\n args.name = filename\n args.kind = 'script'\n else:\n args.name = module\n args.kind = 'module'\n #if argv[-1] != args.name or (module and argv[-1] != '-m'):\n # parser.error('script/module must be last arg')\n\n return args\n\n\ndef main(addr, name, kind, extra=(), nodebug=False, **kwargs):\n if nodebug:\n run_main(addr, name, kind, *extra, **kwargs)\n else:\n debug_main(addr, name, kind, *extra, **kwargs)\n\n\nif __name__ == '__main__':\n args, extra = parse_args()\n main(args.address, args.name, args.kind, extra, nodebug=args.nodebug,\n singlesession=args.single_session)\n", "path": "ptvsd/__main__.py"}, {"content": "import sys\n\nimport pydevd\n\nfrom ptvsd.pydevd_hooks import install\nfrom ptvsd.runner import run as no_debug_runner\nfrom ptvsd.socket import Address\n\n\n########################\n# high-level functions\n\ndef debug_main(address, name, kind, *extra, **kwargs):\n if kind == 'module':\n run_module(address, name, *extra, **kwargs)\n else:\n run_file(address, name, *extra, **kwargs)\n\n\ndef run_main(address, name, kind, *extra, **kwargs):\n no_debug_runner(address, name, kind == 'module', *extra, **kwargs)\n\n\n########################\n# low-level functions\n\ndef run_module(address, modname, *extra, **kwargs):\n \"\"\"Run pydevd for the given module.\"\"\"\n addr = Address.from_raw(address)\n if not addr.isserver:\n kwargs['singlesession'] = True\n run = kwargs.pop('_run', _run)\n prog = kwargs.pop('_prog', sys.argv[0])\n filename = modname + ':'\n argv = _run_argv(addr, filename, extra, _prog=prog)\n argv.insert(argv.index('--file'), '--module')\n run(argv, addr, **kwargs)\n\n\ndef run_file(address, filename, *extra, **kwargs):\n \"\"\"Run pydevd for the given Python file.\"\"\"\n addr = Address.from_raw(address)\n if not addr.isserver:\n kwargs['singlesession'] = True\n run = kwargs.pop('_run', _run)\n prog = kwargs.pop('_prog', sys.argv[0])\n argv = _run_argv(addr, filename, extra, _prog=prog)\n run(argv, addr, **kwargs)\n\n\ndef _run_argv(address, filename, extra, _prog=sys.argv[0]):\n \"\"\"Convert the given values to an argv that pydevd.main() supports.\"\"\"\n if '--' in extra:\n pydevd = list(extra[:extra.index('--')])\n extra = list(extra[len(pydevd) + 1:])\n else:\n pydevd = []\n extra = list(extra)\n\n host, port = address\n argv = [\n _prog,\n '--port', str(port),\n ]\n if not address.isserver:\n argv.extend([\n '--client', host or 'localhost',\n ])\n return argv + pydevd + [\n '--file', filename,\n ] 
+ extra\n\n\ndef _run(argv, addr, _pydevd=pydevd, _install=install, **kwargs):\n \"\"\"Start pydevd with the given commandline args.\"\"\"\n #print(' '.join(argv))\n\n # Pydevd assumes that the \"__main__\" module is the \"pydevd\" module\n # and does some tricky stuff under that assumption. For example,\n # when the debugger starts up it calls save_main_module()\n # (in pydevd_bundle/pydevd_utils.py). That function explicitly sets\n # sys.modules[\"pydevd\"] to sys.modules[\"__main__\"] and then sets\n # the __main__ module to a new one. This makes some sense since\n # it gives the debugged script a fresh __main__ module.\n #\n # This complicates things for us since we are running a different\n # file (i.e. this one) as the __main__ module. Consequently,\n # sys.modules[\"pydevd\"] gets set to ptvsd/__main__.py. Subsequent\n # imports of the \"pydevd\" module then return the wrong module. We\n # work around this by avoiding lazy imports of the \"pydevd\" module.\n # We also replace the __main__ module with the \"pydevd\" module here.\n if sys.modules['__main__'].__file__ != _pydevd.__file__:\n sys.modules['__main___orig'] = sys.modules['__main__']\n sys.modules['__main__'] = _pydevd\n\n daemon = _install(_pydevd, addr, **kwargs)\n sys.argv[:] = argv\n try:\n _pydevd.main()\n except SystemExit as ex:\n daemon.exitcode = int(ex.code)\n raise\n", "path": "ptvsd/_local.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nimport argparse\nimport os.path\nimport sys\n\nfrom ptvsd._local import debug_main, run_main\nfrom ptvsd.socket import Address\nfrom ptvsd.version import __version__, __author__ # noqa\n\n\n##################################\n# the script\n\n\"\"\"\nFor the PyDevd CLI handling see:\n\n https://github.com/fabioz/PyDev.Debugger/blob/master/_pydevd_bundle/pydevd_command_line_handling.py\n https://github.com/fabioz/PyDev.Debugger/blob/master/pydevd.py#L1450 (main func)\n\"\"\" # noqa\n\nPYDEVD_OPTS = {\n '--file',\n '--client',\n #'--port',\n '--vm_type',\n}\n\nPYDEVD_FLAGS = {\n '--DEBUG',\n '--DEBUG_RECORD_SOCKET_READS',\n '--cmd-line',\n '--module',\n '--multiproc',\n '--multiprocess',\n '--print-in-debugger-startup',\n '--save-signatures',\n '--save-threading',\n '--save-asyncio',\n '--server',\n '--qt-support=auto',\n}\n\nUSAGE = \"\"\"\n {0} [-h] [-V] [--nodebug] [--host HOST | --server-host HOST] --port PORT -m MODULE [arg ...]\n {0} [-h] [-V] [--nodebug] [--host HOST | --server-host HOST] --port PORT FILENAME [arg ...]\n\"\"\" # noqa\n\n\ndef parse_args(argv=None):\n \"\"\"Return the parsed args to use in main().\"\"\"\n if argv is None:\n argv = sys.argv\n prog = argv[0]\n if prog == __file__:\n prog = '{} -m ptvsd'.format(os.path.basename(sys.executable))\n else:\n prog = argv[0]\n argv = argv[1:]\n\n supported, pydevd, script = _group_args(argv)\n args = _parse_args(prog, supported)\n # '--' is used in _run_args to extract pydevd specific args\n extra = pydevd + ['--']\n if script:\n extra += script\n return args, extra\n\n\ndef _group_args(argv):\n supported = []\n pydevd = []\n script = []\n\n try:\n pos = argv.index('--')\n except ValueError:\n script = []\n else:\n script = argv[pos + 1:]\n argv = argv[:pos]\n\n for arg in argv:\n if arg == '-h' or arg == '--help':\n return argv, [], script\n\n gottarget = False\n skip = 0\n for i in range(len(argv)):\n if skip:\n skip -= 1\n continue\n\n arg = argv[i]\n try:\n nextarg = 
argv[i + 1]\n except IndexError:\n nextarg = None\n\n # TODO: Deprecate the PyDevd arg support.\n # PyDevd support\n if gottarget:\n script = argv[i:] + script\n break\n if arg == '--client':\n arg = '--host'\n elif arg == '--file':\n if nextarg is None: # The filename is missing...\n pydevd.append(arg)\n continue # This will get handled later.\n if nextarg.endswith(':') and '--module' in pydevd:\n pydevd.remove('--module')\n arg = '-m'\n argv[i + 1] = nextarg = nextarg[:-1]\n else:\n arg = nextarg\n skip += 1\n\n if arg in PYDEVD_OPTS:\n pydevd.append(arg)\n if nextarg is not None:\n pydevd.append(nextarg)\n skip += 1\n elif arg in PYDEVD_FLAGS:\n pydevd.append(arg)\n elif arg == '--nodebug':\n supported.append(arg)\n\n # ptvsd support\n elif arg in ('--host', '--server-host', '--port', '-m'):\n if arg == '-m':\n gottarget = True\n supported.append(arg)\n if nextarg is not None:\n supported.append(nextarg)\n skip += 1\n elif arg in ('--single-session',):\n supported.append(arg)\n elif not arg.startswith('-'):\n supported.append(arg)\n gottarget = True\n\n # unsupported arg\n else:\n supported.append(arg)\n break\n\n return supported, pydevd, script\n\n\ndef _parse_args(prog, argv):\n parser = argparse.ArgumentParser(\n prog=prog,\n usage=USAGE.format(prog),\n )\n parser.add_argument('--nodebug', action='store_true')\n host = parser.add_mutually_exclusive_group()\n host.add_argument('--host')\n host.add_argument('--server-host')\n parser.add_argument('--port', type=int, required=True)\n\n target = parser.add_mutually_exclusive_group(required=True)\n target.add_argument('-m', dest='module')\n target.add_argument('filename', nargs='?')\n\n parser.add_argument('--single-session', action='store_true')\n parser.add_argument('-V', '--version', action='version')\n parser.version = __version__\n\n args = parser.parse_args(argv)\n ns = vars(args)\n\n serverhost = ns.pop('server_host', None)\n clienthost = ns.pop('host', None)\n if serverhost:\n args.address = Address.as_server(serverhost, ns.pop('port'))\n elif not clienthost:\n if args.nodebug:\n args.address = Address.as_client(clienthost, ns.pop('port'))\n else:\n args.address = Address.as_server(clienthost, ns.pop('port'))\n else:\n args.address = Address.as_client(clienthost, ns.pop('port'))\n\n module = ns.pop('module')\n filename = ns.pop('filename')\n if module is None:\n args.name = filename\n args.kind = 'script'\n else:\n args.name = module\n args.kind = 'module'\n #if argv[-1] != args.name or (module and argv[-1] != '-m'):\n # parser.error('script/module must be last arg')\n\n return args\n\n\ndef main(addr, name, kind, extra=(), nodebug=False, **kwargs):\n if nodebug:\n run_main(addr, name, kind, *extra, **kwargs)\n else:\n debug_main(addr, name, kind, *extra, **kwargs)\n\n\nif __name__ == '__main__':\n args, extra = parse_args()\n main(args.address, args.name, args.kind, extra, nodebug=args.nodebug,\n singlesession=args.single_session)\n", "path": "ptvsd/__main__.py"}, {"content": "import sys\n\nimport pydevd\n\nfrom ptvsd.pydevd_hooks import install\nfrom ptvsd.runner import run as no_debug_runner\nfrom ptvsd.socket import Address\n\n\nPYDEVD_DEFAULTS = {\n '--qt-support=auto',\n}\n\n\ndef _set_pydevd_defaults(pydevd_args):\n args_to_append = []\n for arg in PYDEVD_DEFAULTS:\n if arg not in pydevd_args:\n args_to_append.append(arg)\n return pydevd_args + args_to_append\n\n\n########################\n# high-level functions\n\ndef debug_main(address, name, kind, *extra, **kwargs):\n if kind == 'module':\n run_module(address, name, 
*extra, **kwargs)\n else:\n run_file(address, name, *extra, **kwargs)\n\n\ndef run_main(address, name, kind, *extra, **kwargs):\n addr = Address.from_raw(address)\n sys.argv[:] = _run_main_argv(name, extra)\n runner = kwargs.pop('_runner', no_debug_runner)\n runner(addr, name, kind == 'module', *extra, **kwargs)\n\n\n########################\n# low-level functions\n\ndef run_module(address, modname, *extra, **kwargs):\n \"\"\"Run pydevd for the given module.\"\"\"\n addr = Address.from_raw(address)\n if not addr.isserver:\n kwargs['singlesession'] = True\n run = kwargs.pop('_run', _run)\n prog = kwargs.pop('_prog', sys.argv[0])\n filename = modname + ':'\n argv = _run_argv(addr, filename, extra, _prog=prog)\n argv.insert(argv.index('--file'), '--module')\n run(argv, addr, **kwargs)\n\n\ndef run_file(address, filename, *extra, **kwargs):\n \"\"\"Run pydevd for the given Python file.\"\"\"\n addr = Address.from_raw(address)\n if not addr.isserver:\n kwargs['singlesession'] = True\n run = kwargs.pop('_run', _run)\n prog = kwargs.pop('_prog', sys.argv[0])\n argv = _run_argv(addr, filename, extra, _prog=prog)\n run(argv, addr, **kwargs)\n\n\ndef _run_argv(address, filename, extra, _prog=sys.argv[0]):\n \"\"\"Convert the given values to an argv that pydevd.main() supports.\"\"\"\n if '--' in extra:\n pydevd = list(extra[:extra.index('--')])\n extra = list(extra[len(pydevd) + 1:])\n else:\n pydevd = []\n extra = list(extra)\n\n pydevd = _set_pydevd_defaults(pydevd)\n host, port = address\n argv = [\n _prog,\n '--port', str(port),\n ]\n if not address.isserver:\n argv.extend([\n '--client', host or 'localhost',\n ])\n return argv + pydevd + [\n '--file', filename,\n ] + extra\n\n\ndef _run_main_argv(filename, extra):\n if '--' in extra:\n pydevd = list(extra[:extra.index('--')])\n extra = list(extra[len(pydevd) + 1:])\n else:\n extra = list(extra)\n return [filename] + extra\n\n\ndef _run(argv, addr, _pydevd=pydevd, _install=install, **kwargs):\n \"\"\"Start pydevd with the given commandline args.\"\"\"\n #print(' '.join(argv))\n\n # Pydevd assumes that the \"__main__\" module is the \"pydevd\" module\n # and does some tricky stuff under that assumption. For example,\n # when the debugger starts up it calls save_main_module()\n # (in pydevd_bundle/pydevd_utils.py). That function explicitly sets\n # sys.modules[\"pydevd\"] to sys.modules[\"__main__\"] and then sets\n # the __main__ module to a new one. This makes some sense since\n # it gives the debugged script a fresh __main__ module.\n #\n # This complicates things for us since we are running a different\n # file (i.e. this one) as the __main__ module. Consequently,\n # sys.modules[\"pydevd\"] gets set to ptvsd/__main__.py. Subsequent\n # imports of the \"pydevd\" module then return the wrong module. We\n # work around this by avoiding lazy imports of the \"pydevd\" module.\n # We also replace the __main__ module with the \"pydevd\" module here.\n if sys.modules['__main__'].__file__ != _pydevd.__file__:\n sys.modules['__main___orig'] = sys.modules['__main__']\n sys.modules['__main__'] = _pydevd\n\n daemon = _install(_pydevd, addr, **kwargs)\n sys.argv[:] = argv\n try:\n _pydevd.main()\n except SystemExit as ex:\n daemon.exitcode = int(ex.code)\n raise\n", "path": "ptvsd/_local.py"}]} | 3,796 | 706 |
gh_patches_debug_15343 | rasdani/github-patches | git_diff | Pylons__pyramid-1131 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No way to add query parameters without a value
I occasionally need to put a hint in the query string for a URL, which is essentially a parameter without a value. This can be important for providing information to JavaScript or as a hint to GA. For example, I may need to use `http://localhost/dashboard?new-user` as the URL when I redirect a new user to the dashboard after completing registration.
Intuitively I expected this to work:
``` python
return HTTPFound(request.route_url('dashboard', _query={'new-user': None}))
```
but that returns `/dashboard?new-user=None` which is not very pretty.
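
A stop-gap that works today, before any change to `urlencode`, is to skip `_query` and append the bare flag by hand. A minimal sketch, assuming the `dashboard` route from the example above:

```python
url = request.route_url('dashboard') + '?new-user'
return HTTPFound(url)
```

It would be much nicer, though, if `_query` could express this directly.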
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyramid/encode.py`
Content:
```
1 from pyramid.compat import (
2 text_type,
3 binary_type,
4 is_nonstr_iter,
5 url_quote as _url_quote,
6 url_quote_plus as quote_plus, # bw compat api (dnr)
7 )
8
9 def url_quote(s, safe=''): # bw compat api
10 return _url_quote(s, safe=safe)
11
12 def urlencode(query, doseq=True):
13 """
14 An alternate implementation of Python's stdlib `urllib.urlencode
15 function <http://docs.python.org/library/urllib.html>`_ which
16 accepts unicode keys and values within the ``query``
17 dict/sequence; all Unicode keys and values are first converted to
18 UTF-8 before being used to compose the query string.
19
20 The value of ``query`` must be a sequence of two-tuples
21 representing key/value pairs *or* an object (often a dictionary)
22 with an ``.items()`` method that returns a sequence of two-tuples
23 representing key/value pairs.
24
25 For minimal calling convention backwards compatibility, this
26 version of urlencode accepts *but ignores* a second argument
27 conventionally named ``doseq``. The Python stdlib version behaves
28 differently when ``doseq`` is False and when a sequence is
29 presented as one of the values. This version always behaves in
30 the ``doseq=True`` mode, no matter what the value of the second
31 argument.
32
33 See the Python stdlib documentation for ``urllib.urlencode`` for
34 more information.
35 """
36 try:
37 # presumed to be a dictionary
38 query = query.items()
39 except AttributeError:
40 pass
41
42 result = ''
43 prefix = ''
44
45 for (k, v) in query:
46 k = _enc(k)
47
48 if is_nonstr_iter(v):
49 for x in v:
50 x = _enc(x)
51 result += '%s%s=%s' % (prefix, k, x)
52 prefix = '&'
53 else:
54 v = _enc(v)
55 result += '%s%s=%s' % (prefix, k, v)
56
57 prefix = '&'
58
59 return result
60
61 def _enc(val):
62 cls = val.__class__
63 if cls is text_type:
64 val = val.encode('utf-8')
65 elif cls is not binary_type:
66 val = str(val).encode('utf-8')
67 return quote_plus(val)
68
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyramid/encode.py b/pyramid/encode.py
--- a/pyramid/encode.py
+++ b/pyramid/encode.py
@@ -32,6 +32,10 @@
See the Python stdlib documentation for ``urllib.urlencode`` for
more information.
+
+ .. versionchanged:: 1.5
+ In a key/value pair, if the value is ``None`` then it will be
+ dropped from the resulting output.
"""
try:
# presumed to be a dictionary
@@ -50,6 +54,8 @@
x = _enc(x)
result += '%s%s=%s' % (prefix, k, x)
prefix = '&'
+ elif v is None:
+ result += '%s%s=' % (prefix, k)
else:
v = _enc(v)
result += '%s%s=%s' % (prefix, k, v)
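
With this patch applied, a `None` value is rendered as the key followed by an empty value. A quick check against the patched function (expected output inferred from the diff above):

```python
from pyramid.encode import urlencode

print(urlencode({'new-user': None}))            # new-user=
print(urlencode([('a', '1'), ('flag', None)]))  # a=1&flag=
```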
| {"golden_diff": "diff --git a/pyramid/encode.py b/pyramid/encode.py\n--- a/pyramid/encode.py\n+++ b/pyramid/encode.py\n@@ -32,6 +32,10 @@\n \n See the Python stdlib documentation for ``urllib.urlencode`` for\n more information.\n+\n+ .. versionchanged:: 1.5\n+ In a key/value pair, if the value is ``None`` then it will be\n+ dropped from the resulting output.\n \"\"\"\n try:\n # presumed to be a dictionary\n@@ -50,6 +54,8 @@\n x = _enc(x)\n result += '%s%s=%s' % (prefix, k, x)\n prefix = '&'\n+ elif v is None:\n+ result += '%s%s=' % (prefix, k)\n else:\n v = _enc(v)\n result += '%s%s=%s' % (prefix, k, v)\n", "issue": "No way to add query parameters without a value\nI occasionally need to put a hint in the query string for a URL, which is essentially a parameter without a value. This can be important to provide information to javascript or as a hint to GA. For example I may need to use `http://localhost/dashboard?new-user` as URL when I redirect a new user to the dashboard after completing registration.\n\nIntuitively I expected this to work:\n\n``` python\nreturn HTTPFound(request.route_url('dashboard', _query={'new-user': None}))\n```\n\nbut that returns `/dashboard?new-user=None` which is not very pretty.\n\n", "before_files": [{"content": "from pyramid.compat import (\n text_type,\n binary_type,\n is_nonstr_iter,\n url_quote as _url_quote,\n url_quote_plus as quote_plus, # bw compat api (dnr)\n )\n\ndef url_quote(s, safe=''): # bw compat api\n return _url_quote(s, safe=safe)\n\ndef urlencode(query, doseq=True):\n \"\"\"\n An alternate implementation of Python's stdlib `urllib.urlencode\n function <http://docs.python.org/library/urllib.html>`_ which\n accepts unicode keys and values within the ``query``\n dict/sequence; all Unicode keys and values are first converted to\n UTF-8 before being used to compose the query string.\n\n The value of ``query`` must be a sequence of two-tuples\n representing key/value pairs *or* an object (often a dictionary)\n with an ``.items()`` method that returns a sequence of two-tuples\n representing key/value pairs.\n\n For minimal calling convention backwards compatibility, this\n version of urlencode accepts *but ignores* a second argument\n conventionally named ``doseq``. The Python stdlib version behaves\n differently when ``doseq`` is False and when a sequence is\n presented as one of the values. 
This version always behaves in\n the ``doseq=True`` mode, no matter what the value of the second\n argument.\n\n See the Python stdlib documentation for ``urllib.urlencode`` for\n more information.\n \"\"\"\n try:\n # presumed to be a dictionary\n query = query.items()\n except AttributeError:\n pass\n\n result = ''\n prefix = ''\n\n for (k, v) in query:\n k = _enc(k)\n\n if is_nonstr_iter(v):\n for x in v:\n x = _enc(x)\n result += '%s%s=%s' % (prefix, k, x)\n prefix = '&'\n else:\n v = _enc(v)\n result += '%s%s=%s' % (prefix, k, v)\n\n prefix = '&'\n\n return result\n\ndef _enc(val):\n cls = val.__class__\n if cls is text_type:\n val = val.encode('utf-8')\n elif cls is not binary_type:\n val = str(val).encode('utf-8')\n return quote_plus(val)\n\n", "path": "pyramid/encode.py"}], "after_files": [{"content": "from pyramid.compat import (\n text_type,\n binary_type,\n is_nonstr_iter,\n url_quote as _url_quote,\n url_quote_plus as quote_plus, # bw compat api (dnr)\n )\n\ndef url_quote(s, safe=''): # bw compat api\n return _url_quote(s, safe=safe)\n\ndef urlencode(query, doseq=True):\n \"\"\"\n An alternate implementation of Python's stdlib `urllib.urlencode\n function <http://docs.python.org/library/urllib.html>`_ which\n accepts unicode keys and values within the ``query``\n dict/sequence; all Unicode keys and values are first converted to\n UTF-8 before being used to compose the query string.\n\n The value of ``query`` must be a sequence of two-tuples\n representing key/value pairs *or* an object (often a dictionary)\n with an ``.items()`` method that returns a sequence of two-tuples\n representing key/value pairs.\n\n For minimal calling convention backwards compatibility, this\n version of urlencode accepts *but ignores* a second argument\n conventionally named ``doseq``. The Python stdlib version behaves\n differently when ``doseq`` is False and when a sequence is\n presented as one of the values. This version always behaves in\n the ``doseq=True`` mode, no matter what the value of the second\n argument.\n\n See the Python stdlib documentation for ``urllib.urlencode`` for\n more information.\n\n .. versionchanged:: 1.5\n In a key/value pair, if the value is ``None`` then it will be\n dropped from the resulting output.\n \"\"\"\n try:\n # presumed to be a dictionary\n query = query.items()\n except AttributeError:\n pass\n\n result = ''\n prefix = ''\n\n for (k, v) in query:\n k = _enc(k)\n\n if is_nonstr_iter(v):\n for x in v:\n x = _enc(x)\n result += '%s%s=%s' % (prefix, k, x)\n prefix = '&'\n elif v is None:\n result += '%s%s=' % (prefix, k)\n else:\n v = _enc(v)\n result += '%s%s=%s' % (prefix, k, v)\n\n prefix = '&'\n\n return result\n\ndef _enc(val):\n cls = val.__class__\n if cls is text_type:\n val = val.encode('utf-8')\n elif cls is not binary_type:\n val = str(val).encode('utf-8')\n return quote_plus(val)\n\n", "path": "pyramid/encode.py"}]} | 1,032 | 208 |
gh_patches_debug_37438 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-934 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
B3 trace_id and span_id not handled correctly
These fields are not being handled correctly when an invalid value is passed for one or both of them. Fix that.
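
For context, the failure is easy to reproduce with a malformed hex value. A hedged illustration of what the pre-patch extractor ends up doing (the carrier below is hypothetical; the header names follow the file in this record):

```python
# hypothetical carrier with a non-hex trace id
carrier = {"x-b3-traceid": ["abc123xyz"], "x-b3-spanid": ["b7ad6b7169203331"]}

# extract() eventually calls int(trace_id, 16), which raises instead of
# falling back to a freshly generated, valid id:
int("abc123xyz", 16)  # ValueError: invalid literal for int() with base 16
```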
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import typing
16
17 import opentelemetry.trace as trace
18 from opentelemetry.context import Context
19 from opentelemetry.trace.propagation.httptextformat import (
20 Getter,
21 HTTPTextFormat,
22 HTTPTextFormatT,
23 Setter,
24 )
25
26
27 class B3Format(HTTPTextFormat):
28 """Propagator for the B3 HTTP header format.
29
30 See: https://github.com/openzipkin/b3-propagation
31 """
32
33 SINGLE_HEADER_KEY = "b3"
34 TRACE_ID_KEY = "x-b3-traceid"
35 SPAN_ID_KEY = "x-b3-spanid"
36 PARENT_SPAN_ID_KEY = "x-b3-parentspanid"
37 SAMPLED_KEY = "x-b3-sampled"
38 FLAGS_KEY = "x-b3-flags"
39 _SAMPLE_PROPAGATE_VALUES = set(["1", "True", "true", "d"])
40
41 def extract(
42 self,
43 get_from_carrier: Getter[HTTPTextFormatT],
44 carrier: HTTPTextFormatT,
45 context: typing.Optional[Context] = None,
46 ) -> Context:
47 trace_id = format_trace_id(trace.INVALID_TRACE_ID)
48 span_id = format_span_id(trace.INVALID_SPAN_ID)
49 sampled = "0"
50 flags = None
51
52 single_header = _extract_first_element(
53 get_from_carrier(carrier, self.SINGLE_HEADER_KEY)
54 )
55 if single_header:
56 # The b3 spec calls for the sampling state to be
57 # "deferred", which is unspecified. This concept does not
58 # translate to SpanContext, so we set it as recorded.
59 sampled = "1"
60 fields = single_header.split("-", 4)
61
62 if len(fields) == 1:
63 sampled = fields[0]
64 elif len(fields) == 2:
65 trace_id, span_id = fields
66 elif len(fields) == 3:
67 trace_id, span_id, sampled = fields
68 elif len(fields) == 4:
69 trace_id, span_id, sampled, _ = fields
70 else:
71 return trace.set_span_in_context(trace.INVALID_SPAN)
72 else:
73 trace_id = (
74 _extract_first_element(
75 get_from_carrier(carrier, self.TRACE_ID_KEY)
76 )
77 or trace_id
78 )
79 span_id = (
80 _extract_first_element(
81 get_from_carrier(carrier, self.SPAN_ID_KEY)
82 )
83 or span_id
84 )
85 sampled = (
86 _extract_first_element(
87 get_from_carrier(carrier, self.SAMPLED_KEY)
88 )
89 or sampled
90 )
91 flags = (
92 _extract_first_element(
93 get_from_carrier(carrier, self.FLAGS_KEY)
94 )
95 or flags
96 )
97
98 options = 0
99 # The b3 spec provides no defined behavior for both sample and
100 # flag values set. Since the setting of at least one implies
101 # the desire for some form of sampling, propagate if either
102 # header is set to allow.
103 if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == "1":
104 options |= trace.TraceFlags.SAMPLED
105 return trace.set_span_in_context(
106 trace.DefaultSpan(
107 trace.SpanContext(
108 # trace an span ids are encoded in hex, so must be converted
109 trace_id=int(trace_id, 16),
110 span_id=int(span_id, 16),
111 is_remote=True,
112 trace_flags=trace.TraceFlags(options),
113 trace_state=trace.TraceState(),
114 )
115 )
116 )
117
118 def inject(
119 self,
120 set_in_carrier: Setter[HTTPTextFormatT],
121 carrier: HTTPTextFormatT,
122 context: typing.Optional[Context] = None,
123 ) -> None:
124 span = trace.get_current_span(context=context)
125
126 if span.get_context() == trace.INVALID_SPAN_CONTEXT:
127 return
128
129 sampled = (trace.TraceFlags.SAMPLED & span.context.trace_flags) != 0
130 set_in_carrier(
131 carrier, self.TRACE_ID_KEY, format_trace_id(span.context.trace_id),
132 )
133 set_in_carrier(
134 carrier, self.SPAN_ID_KEY, format_span_id(span.context.span_id)
135 )
136 if span.parent is not None:
137 set_in_carrier(
138 carrier,
139 self.PARENT_SPAN_ID_KEY,
140 format_span_id(span.parent.span_id),
141 )
142 set_in_carrier(carrier, self.SAMPLED_KEY, "1" if sampled else "0")
143
144
145 def format_trace_id(trace_id: int) -> str:
146 """Format the trace id according to b3 specification."""
147 return format(trace_id, "032x")
148
149
150 def format_span_id(span_id: int) -> str:
151 """Format the span id according to b3 specification."""
152 return format(span_id, "016x")
153
154
155 def _extract_first_element(
156 items: typing.Iterable[HTTPTextFormatT],
157 ) -> typing.Optional[HTTPTextFormatT]:
158 if items is None:
159 return None
160 return next(iter(items), None)
161
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py b/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py
--- a/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py
@@ -13,9 +13,11 @@
# limitations under the License.
import typing
+from re import compile as re_compile
import opentelemetry.trace as trace
from opentelemetry.context import Context
+from opentelemetry.sdk.trace import generate_span_id, generate_trace_id
from opentelemetry.trace.propagation.httptextformat import (
Getter,
HTTPTextFormat,
@@ -37,6 +39,8 @@
SAMPLED_KEY = "x-b3-sampled"
FLAGS_KEY = "x-b3-flags"
_SAMPLE_PROPAGATE_VALUES = set(["1", "True", "true", "d"])
+ _trace_id_regex = re_compile(r"[\da-fA-F]{16}|[\da-fA-F]{32}")
+ _span_id_regex = re_compile(r"[\da-fA-F]{16}")
def extract(
self,
@@ -95,6 +99,18 @@
or flags
)
+ if (
+ self._trace_id_regex.fullmatch(trace_id) is None
+ or self._span_id_regex.fullmatch(span_id) is None
+ ):
+ trace_id = generate_trace_id()
+ span_id = generate_span_id()
+ sampled = "0"
+
+ else:
+ trace_id = int(trace_id, 16)
+ span_id = int(span_id, 16)
+
options = 0
# The b3 spec provides no defined behavior for both sample and
# flag values set. Since the setting of at least one implies
@@ -102,12 +118,13 @@
# header is set to allow.
if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == "1":
options |= trace.TraceFlags.SAMPLED
+
return trace.set_span_in_context(
trace.DefaultSpan(
trace.SpanContext(
# trace an span ids are encoded in hex, so must be converted
- trace_id=int(trace_id, 16),
- span_id=int(span_id, 16),
+ trace_id=trace_id,
+ span_id=span_id,
is_remote=True,
trace_flags=trace.TraceFlags(options),
trace_state=trace.TraceState(),
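
The core of the fix is the pair of hex-format checks, which can be exercised in isolation. A small sketch mirroring the regexes introduced in the diff:

```python
from re import compile as re_compile

trace_id_regex = re_compile(r"[\da-fA-F]{16}|[\da-fA-F]{32}")
span_id_regex = re_compile(r"[\da-fA-F]{16}")

assert trace_id_regex.fullmatch("0af7651916cd43dd8448eb211c80319c")  # 32 hex chars
assert trace_id_regex.fullmatch("b7ad6b7169203331")                  # 16 hex chars
assert span_id_regex.fullmatch("b7ad6b7169203331")
assert trace_id_regex.fullmatch("abc123xyz") is None                 # triggers the fallback to generated ids
```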
| {"golden_diff": "diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py b/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py\n--- a/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py\n+++ b/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py\n@@ -13,9 +13,11 @@\n # limitations under the License.\n \n import typing\n+from re import compile as re_compile\n \n import opentelemetry.trace as trace\n from opentelemetry.context import Context\n+from opentelemetry.sdk.trace import generate_span_id, generate_trace_id\n from opentelemetry.trace.propagation.httptextformat import (\n Getter,\n HTTPTextFormat,\n@@ -37,6 +39,8 @@\n SAMPLED_KEY = \"x-b3-sampled\"\n FLAGS_KEY = \"x-b3-flags\"\n _SAMPLE_PROPAGATE_VALUES = set([\"1\", \"True\", \"true\", \"d\"])\n+ _trace_id_regex = re_compile(r\"[\\da-fA-F]{16}|[\\da-fA-F]{32}\")\n+ _span_id_regex = re_compile(r\"[\\da-fA-F]{16}\")\n \n def extract(\n self,\n@@ -95,6 +99,18 @@\n or flags\n )\n \n+ if (\n+ self._trace_id_regex.fullmatch(trace_id) is None\n+ or self._span_id_regex.fullmatch(span_id) is None\n+ ):\n+ trace_id = generate_trace_id()\n+ span_id = generate_span_id()\n+ sampled = \"0\"\n+\n+ else:\n+ trace_id = int(trace_id, 16)\n+ span_id = int(span_id, 16)\n+\n options = 0\n # The b3 spec provides no defined behavior for both sample and\n # flag values set. Since the setting of at least one implies\n@@ -102,12 +118,13 @@\n # header is set to allow.\n if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == \"1\":\n options |= trace.TraceFlags.SAMPLED\n+\n return trace.set_span_in_context(\n trace.DefaultSpan(\n trace.SpanContext(\n # trace an span ids are encoded in hex, so must be converted\n- trace_id=int(trace_id, 16),\n- span_id=int(span_id, 16),\n+ trace_id=trace_id,\n+ span_id=span_id,\n is_remote=True,\n trace_flags=trace.TraceFlags(options),\n trace_state=trace.TraceState(),\n", "issue": "B3 trace_id and span_id not handled correctly\nThese fields are not being handled correctly when an invalid value is passed for one or both of them. 
Fix that.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport typing\n\nimport opentelemetry.trace as trace\nfrom opentelemetry.context import Context\nfrom opentelemetry.trace.propagation.httptextformat import (\n Getter,\n HTTPTextFormat,\n HTTPTextFormatT,\n Setter,\n)\n\n\nclass B3Format(HTTPTextFormat):\n \"\"\"Propagator for the B3 HTTP header format.\n\n See: https://github.com/openzipkin/b3-propagation\n \"\"\"\n\n SINGLE_HEADER_KEY = \"b3\"\n TRACE_ID_KEY = \"x-b3-traceid\"\n SPAN_ID_KEY = \"x-b3-spanid\"\n PARENT_SPAN_ID_KEY = \"x-b3-parentspanid\"\n SAMPLED_KEY = \"x-b3-sampled\"\n FLAGS_KEY = \"x-b3-flags\"\n _SAMPLE_PROPAGATE_VALUES = set([\"1\", \"True\", \"true\", \"d\"])\n\n def extract(\n self,\n get_from_carrier: Getter[HTTPTextFormatT],\n carrier: HTTPTextFormatT,\n context: typing.Optional[Context] = None,\n ) -> Context:\n trace_id = format_trace_id(trace.INVALID_TRACE_ID)\n span_id = format_span_id(trace.INVALID_SPAN_ID)\n sampled = \"0\"\n flags = None\n\n single_header = _extract_first_element(\n get_from_carrier(carrier, self.SINGLE_HEADER_KEY)\n )\n if single_header:\n # The b3 spec calls for the sampling state to be\n # \"deferred\", which is unspecified. This concept does not\n # translate to SpanContext, so we set it as recorded.\n sampled = \"1\"\n fields = single_header.split(\"-\", 4)\n\n if len(fields) == 1:\n sampled = fields[0]\n elif len(fields) == 2:\n trace_id, span_id = fields\n elif len(fields) == 3:\n trace_id, span_id, sampled = fields\n elif len(fields) == 4:\n trace_id, span_id, sampled, _ = fields\n else:\n return trace.set_span_in_context(trace.INVALID_SPAN)\n else:\n trace_id = (\n _extract_first_element(\n get_from_carrier(carrier, self.TRACE_ID_KEY)\n )\n or trace_id\n )\n span_id = (\n _extract_first_element(\n get_from_carrier(carrier, self.SPAN_ID_KEY)\n )\n or span_id\n )\n sampled = (\n _extract_first_element(\n get_from_carrier(carrier, self.SAMPLED_KEY)\n )\n or sampled\n )\n flags = (\n _extract_first_element(\n get_from_carrier(carrier, self.FLAGS_KEY)\n )\n or flags\n )\n\n options = 0\n # The b3 spec provides no defined behavior for both sample and\n # flag values set. 
Since the setting of at least one implies\n # the desire for some form of sampling, propagate if either\n # header is set to allow.\n if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == \"1\":\n options |= trace.TraceFlags.SAMPLED\n return trace.set_span_in_context(\n trace.DefaultSpan(\n trace.SpanContext(\n # trace an span ids are encoded in hex, so must be converted\n trace_id=int(trace_id, 16),\n span_id=int(span_id, 16),\n is_remote=True,\n trace_flags=trace.TraceFlags(options),\n trace_state=trace.TraceState(),\n )\n )\n )\n\n def inject(\n self,\n set_in_carrier: Setter[HTTPTextFormatT],\n carrier: HTTPTextFormatT,\n context: typing.Optional[Context] = None,\n ) -> None:\n span = trace.get_current_span(context=context)\n\n if span.get_context() == trace.INVALID_SPAN_CONTEXT:\n return\n\n sampled = (trace.TraceFlags.SAMPLED & span.context.trace_flags) != 0\n set_in_carrier(\n carrier, self.TRACE_ID_KEY, format_trace_id(span.context.trace_id),\n )\n set_in_carrier(\n carrier, self.SPAN_ID_KEY, format_span_id(span.context.span_id)\n )\n if span.parent is not None:\n set_in_carrier(\n carrier,\n self.PARENT_SPAN_ID_KEY,\n format_span_id(span.parent.span_id),\n )\n set_in_carrier(carrier, self.SAMPLED_KEY, \"1\" if sampled else \"0\")\n\n\ndef format_trace_id(trace_id: int) -> str:\n \"\"\"Format the trace id according to b3 specification.\"\"\"\n return format(trace_id, \"032x\")\n\n\ndef format_span_id(span_id: int) -> str:\n \"\"\"Format the span id according to b3 specification.\"\"\"\n return format(span_id, \"016x\")\n\n\ndef _extract_first_element(\n items: typing.Iterable[HTTPTextFormatT],\n) -> typing.Optional[HTTPTextFormatT]:\n if items is None:\n return None\n return next(iter(items), None)\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport typing\nfrom re import compile as re_compile\n\nimport opentelemetry.trace as trace\nfrom opentelemetry.context import Context\nfrom opentelemetry.sdk.trace import generate_span_id, generate_trace_id\nfrom opentelemetry.trace.propagation.httptextformat import (\n Getter,\n HTTPTextFormat,\n HTTPTextFormatT,\n Setter,\n)\n\n\nclass B3Format(HTTPTextFormat):\n \"\"\"Propagator for the B3 HTTP header format.\n\n See: https://github.com/openzipkin/b3-propagation\n \"\"\"\n\n SINGLE_HEADER_KEY = \"b3\"\n TRACE_ID_KEY = \"x-b3-traceid\"\n SPAN_ID_KEY = \"x-b3-spanid\"\n PARENT_SPAN_ID_KEY = \"x-b3-parentspanid\"\n SAMPLED_KEY = \"x-b3-sampled\"\n FLAGS_KEY = \"x-b3-flags\"\n _SAMPLE_PROPAGATE_VALUES = set([\"1\", \"True\", \"true\", \"d\"])\n _trace_id_regex = re_compile(r\"[\\da-fA-F]{16}|[\\da-fA-F]{32}\")\n _span_id_regex = re_compile(r\"[\\da-fA-F]{16}\")\n\n def extract(\n self,\n get_from_carrier: Getter[HTTPTextFormatT],\n carrier: HTTPTextFormatT,\n context: typing.Optional[Context] = None,\n ) -> Context:\n trace_id = format_trace_id(trace.INVALID_TRACE_ID)\n span_id = 
format_span_id(trace.INVALID_SPAN_ID)\n sampled = \"0\"\n flags = None\n\n single_header = _extract_first_element(\n get_from_carrier(carrier, self.SINGLE_HEADER_KEY)\n )\n if single_header:\n # The b3 spec calls for the sampling state to be\n # \"deferred\", which is unspecified. This concept does not\n # translate to SpanContext, so we set it as recorded.\n sampled = \"1\"\n fields = single_header.split(\"-\", 4)\n\n if len(fields) == 1:\n sampled = fields[0]\n elif len(fields) == 2:\n trace_id, span_id = fields\n elif len(fields) == 3:\n trace_id, span_id, sampled = fields\n elif len(fields) == 4:\n trace_id, span_id, sampled, _ = fields\n else:\n return trace.set_span_in_context(trace.INVALID_SPAN)\n else:\n trace_id = (\n _extract_first_element(\n get_from_carrier(carrier, self.TRACE_ID_KEY)\n )\n or trace_id\n )\n span_id = (\n _extract_first_element(\n get_from_carrier(carrier, self.SPAN_ID_KEY)\n )\n or span_id\n )\n sampled = (\n _extract_first_element(\n get_from_carrier(carrier, self.SAMPLED_KEY)\n )\n or sampled\n )\n flags = (\n _extract_first_element(\n get_from_carrier(carrier, self.FLAGS_KEY)\n )\n or flags\n )\n\n if (\n self._trace_id_regex.fullmatch(trace_id) is None\n or self._span_id_regex.fullmatch(span_id) is None\n ):\n trace_id = generate_trace_id()\n span_id = generate_span_id()\n sampled = \"0\"\n\n else:\n trace_id = int(trace_id, 16)\n span_id = int(span_id, 16)\n\n options = 0\n # The b3 spec provides no defined behavior for both sample and\n # flag values set. Since the setting of at least one implies\n # the desire for some form of sampling, propagate if either\n # header is set to allow.\n if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == \"1\":\n options |= trace.TraceFlags.SAMPLED\n\n return trace.set_span_in_context(\n trace.DefaultSpan(\n trace.SpanContext(\n # trace an span ids are encoded in hex, so must be converted\n trace_id=trace_id,\n span_id=span_id,\n is_remote=True,\n trace_flags=trace.TraceFlags(options),\n trace_state=trace.TraceState(),\n )\n )\n )\n\n def inject(\n self,\n set_in_carrier: Setter[HTTPTextFormatT],\n carrier: HTTPTextFormatT,\n context: typing.Optional[Context] = None,\n ) -> None:\n span = trace.get_current_span(context=context)\n\n if span.get_context() == trace.INVALID_SPAN_CONTEXT:\n return\n\n sampled = (trace.TraceFlags.SAMPLED & span.context.trace_flags) != 0\n set_in_carrier(\n carrier, self.TRACE_ID_KEY, format_trace_id(span.context.trace_id),\n )\n set_in_carrier(\n carrier, self.SPAN_ID_KEY, format_span_id(span.context.span_id)\n )\n if span.parent is not None:\n set_in_carrier(\n carrier,\n self.PARENT_SPAN_ID_KEY,\n format_span_id(span.parent.span_id),\n )\n set_in_carrier(carrier, self.SAMPLED_KEY, \"1\" if sampled else \"0\")\n\n\ndef format_trace_id(trace_id: int) -> str:\n \"\"\"Format the trace id according to b3 specification.\"\"\"\n return format(trace_id, \"032x\")\n\n\ndef format_span_id(span_id: int) -> str:\n \"\"\"Format the span id according to b3 specification.\"\"\"\n return format(span_id, \"016x\")\n\n\ndef _extract_first_element(\n items: typing.Iterable[HTTPTextFormatT],\n) -> typing.Optional[HTTPTextFormatT]:\n if items is None:\n return None\n return next(iter(items), None)\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py"}]} | 1,893 | 582 |
gh_patches_debug_25471 | rasdani/github-patches | git_diff | StackStorm__st2-5383 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Trigger name collision workaround
This addresses the jinja trigger name collision noted in issue #4641
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `contrib/core/actions/inject_trigger.py`
Content:
```
1 # Copyright 2020 The StackStorm Authors.
2 # Copyright 2019 Extreme Networks, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 from __future__ import absolute_import
17
18 from st2common.runners.base_action import Action
19
20 __all__ = ["InjectTriggerAction"]
21
22
23 class InjectTriggerAction(Action):
24 def run(self, trigger, payload=None, trace_tag=None):
25 payload = payload or {}
26
27 datastore_service = self.action_service.datastore_service
28 client = datastore_service.get_api_client()
29
30 # Dispatch the trigger using the /webhooks/st2 API endpoint
31 # NOTE: Webhooks API endpoint is asynchronous so we don't know if the actual injection
32 # results in a TriggerInstanceDB database object creation or not. The object is created
33 # inside rulesengine service and could fail due to the user providing an invalid trigger
34 # reference or similar.
35 self.logger.debug(
36 'Injecting trigger "%s" with payload="%s"' % (trigger, str(payload))
37 )
38 result = client.webhooks.post_generic_webhook(
39 trigger=trigger, payload=payload, trace_tag=trace_tag
40 )
41
42 return result
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/contrib/core/actions/inject_trigger.py b/contrib/core/actions/inject_trigger.py
--- a/contrib/core/actions/inject_trigger.py
+++ b/contrib/core/actions/inject_trigger.py
@@ -21,7 +21,7 @@
class InjectTriggerAction(Action):
- def run(self, trigger, payload=None, trace_tag=None):
+ def run(self, trigger=None, trigger_name=None, payload=None, trace_tag=None):
payload = payload or {}
datastore_service = self.action_service.datastore_service
@@ -32,6 +32,18 @@
# results in a TriggerInstanceDB database object creation or not. The object is created
# inside rulesengine service and could fail due to the user providing an invalid trigger
# reference or similar.
+
+ # Raise an error if both trigger and trigger_name are specified
+ if trigger and trigger_name:
+ raise ValueError(
+ "Parameters `trigger` and `trigger_name` are mutually exclusive."
+ )
+
+ # Raise an error if neither trigger nor trigger_name are specified
+ if not trigger and not trigger_name:
+ raise ValueError("You must include the `trigger_name` parameter.")
+
+ trigger = trigger if trigger else trigger_name
self.logger.debug(
'Injecting trigger "%s" with payload="%s"' % (trigger, str(payload))
)
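
The behaviour the patch adds is plain parameter arbitration and can be read on its own. A hedged sketch, where `_resolve_trigger` is an illustrative helper name rather than anything in the pack:

```python
def _resolve_trigger(trigger=None, trigger_name=None):
    # mirrors the checks added to InjectTriggerAction.run() in the diff above
    if trigger and trigger_name:
        raise ValueError("Parameters `trigger` and `trigger_name` are mutually exclusive.")
    if not trigger and not trigger_name:
        raise ValueError("You must include the `trigger_name` parameter.")
    return trigger if trigger else trigger_name


assert _resolve_trigger(trigger_name="examples.sample_trigger") == "examples.sample_trigger"
```

The patch keeps `trigger` as an accepted parameter for backwards compatibility while adding `trigger_name` as the alternative that avoids the Jinja name collision noted in the issue.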
| {"golden_diff": "diff --git a/contrib/core/actions/inject_trigger.py b/contrib/core/actions/inject_trigger.py\n--- a/contrib/core/actions/inject_trigger.py\n+++ b/contrib/core/actions/inject_trigger.py\n@@ -21,7 +21,7 @@\n \n \n class InjectTriggerAction(Action):\n- def run(self, trigger, payload=None, trace_tag=None):\n+ def run(self, trigger=None, trigger_name=None, payload=None, trace_tag=None):\n payload = payload or {}\n \n datastore_service = self.action_service.datastore_service\n@@ -32,6 +32,18 @@\n # results in a TriggerInstanceDB database object creation or not. The object is created\n # inside rulesengine service and could fail due to the user providing an invalid trigger\n # reference or similar.\n+\n+ # Raise an error if both trigger and trigger_name are specified\n+ if trigger and trigger_name:\n+ raise ValueError(\n+ \"Parameters `trigger` and `trigger_name` are mutually exclusive.\"\n+ )\n+\n+ # Raise an error if neither trigger nor trigger_name are specified\n+ if not trigger and not trigger_name:\n+ raise ValueError(\"You must include the `trigger_name` parameter.\")\n+\n+ trigger = trigger if trigger else trigger_name\n self.logger.debug(\n 'Injecting trigger \"%s\" with payload=\"%s\"' % (trigger, str(payload))\n )\n", "issue": "Trigger name collision workaround\nThis addresses the jinja trigger name collision noted in issue #4641\n", "before_files": [{"content": "# Copyright 2020 The StackStorm Authors.\n# Copyright 2019 Extreme Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nfrom st2common.runners.base_action import Action\n\n__all__ = [\"InjectTriggerAction\"]\n\n\nclass InjectTriggerAction(Action):\n def run(self, trigger, payload=None, trace_tag=None):\n payload = payload or {}\n\n datastore_service = self.action_service.datastore_service\n client = datastore_service.get_api_client()\n\n # Dispatch the trigger using the /webhooks/st2 API endpoint\n # NOTE: Webhooks API endpoint is asynchronous so we don't know if the actual injection\n # results in a TriggerInstanceDB database object creation or not. 
The object is created\n # inside rulesengine service and could fail due to the user providing an invalid trigger\n # reference or similar.\n self.logger.debug(\n 'Injecting trigger \"%s\" with payload=\"%s\"' % (trigger, str(payload))\n )\n result = client.webhooks.post_generic_webhook(\n trigger=trigger, payload=payload, trace_tag=trace_tag\n )\n\n return result\n", "path": "contrib/core/actions/inject_trigger.py"}], "after_files": [{"content": "# Copyright 2020 The StackStorm Authors.\n# Copyright 2019 Extreme Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nfrom st2common.runners.base_action import Action\n\n__all__ = [\"InjectTriggerAction\"]\n\n\nclass InjectTriggerAction(Action):\n def run(self, trigger=None, trigger_name=None, payload=None, trace_tag=None):\n payload = payload or {}\n\n datastore_service = self.action_service.datastore_service\n client = datastore_service.get_api_client()\n\n # Dispatch the trigger using the /webhooks/st2 API endpoint\n # NOTE: Webhooks API endpoint is asynchronous so we don't know if the actual injection\n # results in a TriggerInstanceDB database object creation or not. The object is created\n # inside rulesengine service and could fail due to the user providing an invalid trigger\n # reference or similar.\n\n # Raise an error if both trigger and trigger_name are specified\n if trigger and trigger_name:\n raise ValueError(\n \"Parameters `trigger` and `trigger_name` are mutually exclusive.\"\n )\n\n # Raise an error if neither trigger nor trigger_name are specified\n if not trigger and not trigger_name:\n raise ValueError(\"You must include the `trigger_name` parameter.\")\n\n trigger = trigger if trigger else trigger_name\n self.logger.debug(\n 'Injecting trigger \"%s\" with payload=\"%s\"' % (trigger, str(payload))\n )\n result = client.webhooks.post_generic_webhook(\n trigger=trigger, payload=payload, trace_tag=trace_tag\n )\n\n return result\n", "path": "contrib/core/actions/inject_trigger.py"}]} | 722 | 300 |
gh_patches_debug_40558 | rasdani/github-patches | git_diff | docker__docker-py-3112 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Timeouts don't work on windows
Currently the Windows npipe implementation doesn't honour timeouts. Regardless of which API endpoint you use (or pretty much anything else), this leads to bugs where the Docker API call waits until the Docker daemon finishes instead of timing out properly.
For example, if there is a Dockerfile at `timeout/` containing
```
FROM alpine
RUN sleep 1000
```
and you run
```python
from docker import DockerClient
DockerClient.from_env().images.build(path="timeout/", timeout=3)
```
Python will hang for the full 1000 seconds instead of raising an error after 3.
Version info:
docker-py: 6.0.1
python: 3.11.3
docker:
Client:
Cloud integration: v1.0.24
Version: 20.10.14
API version: 1.41
Go version: go1.16.15
Git commit: a224086
Built: Thu Mar 24 01:53:11 2022
OS/Arch: windows/amd64
Context: default
Experimental: true
Server: Docker Desktop 4.8.1 (78998)
Engine:
Version: 20.10.14
API version: 1.41 (minimum version 1.12)
Go version: go1.16.15
Git commit: 87a90dc
Built: Thu Mar 24 01:46:14 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.5.11
GitCommit: 3df54a852345ae127d1fa3092b95168e4a88e2f8
runc:
Version: 1.0.3
GitCommit: v1.0.3-0-gf46b6ba
docker-init:
Version: 0.19.0
GitCommit: de40ad0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/transport/npipesocket.py`
Content:
```
1 import functools
2 import time
3 import io
4
5 import win32file
6 import win32pipe
7
8 cERROR_PIPE_BUSY = 0xe7
9 cSECURITY_SQOS_PRESENT = 0x100000
10 cSECURITY_ANONYMOUS = 0
11
12 MAXIMUM_RETRY_COUNT = 10
13
14
15 def check_closed(f):
16 @functools.wraps(f)
17 def wrapped(self, *args, **kwargs):
18 if self._closed:
19 raise RuntimeError(
20 'Can not reuse socket after connection was closed.'
21 )
22 return f(self, *args, **kwargs)
23 return wrapped
24
25
26 class NpipeSocket:
27 """ Partial implementation of the socket API over windows named pipes.
28 This implementation is only designed to be used as a client socket,
29 and server-specific methods (bind, listen, accept...) are not
30 implemented.
31 """
32
33 def __init__(self, handle=None):
34 self._timeout = win32pipe.NMPWAIT_USE_DEFAULT_WAIT
35 self._handle = handle
36 self._closed = False
37
38 def accept(self):
39 raise NotImplementedError()
40
41 def bind(self, address):
42 raise NotImplementedError()
43
44 def close(self):
45 self._handle.Close()
46 self._closed = True
47
48 @check_closed
49 def connect(self, address, retry_count=0):
50 try:
51 handle = win32file.CreateFile(
52 address,
53 win32file.GENERIC_READ | win32file.GENERIC_WRITE,
54 0,
55 None,
56 win32file.OPEN_EXISTING,
57 cSECURITY_ANONYMOUS | cSECURITY_SQOS_PRESENT,
58 0
59 )
60 except win32pipe.error as e:
61 # See Remarks:
62 # https://msdn.microsoft.com/en-us/library/aa365800.aspx
63 if e.winerror == cERROR_PIPE_BUSY:
64 # Another program or thread has grabbed our pipe instance
65 # before we got to it. Wait for availability and attempt to
66 # connect again.
67 retry_count = retry_count + 1
68 if (retry_count < MAXIMUM_RETRY_COUNT):
69 time.sleep(1)
70 return self.connect(address, retry_count)
71 raise e
72
73 self.flags = win32pipe.GetNamedPipeInfo(handle)[0]
74
75 self._handle = handle
76 self._address = address
77
78 @check_closed
79 def connect_ex(self, address):
80 return self.connect(address)
81
82 @check_closed
83 def detach(self):
84 self._closed = True
85 return self._handle
86
87 @check_closed
88 def dup(self):
89 return NpipeSocket(self._handle)
90
91 def getpeername(self):
92 return self._address
93
94 def getsockname(self):
95 return self._address
96
97 def getsockopt(self, level, optname, buflen=None):
98 raise NotImplementedError()
99
100 def ioctl(self, control, option):
101 raise NotImplementedError()
102
103 def listen(self, backlog):
104 raise NotImplementedError()
105
106 def makefile(self, mode=None, bufsize=None):
107 if mode.strip('b') != 'r':
108 raise NotImplementedError()
109 rawio = NpipeFileIOBase(self)
110 if bufsize is None or bufsize <= 0:
111 bufsize = io.DEFAULT_BUFFER_SIZE
112 return io.BufferedReader(rawio, buffer_size=bufsize)
113
114 @check_closed
115 def recv(self, bufsize, flags=0):
116 err, data = win32file.ReadFile(self._handle, bufsize)
117 return data
118
119 @check_closed
120 def recvfrom(self, bufsize, flags=0):
121 data = self.recv(bufsize, flags)
122 return (data, self._address)
123
124 @check_closed
125 def recvfrom_into(self, buf, nbytes=0, flags=0):
126 return self.recv_into(buf, nbytes, flags), self._address
127
128 @check_closed
129 def recv_into(self, buf, nbytes=0):
130 readbuf = buf
131 if not isinstance(buf, memoryview):
132 readbuf = memoryview(buf)
133
134 err, data = win32file.ReadFile(
135 self._handle,
136 readbuf[:nbytes] if nbytes else readbuf
137 )
138 return len(data)
139
140 def _recv_into_py2(self, buf, nbytes):
141 err, data = win32file.ReadFile(self._handle, nbytes or len(buf))
142 n = len(data)
143 buf[:n] = data
144 return n
145
146 @check_closed
147 def send(self, string, flags=0):
148 err, nbytes = win32file.WriteFile(self._handle, string)
149 return nbytes
150
151 @check_closed
152 def sendall(self, string, flags=0):
153 return self.send(string, flags)
154
155 @check_closed
156 def sendto(self, string, address):
157 self.connect(address)
158 return self.send(string)
159
160 def setblocking(self, flag):
161 if flag:
162 return self.settimeout(None)
163 return self.settimeout(0)
164
165 def settimeout(self, value):
166 if value is None:
167 # Blocking mode
168 self._timeout = win32pipe.NMPWAIT_WAIT_FOREVER
169 elif not isinstance(value, (float, int)) or value < 0:
170 raise ValueError('Timeout value out of range')
171 elif value == 0:
172 # Non-blocking mode
173 self._timeout = win32pipe.NMPWAIT_NO_WAIT
174 else:
175 # Timeout mode - Value converted to milliseconds
176 self._timeout = value * 1000
177
178 def gettimeout(self):
179 return self._timeout
180
181 def setsockopt(self, level, optname, value):
182 raise NotImplementedError()
183
184 @check_closed
185 def shutdown(self, how):
186 return self.close()
187
188
189 class NpipeFileIOBase(io.RawIOBase):
190 def __init__(self, npipe_socket):
191 self.sock = npipe_socket
192
193 def close(self):
194 super().close()
195 self.sock = None
196
197 def fileno(self):
198 return self.sock.fileno()
199
200 def isatty(self):
201 return False
202
203 def readable(self):
204 return True
205
206 def readinto(self, buf):
207 return self.sock.recv_into(buf)
208
209 def seekable(self):
210 return False
211
212 def writable(self):
213 return False
214
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docker/transport/npipesocket.py b/docker/transport/npipesocket.py
--- a/docker/transport/npipesocket.py
+++ b/docker/transport/npipesocket.py
@@ -4,6 +4,9 @@
import win32file
import win32pipe
+import pywintypes
+import win32event
+import win32api
cERROR_PIPE_BUSY = 0xe7
cSECURITY_SQOS_PRESENT = 0x100000
@@ -54,7 +57,9 @@
0,
None,
win32file.OPEN_EXISTING,
- cSECURITY_ANONYMOUS | cSECURITY_SQOS_PRESENT,
+ (cSECURITY_ANONYMOUS
+ | cSECURITY_SQOS_PRESENT
+ | win32file.FILE_FLAG_OVERLAPPED),
0
)
except win32pipe.error as e:
@@ -131,22 +136,37 @@
if not isinstance(buf, memoryview):
readbuf = memoryview(buf)
- err, data = win32file.ReadFile(
- self._handle,
- readbuf[:nbytes] if nbytes else readbuf
- )
- return len(data)
-
- def _recv_into_py2(self, buf, nbytes):
- err, data = win32file.ReadFile(self._handle, nbytes or len(buf))
- n = len(data)
- buf[:n] = data
- return n
+ event = win32event.CreateEvent(None, True, True, None)
+ try:
+ overlapped = pywintypes.OVERLAPPED()
+ overlapped.hEvent = event
+ err, data = win32file.ReadFile(
+ self._handle,
+ readbuf[:nbytes] if nbytes else readbuf,
+ overlapped
+ )
+ wait_result = win32event.WaitForSingleObject(event, self._timeout)
+ if wait_result == win32event.WAIT_TIMEOUT:
+ win32file.CancelIo(self._handle)
+ raise TimeoutError
+ return win32file.GetOverlappedResult(self._handle, overlapped, 0)
+ finally:
+ win32api.CloseHandle(event)
@check_closed
def send(self, string, flags=0):
- err, nbytes = win32file.WriteFile(self._handle, string)
- return nbytes
+ event = win32event.CreateEvent(None, True, True, None)
+ try:
+ overlapped = pywintypes.OVERLAPPED()
+ overlapped.hEvent = event
+ win32file.WriteFile(self._handle, string, overlapped)
+ wait_result = win32event.WaitForSingleObject(event, self._timeout)
+ if wait_result == win32event.WAIT_TIMEOUT:
+ win32file.CancelIo(self._handle)
+ raise TimeoutError
+ return win32file.GetOverlappedResult(self._handle, overlapped, 0)
+ finally:
+ win32api.CloseHandle(event)
@check_closed
def sendall(self, string, flags=0):
@@ -165,15 +185,12 @@
def settimeout(self, value):
if value is None:
# Blocking mode
- self._timeout = win32pipe.NMPWAIT_WAIT_FOREVER
+ self._timeout = win32event.INFINITE
elif not isinstance(value, (float, int)) or value < 0:
raise ValueError('Timeout value out of range')
- elif value == 0:
- # Non-blocking mode
- self._timeout = win32pipe.NMPWAIT_NO_WAIT
else:
# Timeout mode - Value converted to milliseconds
- self._timeout = value * 1000
+ self._timeout = int(value * 1000)
def gettimeout(self):
return self._timeout
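
The pattern the patch relies on is overlapped I/O plus an event wait, which can be shown compactly. This is only a sketch: it assumes pywin32 is installed, that `handle` is a named-pipe handle opened with `FILE_FLAG_OVERLAPPED` (as the patched `connect()` now does), and that the helper name `read_with_timeout` is illustrative rather than part of docker-py:

```python
import pywintypes
import win32api
import win32event
import win32file


def read_with_timeout(handle, nbytes, timeout_ms):
    # manual-reset event that the overlapped read signals on completion
    event = win32event.CreateEvent(None, True, True, None)
    try:
        overlapped = pywintypes.OVERLAPPED()
        overlapped.hEvent = event
        buf = win32file.AllocateReadBuffer(nbytes)
        win32file.ReadFile(handle, buf, overlapped)  # returns without blocking
        if win32event.WaitForSingleObject(event, timeout_ms) == win32event.WAIT_TIMEOUT:
            win32file.CancelIo(handle)  # abandon the pending read
            raise TimeoutError
        n = win32file.GetOverlappedResult(handle, overlapped, 0)
        return bytes(buf)[:n]
    finally:
        win32api.CloseHandle(event)
```

With this in place, a short `timeout=` on the client actually interrupts the blocked pipe read instead of waiting for the daemon to finish.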
| {"golden_diff": "diff --git a/docker/transport/npipesocket.py b/docker/transport/npipesocket.py\n--- a/docker/transport/npipesocket.py\n+++ b/docker/transport/npipesocket.py\n@@ -4,6 +4,9 @@\n \n import win32file\n import win32pipe\n+import pywintypes\n+import win32event\n+import win32api\n \n cERROR_PIPE_BUSY = 0xe7\n cSECURITY_SQOS_PRESENT = 0x100000\n@@ -54,7 +57,9 @@\n 0,\n None,\n win32file.OPEN_EXISTING,\n- cSECURITY_ANONYMOUS | cSECURITY_SQOS_PRESENT,\n+ (cSECURITY_ANONYMOUS\n+ | cSECURITY_SQOS_PRESENT\n+ | win32file.FILE_FLAG_OVERLAPPED),\n 0\n )\n except win32pipe.error as e:\n@@ -131,22 +136,37 @@\n if not isinstance(buf, memoryview):\n readbuf = memoryview(buf)\n \n- err, data = win32file.ReadFile(\n- self._handle,\n- readbuf[:nbytes] if nbytes else readbuf\n- )\n- return len(data)\n-\n- def _recv_into_py2(self, buf, nbytes):\n- err, data = win32file.ReadFile(self._handle, nbytes or len(buf))\n- n = len(data)\n- buf[:n] = data\n- return n\n+ event = win32event.CreateEvent(None, True, True, None)\n+ try:\n+ overlapped = pywintypes.OVERLAPPED()\n+ overlapped.hEvent = event\n+ err, data = win32file.ReadFile(\n+ self._handle,\n+ readbuf[:nbytes] if nbytes else readbuf,\n+ overlapped\n+ )\n+ wait_result = win32event.WaitForSingleObject(event, self._timeout)\n+ if wait_result == win32event.WAIT_TIMEOUT:\n+ win32file.CancelIo(self._handle)\n+ raise TimeoutError\n+ return win32file.GetOverlappedResult(self._handle, overlapped, 0)\n+ finally:\n+ win32api.CloseHandle(event)\n \n @check_closed\n def send(self, string, flags=0):\n- err, nbytes = win32file.WriteFile(self._handle, string)\n- return nbytes\n+ event = win32event.CreateEvent(None, True, True, None)\n+ try:\n+ overlapped = pywintypes.OVERLAPPED()\n+ overlapped.hEvent = event\n+ win32file.WriteFile(self._handle, string, overlapped)\n+ wait_result = win32event.WaitForSingleObject(event, self._timeout)\n+ if wait_result == win32event.WAIT_TIMEOUT:\n+ win32file.CancelIo(self._handle)\n+ raise TimeoutError\n+ return win32file.GetOverlappedResult(self._handle, overlapped, 0)\n+ finally:\n+ win32api.CloseHandle(event)\n \n @check_closed\n def sendall(self, string, flags=0):\n@@ -165,15 +185,12 @@\n def settimeout(self, value):\n if value is None:\n # Blocking mode\n- self._timeout = win32pipe.NMPWAIT_WAIT_FOREVER\n+ self._timeout = win32event.INFINITE\n elif not isinstance(value, (float, int)) or value < 0:\n raise ValueError('Timeout value out of range')\n- elif value == 0:\n- # Non-blocking mode\n- self._timeout = win32pipe.NMPWAIT_NO_WAIT\n else:\n # Timeout mode - Value converted to milliseconds\n- self._timeout = value * 1000\n+ self._timeout = int(value * 1000)\n \n def gettimeout(self):\n return self._timeout\n", "issue": "Timeouts don't work on windows\nCurrently the windows npipe implementation doesn't honour timeouts. 
Regardless of which api endpoint you use or pretty much anything else this leads to bugs where the docker api waits until the docker daemon finishes instead of timing out properly.\r\n\r\nFor example, if there is a dockerfile containing at `timeout/`\r\n```\r\nFROM alpine\r\n\r\nRUN sleep 1000\r\n```\r\nand you run\r\n```python\r\nfrom docker import DockerClient\r\n\r\nDockerClient.from_env().images.build(path=\"timeout/\", timeout=3)\r\n```\r\npython will hang for the full 1000 seconds instead of raising an error after 3.\r\n\r\nVersion info: \r\ndocker-py: 6.0.1 \r\npython: 3.11.3 \r\ndocker:\r\nClient:\r\n Cloud integration: v1.0.24\r\n Version: 20.10.14\r\n API version: 1.41\r\n Go version: go1.16.15\r\n Git commit: a224086\r\n Built: Thu Mar 24 01:53:11 2022 \r\n OS/Arch: windows/amd64\r\n Context: default\r\n Experimental: true\r\n\r\nServer: Docker Desktop 4.8.1 (78998)\r\n Engine:\r\n Version: 20.10.14\r\n API version: 1.41 (minimum version 1.12) \r\n Go version: go1.16.15\r\n Git commit: 87a90dc\r\n Built: Thu Mar 24 01:46:14 2022 \r\n OS/Arch: linux/amd64\r\n Experimental: false\r\n containerd:\r\n Version: 1.5.11\r\n GitCommit: 3df54a852345ae127d1fa3092b95168e4a88e2f8\r\n runc:\r\n Version: 1.0.3\r\n GitCommit: v1.0.3-0-gf46b6ba\r\n docker-init:\r\n Version: 0.19.0\r\n GitCommit: de40ad0\n", "before_files": [{"content": "import functools\nimport time\nimport io\n\nimport win32file\nimport win32pipe\n\ncERROR_PIPE_BUSY = 0xe7\ncSECURITY_SQOS_PRESENT = 0x100000\ncSECURITY_ANONYMOUS = 0\n\nMAXIMUM_RETRY_COUNT = 10\n\n\ndef check_closed(f):\n @functools.wraps(f)\n def wrapped(self, *args, **kwargs):\n if self._closed:\n raise RuntimeError(\n 'Can not reuse socket after connection was closed.'\n )\n return f(self, *args, **kwargs)\n return wrapped\n\n\nclass NpipeSocket:\n \"\"\" Partial implementation of the socket API over windows named pipes.\n This implementation is only designed to be used as a client socket,\n and server-specific methods (bind, listen, accept...) are not\n implemented.\n \"\"\"\n\n def __init__(self, handle=None):\n self._timeout = win32pipe.NMPWAIT_USE_DEFAULT_WAIT\n self._handle = handle\n self._closed = False\n\n def accept(self):\n raise NotImplementedError()\n\n def bind(self, address):\n raise NotImplementedError()\n\n def close(self):\n self._handle.Close()\n self._closed = True\n\n @check_closed\n def connect(self, address, retry_count=0):\n try:\n handle = win32file.CreateFile(\n address,\n win32file.GENERIC_READ | win32file.GENERIC_WRITE,\n 0,\n None,\n win32file.OPEN_EXISTING,\n cSECURITY_ANONYMOUS | cSECURITY_SQOS_PRESENT,\n 0\n )\n except win32pipe.error as e:\n # See Remarks:\n # https://msdn.microsoft.com/en-us/library/aa365800.aspx\n if e.winerror == cERROR_PIPE_BUSY:\n # Another program or thread has grabbed our pipe instance\n # before we got to it. 
Wait for availability and attempt to\n # connect again.\n retry_count = retry_count + 1\n if (retry_count < MAXIMUM_RETRY_COUNT):\n time.sleep(1)\n return self.connect(address, retry_count)\n raise e\n\n self.flags = win32pipe.GetNamedPipeInfo(handle)[0]\n\n self._handle = handle\n self._address = address\n\n @check_closed\n def connect_ex(self, address):\n return self.connect(address)\n\n @check_closed\n def detach(self):\n self._closed = True\n return self._handle\n\n @check_closed\n def dup(self):\n return NpipeSocket(self._handle)\n\n def getpeername(self):\n return self._address\n\n def getsockname(self):\n return self._address\n\n def getsockopt(self, level, optname, buflen=None):\n raise NotImplementedError()\n\n def ioctl(self, control, option):\n raise NotImplementedError()\n\n def listen(self, backlog):\n raise NotImplementedError()\n\n def makefile(self, mode=None, bufsize=None):\n if mode.strip('b') != 'r':\n raise NotImplementedError()\n rawio = NpipeFileIOBase(self)\n if bufsize is None or bufsize <= 0:\n bufsize = io.DEFAULT_BUFFER_SIZE\n return io.BufferedReader(rawio, buffer_size=bufsize)\n\n @check_closed\n def recv(self, bufsize, flags=0):\n err, data = win32file.ReadFile(self._handle, bufsize)\n return data\n\n @check_closed\n def recvfrom(self, bufsize, flags=0):\n data = self.recv(bufsize, flags)\n return (data, self._address)\n\n @check_closed\n def recvfrom_into(self, buf, nbytes=0, flags=0):\n return self.recv_into(buf, nbytes, flags), self._address\n\n @check_closed\n def recv_into(self, buf, nbytes=0):\n readbuf = buf\n if not isinstance(buf, memoryview):\n readbuf = memoryview(buf)\n\n err, data = win32file.ReadFile(\n self._handle,\n readbuf[:nbytes] if nbytes else readbuf\n )\n return len(data)\n\n def _recv_into_py2(self, buf, nbytes):\n err, data = win32file.ReadFile(self._handle, nbytes or len(buf))\n n = len(data)\n buf[:n] = data\n return n\n\n @check_closed\n def send(self, string, flags=0):\n err, nbytes = win32file.WriteFile(self._handle, string)\n return nbytes\n\n @check_closed\n def sendall(self, string, flags=0):\n return self.send(string, flags)\n\n @check_closed\n def sendto(self, string, address):\n self.connect(address)\n return self.send(string)\n\n def setblocking(self, flag):\n if flag:\n return self.settimeout(None)\n return self.settimeout(0)\n\n def settimeout(self, value):\n if value is None:\n # Blocking mode\n self._timeout = win32pipe.NMPWAIT_WAIT_FOREVER\n elif not isinstance(value, (float, int)) or value < 0:\n raise ValueError('Timeout value out of range')\n elif value == 0:\n # Non-blocking mode\n self._timeout = win32pipe.NMPWAIT_NO_WAIT\n else:\n # Timeout mode - Value converted to milliseconds\n self._timeout = value * 1000\n\n def gettimeout(self):\n return self._timeout\n\n def setsockopt(self, level, optname, value):\n raise NotImplementedError()\n\n @check_closed\n def shutdown(self, how):\n return self.close()\n\n\nclass NpipeFileIOBase(io.RawIOBase):\n def __init__(self, npipe_socket):\n self.sock = npipe_socket\n\n def close(self):\n super().close()\n self.sock = None\n\n def fileno(self):\n return self.sock.fileno()\n\n def isatty(self):\n return False\n\n def readable(self):\n return True\n\n def readinto(self, buf):\n return self.sock.recv_into(buf)\n\n def seekable(self):\n return False\n\n def writable(self):\n return False\n", "path": "docker/transport/npipesocket.py"}], "after_files": [{"content": "import functools\nimport time\nimport io\n\nimport win32file\nimport win32pipe\nimport pywintypes\nimport 
win32event\nimport win32api\n\ncERROR_PIPE_BUSY = 0xe7\ncSECURITY_SQOS_PRESENT = 0x100000\ncSECURITY_ANONYMOUS = 0\n\nMAXIMUM_RETRY_COUNT = 10\n\n\ndef check_closed(f):\n @functools.wraps(f)\n def wrapped(self, *args, **kwargs):\n if self._closed:\n raise RuntimeError(\n 'Can not reuse socket after connection was closed.'\n )\n return f(self, *args, **kwargs)\n return wrapped\n\n\nclass NpipeSocket:\n \"\"\" Partial implementation of the socket API over windows named pipes.\n This implementation is only designed to be used as a client socket,\n and server-specific methods (bind, listen, accept...) are not\n implemented.\n \"\"\"\n\n def __init__(self, handle=None):\n self._timeout = win32pipe.NMPWAIT_USE_DEFAULT_WAIT\n self._handle = handle\n self._closed = False\n\n def accept(self):\n raise NotImplementedError()\n\n def bind(self, address):\n raise NotImplementedError()\n\n def close(self):\n self._handle.Close()\n self._closed = True\n\n @check_closed\n def connect(self, address, retry_count=0):\n try:\n handle = win32file.CreateFile(\n address,\n win32file.GENERIC_READ | win32file.GENERIC_WRITE,\n 0,\n None,\n win32file.OPEN_EXISTING,\n (cSECURITY_ANONYMOUS\n | cSECURITY_SQOS_PRESENT\n | win32file.FILE_FLAG_OVERLAPPED),\n 0\n )\n except win32pipe.error as e:\n # See Remarks:\n # https://msdn.microsoft.com/en-us/library/aa365800.aspx\n if e.winerror == cERROR_PIPE_BUSY:\n # Another program or thread has grabbed our pipe instance\n # before we got to it. Wait for availability and attempt to\n # connect again.\n retry_count = retry_count + 1\n if (retry_count < MAXIMUM_RETRY_COUNT):\n time.sleep(1)\n return self.connect(address, retry_count)\n raise e\n\n self.flags = win32pipe.GetNamedPipeInfo(handle)[0]\n\n self._handle = handle\n self._address = address\n\n @check_closed\n def connect_ex(self, address):\n return self.connect(address)\n\n @check_closed\n def detach(self):\n self._closed = True\n return self._handle\n\n @check_closed\n def dup(self):\n return NpipeSocket(self._handle)\n\n def getpeername(self):\n return self._address\n\n def getsockname(self):\n return self._address\n\n def getsockopt(self, level, optname, buflen=None):\n raise NotImplementedError()\n\n def ioctl(self, control, option):\n raise NotImplementedError()\n\n def listen(self, backlog):\n raise NotImplementedError()\n\n def makefile(self, mode=None, bufsize=None):\n if mode.strip('b') != 'r':\n raise NotImplementedError()\n rawio = NpipeFileIOBase(self)\n if bufsize is None or bufsize <= 0:\n bufsize = io.DEFAULT_BUFFER_SIZE\n return io.BufferedReader(rawio, buffer_size=bufsize)\n\n @check_closed\n def recv(self, bufsize, flags=0):\n err, data = win32file.ReadFile(self._handle, bufsize)\n return data\n\n @check_closed\n def recvfrom(self, bufsize, flags=0):\n data = self.recv(bufsize, flags)\n return (data, self._address)\n\n @check_closed\n def recvfrom_into(self, buf, nbytes=0, flags=0):\n return self.recv_into(buf, nbytes, flags), self._address\n\n @check_closed\n def recv_into(self, buf, nbytes=0):\n readbuf = buf\n if not isinstance(buf, memoryview):\n readbuf = memoryview(buf)\n\n event = win32event.CreateEvent(None, True, True, None)\n try:\n overlapped = pywintypes.OVERLAPPED()\n overlapped.hEvent = event\n err, data = win32file.ReadFile(\n self._handle,\n readbuf[:nbytes] if nbytes else readbuf,\n overlapped\n )\n wait_result = win32event.WaitForSingleObject(event, self._timeout)\n if wait_result == win32event.WAIT_TIMEOUT:\n win32file.CancelIo(self._handle)\n raise TimeoutError\n return 
win32file.GetOverlappedResult(self._handle, overlapped, 0)\n finally:\n win32api.CloseHandle(event)\n\n @check_closed\n def send(self, string, flags=0):\n event = win32event.CreateEvent(None, True, True, None)\n try:\n overlapped = pywintypes.OVERLAPPED()\n overlapped.hEvent = event\n win32file.WriteFile(self._handle, string, overlapped)\n wait_result = win32event.WaitForSingleObject(event, self._timeout)\n if wait_result == win32event.WAIT_TIMEOUT:\n win32file.CancelIo(self._handle)\n raise TimeoutError\n return win32file.GetOverlappedResult(self._handle, overlapped, 0)\n finally:\n win32api.CloseHandle(event)\n\n @check_closed\n def sendall(self, string, flags=0):\n return self.send(string, flags)\n\n @check_closed\n def sendto(self, string, address):\n self.connect(address)\n return self.send(string)\n\n def setblocking(self, flag):\n if flag:\n return self.settimeout(None)\n return self.settimeout(0)\n\n def settimeout(self, value):\n if value is None:\n # Blocking mode\n self._timeout = win32event.INFINITE\n elif not isinstance(value, (float, int)) or value < 0:\n raise ValueError('Timeout value out of range')\n else:\n # Timeout mode - Value converted to milliseconds\n self._timeout = int(value * 1000)\n\n def gettimeout(self):\n return self._timeout\n\n def setsockopt(self, level, optname, value):\n raise NotImplementedError()\n\n @check_closed\n def shutdown(self, how):\n return self.close()\n\n\nclass NpipeFileIOBase(io.RawIOBase):\n def __init__(self, npipe_socket):\n self.sock = npipe_socket\n\n def close(self):\n super().close()\n self.sock = None\n\n def fileno(self):\n return self.sock.fileno()\n\n def isatty(self):\n return False\n\n def readable(self):\n return True\n\n def readinto(self, buf):\n return self.sock.recv_into(buf)\n\n def seekable(self):\n return False\n\n def writable(self):\n return False\n", "path": "docker/transport/npipesocket.py"}]} | 2,676 | 899 |
gh_patches_debug_38189 | rasdani/github-patches | git_diff | MycroftAI__mycroft-core-2881 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mycroft.conf silently overwritten
**Describe the bug**
When there's an error in mycroft.conf, it is silently overwritten. This is bad because user settings should not be permanently deleted without consent. Instead, logs and/or the output of mycroft-start should show the error.
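For reference, the save path in `LocalConf.store` (a trimmed excerpt of the config module quoted in full below) writes the in-memory dict back to disk without any check that the original file parsed cleanly:

```python
def store(self, path=None):
    """Cache the received settings locally."""
    with self._lock:
        path = path or self.path
        config_dir = dirname(path)
        if not exists(config_dir):
            os.makedirs(config_dir)

        # No validity check: whatever was (or wasn't) loaded is written back,
        # replacing the user's original file.
        with open(path, 'w') as f:
            json.dump(self, f, indent=2)
```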
**To Reproduce**
Try the following mycroft.conf:
```
{
"max_allowed_core_version": 20.8,
"listener": {
"wake_word": "Lazarus",
"device_name": "default"
"energy_ratio": 1.5
},
"hotwords": {
"Lazarus": {
"module": "pocketsphinx",
"phonemes": "L AE Z ER AH S .",
}
}
}
```
Note the missing comma after "default" and the incorrect use of the `energy_ratio` parameter.
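A plain `json.load` is enough to confirm the syntax error (an illustrative check only; Mycroft itself parses the file via `load_commented_json`):

```python
import json

try:
    with open("mycroft.conf") as f:
        json.load(f)
except json.JSONDecodeError as err:
    # For the file above this reports the missing ',' delimiter after "default".
    print("mycroft.conf is not valid JSON:", err)
```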
After running `mycroft-start restart all`, the file is overwritten with the following:
```
{
"max_allowed_core_version": 20.8
}
```
**Expected behavior**
One of the following:
Mycroft fails to start and reports a clear error such as "Mycroft failed to start because of an error in mycroft.conf."
or
The config file is copied to `mycroft.conf.old` (or `mycroft.conf.old.1`, etc.) and `mycroft.conf` is overwritten with the following:
```
# The previous mycroft.conf contained errors and was moved to mycroft.conf.old.
{
"max_allowed_core_version": 20.8
}
```
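A rough sketch of the second option, backing the malformed file up before writing a fresh one (`backup_invalid_config` is a hypothetical helper, not existing Mycroft code):

```python
import shutil
from os.path import exists


def backup_invalid_config(path):
    """Move a config file that failed to parse aside instead of overwriting it."""
    backup = path + ".old"
    counter = 1
    while exists(backup):
        backup = "{}.old.{}".format(path, counter)
        counter += 1
    shutil.move(path, backup)
    return backup
```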
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mycroft/configuration/config.py`
Content:
```
1
2 # Copyright 2017 Mycroft AI Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 #
16
17 import json
18 import os
19 import re
20 from os.path import exists, isfile, join, dirname
21
22 import xdg.BaseDirectory
23 from requests import RequestException
24
25 from mycroft.util.combo_lock import ComboLock
26 from mycroft.util.file_utils import get_temp_path
27 from mycroft.util import camel_case_split
28 from mycroft.util.json_helper import load_commented_json, merge_dict
29 from mycroft.util.log import LOG
30
31 from .locations import (
32 DEFAULT_CONFIG,
33 OLD_USER_CONFIG,
34 SYSTEM_CONFIG,
35 USER_CONFIG
36 )
37
38
39 def is_remote_list(values):
40 """Check if list corresponds to a backend formatted collection of dicts
41 """
42 for v in values:
43 if not isinstance(v, dict):
44 return False
45 if "@type" not in v.keys():
46 return False
47 return True
48
49
50 def translate_remote(config, setting):
51 """Translate config names from server to equivalents for mycroft-core.
52
53 Args:
54 config: base config to populate
55 settings: remote settings to be translated
56 """
57 IGNORED_SETTINGS = ["uuid", "@type", "active", "user", "device"]
58
59 for k, v in setting.items():
60 if k not in IGNORED_SETTINGS:
61 # Translate the CamelCase values stored remotely into the
62 # Python-style names used within mycroft-core.
63 key = re.sub(r"Setting(s)?", "", k)
64 key = camel_case_split(key).replace(" ", "_").lower()
65 if isinstance(v, dict):
66 config[key] = config.get(key, {})
67 translate_remote(config[key], v)
68 elif isinstance(v, list):
69 if is_remote_list(v):
70 if key not in config:
71 config[key] = {}
72 translate_list(config[key], v)
73 else:
74 config[key] = v
75 else:
76 config[key] = v
77
78
79 def translate_list(config, values):
80 """Translate list formated by mycroft server.
81
82 Args:
83 config (dict): target config
84 values (list): list from mycroft server config
85 """
86 for v in values:
87 module = v["@type"]
88 if v.get("active"):
89 config["module"] = module
90 config[module] = config.get(module, {})
91 translate_remote(config[module], v)
92
93
94 class LocalConf(dict):
95 """Config dictionary from file."""
96 _lock = ComboLock(get_temp_path('local-conf.lock'))
97
98 def __init__(self, path):
99 super(LocalConf, self).__init__()
100 if path:
101 self.path = path
102 self.load_local(path)
103
104 def load_local(self, path):
105 """Load local json file into self.
106
107 Args:
108 path (str): file to load
109 """
110 if exists(path) and isfile(path):
111 try:
112 config = load_commented_json(path)
113 for key in config:
114 self.__setitem__(key, config[key])
115
116 LOG.debug("Configuration {} loaded".format(path))
117 except Exception as e:
118 LOG.error("Error loading configuration '{}'".format(path))
119 LOG.error(repr(e))
120 else:
121 LOG.debug("Configuration '{}' not defined, skipping".format(path))
122
123 def store(self, path=None):
124 """Cache the received settings locally.
125
126 The cache will be used if the remote is unreachable to load settings
127 that are as close to the user's as possible.
128 """
129 with self._lock:
130 path = path or self.path
131 config_dir = dirname(path)
132 if not exists(config_dir):
133 os.makedirs(config_dir)
134
135 with open(path, 'w') as f:
136 json.dump(self, f, indent=2)
137
138 def merge(self, conf):
139 merge_dict(self, conf)
140
141
142 class RemoteConf(LocalConf):
143 _lock = ComboLock(get_temp_path('remote-conf.lock'))
144 """Config dictionary fetched from mycroft.ai."""
145
146 def __init__(self, cache=None):
147 super(RemoteConf, self).__init__(None)
148
149 cache = cache or join(xdg.BaseDirectory.xdg_cache_home, 'mycroft',
150 'web_cache.json')
151 from mycroft.api import is_paired
152 if not is_paired():
153 self.load_local(cache)
154 return
155
156 try:
157 # Here to avoid cyclic import
158 from mycroft.api import DeviceApi
159 api = DeviceApi()
160 setting = api.get_settings()
161
162 location = None
163 try:
164 location = api.get_location()
165 except RequestException as e:
166 LOG.error("RequestException fetching remote location: {}"
167 .format(str(e)))
168 if exists(cache) and isfile(cache):
169 location = load_commented_json(cache).get('location')
170
171 if location:
172 setting["location"] = location
173 # Remove server specific entries
174 config = {}
175 translate_remote(config, setting)
176 for key in config:
177 self.__setitem__(key, config[key])
178 self.store(cache)
179
180 except RequestException as e:
181 LOG.error("RequestException fetching remote configuration: {}"
182 .format(str(e)))
183 self.load_local(cache)
184
185 except Exception as e:
186 LOG.error("Failed to fetch remote configuration: %s" % repr(e),
187 exc_info=True)
188 self.load_local(cache)
189
190
191 def _log_old_location_deprecation():
192 LOG.warning("\n ===============================================\n"
193 " == DEPRECATION WARNING ==\n"
194 " ===============================================\n"
195 f" You still have a config file at {OLD_USER_CONFIG}\n"
196 " Note that this location is deprecated and will"
197 " not be used in the future\n"
198 " Please move it to "
199 f"{join(xdg.BaseDirectory.xdg_config_home, 'mycroft')}")
200
201
202 class Configuration:
203 """Namespace for operations on the configuration singleton."""
204 __config = {} # Cached config
205 __patch = {} # Patch config that skills can update to override config
206
207 @staticmethod
208 def get(configs=None, cache=True, remote=True):
209 """Get configuration
210
211 Returns cached instance if available otherwise builds a new
212 configuration dict.
213
214 Args:
215 configs (list): List of configuration dicts
216 cache (boolean): True if the result should be cached
217 remote (boolean): False if the Remote settings shouldn't be loaded
218
219 Returns:
220 (dict) configuration dictionary.
221 """
222 if Configuration.__config:
223 return Configuration.__config
224 else:
225 return Configuration.load_config_stack(configs, cache, remote)
226
227 @staticmethod
228 def load_config_stack(configs=None, cache=False, remote=True):
229 """Load a stack of config dicts into a single dict
230
231 Args:
232 configs (list): list of dicts to load
233 cache (boolean): True if result should be cached
234 remote (boolean): False if the Mycroft Home settings shouldn't
235 be loaded
236 Returns:
237 (dict) merged dict of all configuration files
238 """
239 if not configs:
240 configs = []
241
242 # First use the patched config
243 configs.append(Configuration.__patch)
244
245 # Then use XDG config
246 # This includes both the user config and
247 # /etc/xdg/mycroft/mycroft.conf
248 for conf_dir in xdg.BaseDirectory.load_config_paths('mycroft'):
249 configs.append(LocalConf(join(conf_dir, 'mycroft.conf')))
250
251 # Then check the old user config
252 if isfile(OLD_USER_CONFIG):
253 _log_old_location_deprecation()
254 configs.append(LocalConf(OLD_USER_CONFIG))
255
256 # Then use the system config (/etc/mycroft/mycroft.conf)
257 configs.append(LocalConf(SYSTEM_CONFIG))
258
259 # Then use remote config
260 if remote:
261 configs.append(RemoteConf())
262
263 # Then use the config that comes with the package
264 configs.append(LocalConf(DEFAULT_CONFIG))
265
266 # Make sure we reverse the array, as merge_dict will put every new
267 # file on top of the previous one
268 configs = reversed(configs)
269 else:
270 # Handle strings in stack
271 for index, item in enumerate(configs):
272 if isinstance(item, str):
273 configs[index] = LocalConf(item)
274
275 # Merge all configs into one
276 base = {}
277 for c in configs:
278 merge_dict(base, c)
279
280 # copy into cache
281 if cache:
282 Configuration.__config.clear()
283 for key in base:
284 Configuration.__config[key] = base[key]
285 return Configuration.__config
286 else:
287 return base
288
289 @staticmethod
290 def set_config_update_handlers(bus):
291 """Setup websocket handlers to update config.
292
293 Args:
294 bus: Message bus client instance
295 """
296 bus.on("configuration.updated", Configuration.updated)
297 bus.on("configuration.patch", Configuration.patch)
298 bus.on("configuration.patch.clear", Configuration.patch_clear)
299
300 @staticmethod
301 def updated(message):
302 """Handler for configuration.updated,
303
304 Triggers an update of cached config.
305 """
306 Configuration.load_config_stack(cache=True)
307
308 @staticmethod
309 def patch(message):
310 """Patch the volatile dict usable by skills
311
312 Args:
313 message: Messagebus message should contain a config
314 in the data payload.
315 """
316 config = message.data.get("config", {})
317 merge_dict(Configuration.__patch, config)
318 Configuration.load_config_stack(cache=True)
319
320 @staticmethod
321 def patch_clear(message):
322 """Clear the config patch space.
323
324 Args:
325 message: Messagebus message should contain a config
326 in the data payload.
327 """
328 Configuration.__patch = {}
329 Configuration.load_config_stack(cache=True)
330
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mycroft/configuration/config.py b/mycroft/configuration/config.py
--- a/mycroft/configuration/config.py
+++ b/mycroft/configuration/config.py
@@ -97,6 +97,7 @@
def __init__(self, path):
super(LocalConf, self).__init__()
+ self.is_valid = True # is loaded json valid, updated when load occurs
if path:
self.path = path
self.load_local(path)
@@ -117,23 +118,41 @@
except Exception as e:
LOG.error("Error loading configuration '{}'".format(path))
LOG.error(repr(e))
+ self.is_valid = False
else:
LOG.debug("Configuration '{}' not defined, skipping".format(path))
- def store(self, path=None):
- """Cache the received settings locally.
+ def store(self, path=None, force=False):
+ """Save config to disk.
The cache will be used if the remote is unreachable to load settings
that are as close to the user's as possible.
+
+ path (str): path to store file to, if missing will use the path from
+ where the config was loaded.
+ force (bool): Set to True if writing should occur despite the original
+ was malformed.
+
+ Returns:
+ (bool) True if save was successful, else False.
"""
+ result = False
with self._lock:
path = path or self.path
config_dir = dirname(path)
if not exists(config_dir):
os.makedirs(config_dir)
- with open(path, 'w') as f:
- json.dump(self, f, indent=2)
+ if self.is_valid or force:
+ with open(path, 'w') as f:
+ json.dump(self, f, indent=2)
+ result = True
+ else:
+ LOG.warning((f'"{path}" was not a valid config file when '
+ 'loaded, will not save config. Please correct '
+ 'the json or remove it to allow updates.'))
+ result = False
+ return result
def merge(self, conf):
merge_dict(self, conf)
@@ -175,7 +194,7 @@
translate_remote(config, setting)
for key in config:
self.__setitem__(key, config[key])
- self.store(cache)
+ self.store(cache, force=True)
except RequestException as e:
LOG.error("RequestException fetching remote configuration: {}"
| {"golden_diff": "diff --git a/mycroft/configuration/config.py b/mycroft/configuration/config.py\n--- a/mycroft/configuration/config.py\n+++ b/mycroft/configuration/config.py\n@@ -97,6 +97,7 @@\n \n def __init__(self, path):\n super(LocalConf, self).__init__()\n+ self.is_valid = True # is loaded json valid, updated when load occurs\n if path:\n self.path = path\n self.load_local(path)\n@@ -117,23 +118,41 @@\n except Exception as e:\n LOG.error(\"Error loading configuration '{}'\".format(path))\n LOG.error(repr(e))\n+ self.is_valid = False\n else:\n LOG.debug(\"Configuration '{}' not defined, skipping\".format(path))\n \n- def store(self, path=None):\n- \"\"\"Cache the received settings locally.\n+ def store(self, path=None, force=False):\n+ \"\"\"Save config to disk.\n \n The cache will be used if the remote is unreachable to load settings\n that are as close to the user's as possible.\n+\n+ path (str): path to store file to, if missing will use the path from\n+ where the config was loaded.\n+ force (bool): Set to True if writing should occur despite the original\n+ was malformed.\n+\n+ Returns:\n+ (bool) True if save was successful, else False.\n \"\"\"\n+ result = False\n with self._lock:\n path = path or self.path\n config_dir = dirname(path)\n if not exists(config_dir):\n os.makedirs(config_dir)\n \n- with open(path, 'w') as f:\n- json.dump(self, f, indent=2)\n+ if self.is_valid or force:\n+ with open(path, 'w') as f:\n+ json.dump(self, f, indent=2)\n+ result = True\n+ else:\n+ LOG.warning((f'\"{path}\" was not a valid config file when '\n+ 'loaded, will not save config. Please correct '\n+ 'the json or remove it to allow updates.'))\n+ result = False\n+ return result\n \n def merge(self, conf):\n merge_dict(self, conf)\n@@ -175,7 +194,7 @@\n translate_remote(config, setting)\n for key in config:\n self.__setitem__(key, config[key])\n- self.store(cache)\n+ self.store(cache, force=True)\n \n except RequestException as e:\n LOG.error(\"RequestException fetching remote configuration: {}\"\n", "issue": "mycroft.conf silently overwritten\n**Describe the bug**\r\nWhen there's an error in mycroft.conf, it is silently overwritten. This is bad because user settings should not be permanently deleted without consent. Instead, logs and/or the output of mycroft-start should show the error.\r\n\r\n**To Reproduce**\r\nTry the following mycroft.conf:\r\n```\r\n{\r\n \"max_allowed_core_version\": 20.8,\r\n \"listener\": {\r\n \"wake_word\": \"Lazarus\",\r\n \"device_name\": \"default\"\r\n \"energy_ratio\": 1.5\r\n },\r\n \"hotwords\": {\r\n \"Lazarus\": {\r\n \"module\": \"pocketsphinx\",\r\n \"phonemes\": \"L AE Z ER AH S .\",\r\n }\r\n }\r\n}\r\n```\r\n\r\nNote the missing comma after \"default\" and incorrect use of the energy ratio parameter.\r\n\r\nAfter running mycroft-start restart all, it is overwritten with the following:\r\n\r\n```\r\n{\r\n \"max_allowed_core_version\": 20.8\r\n}\r\n```\r\n\r\n**Expected behavior**\r\nOne of the following:\r\n\"Mycroft failed to start because of an error in mycroft.conf.\"\r\n\r\nor\r\n\r\nThe config file is copied to `mycroft.conf.old` (or `mycroft.conf.old.1`, etc.) 
and `mycroft.conf` is overwritten with the following:\r\n```\r\n# The previous mycroft.conf contained errors and was moved to mycroft.conf.old.\r\n{\r\n \"max_allowed_core_version\": 20.8\r\n}\r\n```\n", "before_files": [{"content": "\n# Copyright 2017 Mycroft AI Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\nimport json\nimport os\nimport re\nfrom os.path import exists, isfile, join, dirname\n\nimport xdg.BaseDirectory\nfrom requests import RequestException\n\nfrom mycroft.util.combo_lock import ComboLock\nfrom mycroft.util.file_utils import get_temp_path\nfrom mycroft.util import camel_case_split\nfrom mycroft.util.json_helper import load_commented_json, merge_dict\nfrom mycroft.util.log import LOG\n\nfrom .locations import (\n DEFAULT_CONFIG,\n OLD_USER_CONFIG,\n SYSTEM_CONFIG,\n USER_CONFIG\n)\n\n\ndef is_remote_list(values):\n \"\"\"Check if list corresponds to a backend formatted collection of dicts\n \"\"\"\n for v in values:\n if not isinstance(v, dict):\n return False\n if \"@type\" not in v.keys():\n return False\n return True\n\n\ndef translate_remote(config, setting):\n \"\"\"Translate config names from server to equivalents for mycroft-core.\n\n Args:\n config: base config to populate\n settings: remote settings to be translated\n \"\"\"\n IGNORED_SETTINGS = [\"uuid\", \"@type\", \"active\", \"user\", \"device\"]\n\n for k, v in setting.items():\n if k not in IGNORED_SETTINGS:\n # Translate the CamelCase values stored remotely into the\n # Python-style names used within mycroft-core.\n key = re.sub(r\"Setting(s)?\", \"\", k)\n key = camel_case_split(key).replace(\" \", \"_\").lower()\n if isinstance(v, dict):\n config[key] = config.get(key, {})\n translate_remote(config[key], v)\n elif isinstance(v, list):\n if is_remote_list(v):\n if key not in config:\n config[key] = {}\n translate_list(config[key], v)\n else:\n config[key] = v\n else:\n config[key] = v\n\n\ndef translate_list(config, values):\n \"\"\"Translate list formated by mycroft server.\n\n Args:\n config (dict): target config\n values (list): list from mycroft server config\n \"\"\"\n for v in values:\n module = v[\"@type\"]\n if v.get(\"active\"):\n config[\"module\"] = module\n config[module] = config.get(module, {})\n translate_remote(config[module], v)\n\n\nclass LocalConf(dict):\n \"\"\"Config dictionary from file.\"\"\"\n _lock = ComboLock(get_temp_path('local-conf.lock'))\n\n def __init__(self, path):\n super(LocalConf, self).__init__()\n if path:\n self.path = path\n self.load_local(path)\n\n def load_local(self, path):\n \"\"\"Load local json file into self.\n\n Args:\n path (str): file to load\n \"\"\"\n if exists(path) and isfile(path):\n try:\n config = load_commented_json(path)\n for key in config:\n self.__setitem__(key, config[key])\n\n LOG.debug(\"Configuration {} loaded\".format(path))\n except Exception as e:\n LOG.error(\"Error loading configuration '{}'\".format(path))\n LOG.error(repr(e))\n else:\n LOG.debug(\"Configuration '{}' not defined, skipping\".format(path))\n\n def 
store(self, path=None):\n \"\"\"Cache the received settings locally.\n\n The cache will be used if the remote is unreachable to load settings\n that are as close to the user's as possible.\n \"\"\"\n with self._lock:\n path = path or self.path\n config_dir = dirname(path)\n if not exists(config_dir):\n os.makedirs(config_dir)\n\n with open(path, 'w') as f:\n json.dump(self, f, indent=2)\n\n def merge(self, conf):\n merge_dict(self, conf)\n\n\nclass RemoteConf(LocalConf):\n _lock = ComboLock(get_temp_path('remote-conf.lock'))\n \"\"\"Config dictionary fetched from mycroft.ai.\"\"\"\n\n def __init__(self, cache=None):\n super(RemoteConf, self).__init__(None)\n\n cache = cache or join(xdg.BaseDirectory.xdg_cache_home, 'mycroft',\n 'web_cache.json')\n from mycroft.api import is_paired\n if not is_paired():\n self.load_local(cache)\n return\n\n try:\n # Here to avoid cyclic import\n from mycroft.api import DeviceApi\n api = DeviceApi()\n setting = api.get_settings()\n\n location = None\n try:\n location = api.get_location()\n except RequestException as e:\n LOG.error(\"RequestException fetching remote location: {}\"\n .format(str(e)))\n if exists(cache) and isfile(cache):\n location = load_commented_json(cache).get('location')\n\n if location:\n setting[\"location\"] = location\n # Remove server specific entries\n config = {}\n translate_remote(config, setting)\n for key in config:\n self.__setitem__(key, config[key])\n self.store(cache)\n\n except RequestException as e:\n LOG.error(\"RequestException fetching remote configuration: {}\"\n .format(str(e)))\n self.load_local(cache)\n\n except Exception as e:\n LOG.error(\"Failed to fetch remote configuration: %s\" % repr(e),\n exc_info=True)\n self.load_local(cache)\n\n\ndef _log_old_location_deprecation():\n LOG.warning(\"\\n ===============================================\\n\"\n \" == DEPRECATION WARNING ==\\n\"\n \" ===============================================\\n\"\n f\" You still have a config file at {OLD_USER_CONFIG}\\n\"\n \" Note that this location is deprecated and will\"\n \" not be used in the future\\n\"\n \" Please move it to \"\n f\"{join(xdg.BaseDirectory.xdg_config_home, 'mycroft')}\")\n\n\nclass Configuration:\n \"\"\"Namespace for operations on the configuration singleton.\"\"\"\n __config = {} # Cached config\n __patch = {} # Patch config that skills can update to override config\n\n @staticmethod\n def get(configs=None, cache=True, remote=True):\n \"\"\"Get configuration\n\n Returns cached instance if available otherwise builds a new\n configuration dict.\n\n Args:\n configs (list): List of configuration dicts\n cache (boolean): True if the result should be cached\n remote (boolean): False if the Remote settings shouldn't be loaded\n\n Returns:\n (dict) configuration dictionary.\n \"\"\"\n if Configuration.__config:\n return Configuration.__config\n else:\n return Configuration.load_config_stack(configs, cache, remote)\n\n @staticmethod\n def load_config_stack(configs=None, cache=False, remote=True):\n \"\"\"Load a stack of config dicts into a single dict\n\n Args:\n configs (list): list of dicts to load\n cache (boolean): True if result should be cached\n remote (boolean): False if the Mycroft Home settings shouldn't\n be loaded\n Returns:\n (dict) merged dict of all configuration files\n \"\"\"\n if not configs:\n configs = []\n\n # First use the patched config\n configs.append(Configuration.__patch)\n\n # Then use XDG config\n # This includes both the user config and\n # /etc/xdg/mycroft/mycroft.conf\n for conf_dir in 
xdg.BaseDirectory.load_config_paths('mycroft'):\n configs.append(LocalConf(join(conf_dir, 'mycroft.conf')))\n\n # Then check the old user config\n if isfile(OLD_USER_CONFIG):\n _log_old_location_deprecation()\n configs.append(LocalConf(OLD_USER_CONFIG))\n\n # Then use the system config (/etc/mycroft/mycroft.conf)\n configs.append(LocalConf(SYSTEM_CONFIG))\n\n # Then use remote config\n if remote:\n configs.append(RemoteConf())\n\n # Then use the config that comes with the package\n configs.append(LocalConf(DEFAULT_CONFIG))\n\n # Make sure we reverse the array, as merge_dict will put every new\n # file on top of the previous one\n configs = reversed(configs)\n else:\n # Handle strings in stack\n for index, item in enumerate(configs):\n if isinstance(item, str):\n configs[index] = LocalConf(item)\n\n # Merge all configs into one\n base = {}\n for c in configs:\n merge_dict(base, c)\n\n # copy into cache\n if cache:\n Configuration.__config.clear()\n for key in base:\n Configuration.__config[key] = base[key]\n return Configuration.__config\n else:\n return base\n\n @staticmethod\n def set_config_update_handlers(bus):\n \"\"\"Setup websocket handlers to update config.\n\n Args:\n bus: Message bus client instance\n \"\"\"\n bus.on(\"configuration.updated\", Configuration.updated)\n bus.on(\"configuration.patch\", Configuration.patch)\n bus.on(\"configuration.patch.clear\", Configuration.patch_clear)\n\n @staticmethod\n def updated(message):\n \"\"\"Handler for configuration.updated,\n\n Triggers an update of cached config.\n \"\"\"\n Configuration.load_config_stack(cache=True)\n\n @staticmethod\n def patch(message):\n \"\"\"Patch the volatile dict usable by skills\n\n Args:\n message: Messagebus message should contain a config\n in the data payload.\n \"\"\"\n config = message.data.get(\"config\", {})\n merge_dict(Configuration.__patch, config)\n Configuration.load_config_stack(cache=True)\n\n @staticmethod\n def patch_clear(message):\n \"\"\"Clear the config patch space.\n\n Args:\n message: Messagebus message should contain a config\n in the data payload.\n \"\"\"\n Configuration.__patch = {}\n Configuration.load_config_stack(cache=True)\n", "path": "mycroft/configuration/config.py"}], "after_files": [{"content": "\n# Copyright 2017 Mycroft AI Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\nimport json\nimport os\nimport re\nfrom os.path import exists, isfile, join, dirname\n\nimport xdg.BaseDirectory\nfrom requests import RequestException\n\nfrom mycroft.util.combo_lock import ComboLock\nfrom mycroft.util.file_utils import get_temp_path\nfrom mycroft.util import camel_case_split\nfrom mycroft.util.json_helper import load_commented_json, merge_dict\nfrom mycroft.util.log import LOG\n\nfrom .locations import (\n DEFAULT_CONFIG,\n OLD_USER_CONFIG,\n SYSTEM_CONFIG,\n USER_CONFIG\n)\n\n\ndef is_remote_list(values):\n \"\"\"Check if list corresponds to a backend formatted collection of dicts\n \"\"\"\n for v in values:\n if not isinstance(v, dict):\n return False\n if \"@type\" not in 
v.keys():\n return False\n return True\n\n\ndef translate_remote(config, setting):\n \"\"\"Translate config names from server to equivalents for mycroft-core.\n\n Args:\n config: base config to populate\n settings: remote settings to be translated\n \"\"\"\n IGNORED_SETTINGS = [\"uuid\", \"@type\", \"active\", \"user\", \"device\"]\n\n for k, v in setting.items():\n if k not in IGNORED_SETTINGS:\n # Translate the CamelCase values stored remotely into the\n # Python-style names used within mycroft-core.\n key = re.sub(r\"Setting(s)?\", \"\", k)\n key = camel_case_split(key).replace(\" \", \"_\").lower()\n if isinstance(v, dict):\n config[key] = config.get(key, {})\n translate_remote(config[key], v)\n elif isinstance(v, list):\n if is_remote_list(v):\n if key not in config:\n config[key] = {}\n translate_list(config[key], v)\n else:\n config[key] = v\n else:\n config[key] = v\n\n\ndef translate_list(config, values):\n \"\"\"Translate list formated by mycroft server.\n\n Args:\n config (dict): target config\n values (list): list from mycroft server config\n \"\"\"\n for v in values:\n module = v[\"@type\"]\n if v.get(\"active\"):\n config[\"module\"] = module\n config[module] = config.get(module, {})\n translate_remote(config[module], v)\n\n\nclass LocalConf(dict):\n \"\"\"Config dictionary from file.\"\"\"\n _lock = ComboLock(get_temp_path('local-conf.lock'))\n\n def __init__(self, path):\n super(LocalConf, self).__init__()\n self.is_valid = True # is loaded json valid, updated when load occurs\n if path:\n self.path = path\n self.load_local(path)\n\n def load_local(self, path):\n \"\"\"Load local json file into self.\n\n Args:\n path (str): file to load\n \"\"\"\n if exists(path) and isfile(path):\n try:\n config = load_commented_json(path)\n for key in config:\n self.__setitem__(key, config[key])\n\n LOG.debug(\"Configuration {} loaded\".format(path))\n except Exception as e:\n LOG.error(\"Error loading configuration '{}'\".format(path))\n LOG.error(repr(e))\n self.is_valid = False\n else:\n LOG.debug(\"Configuration '{}' not defined, skipping\".format(path))\n\n def store(self, path=None, force=False):\n \"\"\"Save config to disk.\n\n The cache will be used if the remote is unreachable to load settings\n that are as close to the user's as possible.\n\n path (str): path to store file to, if missing will use the path from\n where the config was loaded.\n force (bool): Set to True if writing should occur despite the original\n was malformed.\n\n Returns:\n (bool) True if save was successful, else False.\n \"\"\"\n result = False\n with self._lock:\n path = path or self.path\n config_dir = dirname(path)\n if not exists(config_dir):\n os.makedirs(config_dir)\n\n if self.is_valid or force:\n with open(path, 'w') as f:\n json.dump(self, f, indent=2)\n result = True\n else:\n LOG.warning((f'\"{path}\" was not a valid config file when '\n 'loaded, will not save config. 
Please correct '\n 'the json or remove it to allow updates.'))\n result = False\n return result\n\n def merge(self, conf):\n merge_dict(self, conf)\n\n\nclass RemoteConf(LocalConf):\n _lock = ComboLock(get_temp_path('remote-conf.lock'))\n \"\"\"Config dictionary fetched from mycroft.ai.\"\"\"\n\n def __init__(self, cache=None):\n super(RemoteConf, self).__init__(None)\n\n cache = cache or join(xdg.BaseDirectory.xdg_cache_home, 'mycroft',\n 'web_cache.json')\n from mycroft.api import is_paired\n if not is_paired():\n self.load_local(cache)\n return\n\n try:\n # Here to avoid cyclic import\n from mycroft.api import DeviceApi\n api = DeviceApi()\n setting = api.get_settings()\n\n location = None\n try:\n location = api.get_location()\n except RequestException as e:\n LOG.error(\"RequestException fetching remote location: {}\"\n .format(str(e)))\n if exists(cache) and isfile(cache):\n location = load_commented_json(cache).get('location')\n\n if location:\n setting[\"location\"] = location\n # Remove server specific entries\n config = {}\n translate_remote(config, setting)\n for key in config:\n self.__setitem__(key, config[key])\n self.store(cache, force=True)\n\n except RequestException as e:\n LOG.error(\"RequestException fetching remote configuration: {}\"\n .format(str(e)))\n self.load_local(cache)\n\n except Exception as e:\n LOG.error(\"Failed to fetch remote configuration: %s\" % repr(e),\n exc_info=True)\n self.load_local(cache)\n\n\ndef _log_old_location_deprecation():\n LOG.warning(\"\\n ===============================================\\n\"\n \" == DEPRECATION WARNING ==\\n\"\n \" ===============================================\\n\"\n f\" You still have a config file at {OLD_USER_CONFIG}\\n\"\n \" Note that this location is deprecated and will\"\n \" not be used in the future\\n\"\n \" Please move it to \"\n f\"{join(xdg.BaseDirectory.xdg_config_home, 'mycroft')}\")\n\n\nclass Configuration:\n \"\"\"Namespace for operations on the configuration singleton.\"\"\"\n __config = {} # Cached config\n __patch = {} # Patch config that skills can update to override config\n\n @staticmethod\n def get(configs=None, cache=True, remote=True):\n \"\"\"Get configuration\n\n Returns cached instance if available otherwise builds a new\n configuration dict.\n\n Args:\n configs (list): List of configuration dicts\n cache (boolean): True if the result should be cached\n remote (boolean): False if the Remote settings shouldn't be loaded\n\n Returns:\n (dict) configuration dictionary.\n \"\"\"\n if Configuration.__config:\n return Configuration.__config\n else:\n return Configuration.load_config_stack(configs, cache, remote)\n\n @staticmethod\n def load_config_stack(configs=None, cache=False, remote=True):\n \"\"\"Load a stack of config dicts into a single dict\n\n Args:\n configs (list): list of dicts to load\n cache (boolean): True if result should be cached\n remote (boolean): False if the Mycroft Home settings shouldn't\n be loaded\n Returns:\n (dict) merged dict of all configuration files\n \"\"\"\n if not configs:\n configs = []\n\n # First use the patched config\n configs.append(Configuration.__patch)\n\n # Then use XDG config\n # This includes both the user config and\n # /etc/xdg/mycroft/mycroft.conf\n for conf_dir in xdg.BaseDirectory.load_config_paths('mycroft'):\n configs.append(LocalConf(join(conf_dir, 'mycroft.conf')))\n\n # Then check the old user config\n if isfile(OLD_USER_CONFIG):\n _log_old_location_deprecation()\n configs.append(LocalConf(OLD_USER_CONFIG))\n\n # Then use the system 
config (/etc/mycroft/mycroft.conf)\n configs.append(LocalConf(SYSTEM_CONFIG))\n\n # Then use remote config\n if remote:\n configs.append(RemoteConf())\n\n # Then use the config that comes with the package\n configs.append(LocalConf(DEFAULT_CONFIG))\n\n # Make sure we reverse the array, as merge_dict will put every new\n # file on top of the previous one\n configs = reversed(configs)\n else:\n # Handle strings in stack\n for index, item in enumerate(configs):\n if isinstance(item, str):\n configs[index] = LocalConf(item)\n\n # Merge all configs into one\n base = {}\n for c in configs:\n merge_dict(base, c)\n\n # copy into cache\n if cache:\n Configuration.__config.clear()\n for key in base:\n Configuration.__config[key] = base[key]\n return Configuration.__config\n else:\n return base\n\n @staticmethod\n def set_config_update_handlers(bus):\n \"\"\"Setup websocket handlers to update config.\n\n Args:\n bus: Message bus client instance\n \"\"\"\n bus.on(\"configuration.updated\", Configuration.updated)\n bus.on(\"configuration.patch\", Configuration.patch)\n bus.on(\"configuration.patch.clear\", Configuration.patch_clear)\n\n @staticmethod\n def updated(message):\n \"\"\"Handler for configuration.updated,\n\n Triggers an update of cached config.\n \"\"\"\n Configuration.load_config_stack(cache=True)\n\n @staticmethod\n def patch(message):\n \"\"\"Patch the volatile dict usable by skills\n\n Args:\n message: Messagebus message should contain a config\n in the data payload.\n \"\"\"\n config = message.data.get(\"config\", {})\n merge_dict(Configuration.__patch, config)\n Configuration.load_config_stack(cache=True)\n\n @staticmethod\n def patch_clear(message):\n \"\"\"Clear the config patch space.\n\n Args:\n message: Messagebus message should contain a config\n in the data payload.\n \"\"\"\n Configuration.__patch = {}\n Configuration.load_config_stack(cache=True)\n", "path": "mycroft/configuration/config.py"}]} | 3,637 | 553 |
gh_patches_debug_22621 | rasdani/github-patches | git_diff | aws__serverless-application-model-1582 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cognito User Pool SMS configuration problem
**Description:**
When trying to create a Cognito user pool using SAM templates, SAM throws the error
> Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Number of errors found: 1. Resource with id [CognitoUserPool] is invalid. Type of property 'SmsConfiguration' is invalid.
when specifying [SmsConfiguration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-userpool.html#cfn-cognito-userpool-smsconfiguration) property.
In the template, there is also a Lambda trigger that has Cognito configured as an event source.
After looking through the project and doing some tests, I believe the error could appear in the samtranslator module:
`'SmsConfiguration': PropertyType(False, list_of(dict)),`
From the CloudFormation docs, [SmsConfiguration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-userpool.html#cfn-cognito-userpool-smsconfiguration) seems to be a simple dict, but in the code snippet above, it is validated as a list of dicts.
Indeed, if I modify the corresponding part of the template from a mapping to a YAML list consisting of a single object, validation passes, but when the stack is created by CloudFormation, it fails with
> Property validation failure: [Value of property {/SmsConfiguration} does not match type {Object}]
which is consistent with the type of the property specified in the CloudFormation docs.
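If that reading is correct, the fix would be a one-line change to the property definition in `samtranslator/model/cognito.py` (a sketch of the proposed change, using the module's existing `is_type` validator):

```python
property_types = {
    # ...
    "SmsConfiguration": PropertyType(False, is_type(dict)),  # was: list_of(dict)
    # ...
}
```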
**Steps to reproduce the issue:**
1. Create a SAM template with a Cognito user pool configured to use SMS MFA and an associated Lambda trigger.
```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
Example YAML.
Globals:
Function:
Timeout: 3
Handler: lambda_function.lambda_handler
Runtime: python3.6
MemorySize: 128
Resources:
PreSignupValidationLambda:
Type: AWS::Serverless::Function
Properties:
CodeUri: src/pre_signup_validation/
Events:
CognitoTrigger:
Type: Cognito
Properties:
UserPool: !Ref CognitoUserPool
Trigger: PreSignUp
CognitoUserPool:
Type: 'AWS::Cognito::UserPool'
Properties:
AutoVerifiedAttributes:
- phone_number
MfaConfiguration: OPTIONAL
Schema:
- AttributeDataType: String
DeveloperOnlyAttribute: false
Mutable: false
Name: sub
Required: true
StringAttributeConstraints:
MaxLength: 2048
MinLength: 0
- AttributeDataType: String
DeveloperOnlyAttribute: false
Mutable: true
Name: email
Required: true
StringAttributeConstraints:
MaxLength: 2048
MinLength: 0
- AttributeDataType: String
DeveloperOnlyAttribute: false
Mutable: true
Name: phone_number
Required: true
StringAttributeConstraints:
MaxLength: 2048
MinLength: 0
SmsConfiguration:
ExternalId: 'xxx-xxx-xxx'
SnsCallerArn: !GetAtt CognitoSMSRole.Arn
UsernameAttributes:
- email
- phone_number
UserPoolName: Customers
CognitoSMSRole:
Type: 'AWS::IAM::Role'
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service: 'cognito-idp.amazonaws.com'
Action:
- 'sts:AssumeRole'
Condition:
StringEquals:
'sts:ExternalId': 'xxx-xxx-xxx'
Policies:
- PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- 'sns:Publish'
Resource:
- '*'
PolicyName: CognitoSendSMS
RoleName: CognitoSMSRole
```
2. Write a basic Lambda function in ```<template_location>/src/pre_signup_validation/lambda_function.py```
```python
def lambda_handler(event: dict, context: dict):
return event
```
3. Run the following commands (these are what the AWS Toolkit for PyCharm runs when deploying the application):
```bash
sam build --template template.yaml --build-dir build --use-container
```
```bash
sam package --template-file build/template.yaml --output-template-file build/packaged-template.yaml --s3-bucket <your_s3_bucket>
```
```bash
sam deploy --template-file build/packaged-template.yaml --stack-name test --no-execute-changeset
```
**Observed result:**
SAM validates the `SmsConfiguration` property of Cognito user pools as a list of dicts rather than a single object.
**Expected result:**
Validation should be consistent with the CloudFormation specification.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `samtranslator/parser/parser.py`
Content:
```
1 import logging
2
3 from samtranslator.model.exceptions import InvalidDocumentException, InvalidTemplateException, InvalidResourceException
4 from samtranslator.validator.validator import SamTemplateValidator
5 from samtranslator.plugins import LifeCycleEvents
6 from samtranslator.public.sdk.template import SamTemplate
7
8 LOG = logging.getLogger(__name__)
9
10
11 class Parser:
12 def __init__(self):
13 pass
14
15 def parse(self, sam_template, parameter_values, sam_plugins):
16 self._validate(sam_template, parameter_values)
17 sam_plugins.act(LifeCycleEvents.before_transform_template, sam_template)
18
19 @staticmethod
20 def validate_datatypes(sam_template):
21 """Validates the datatype within the template """
22 if (
23 "Resources" not in sam_template
24 or not isinstance(sam_template["Resources"], dict)
25 or not sam_template["Resources"]
26 ):
27 raise InvalidDocumentException([InvalidTemplateException("'Resources' section is required")])
28
29 if not all(isinstance(sam_resource, dict) for sam_resource in sam_template["Resources"].values()):
30 raise InvalidDocumentException(
31 [
32 InvalidTemplateException(
33 "All 'Resources' must be Objects. If you're using YAML, this may be an " "indentation issue."
34 )
35 ]
36 )
37
38 sam_template_instance = SamTemplate(sam_template)
39
40 for resource_logical_id, sam_resource in sam_template_instance.iterate():
41 # NOTE: Properties isn't required for SimpleTable, so we can't check
42 # `not isinstance(sam_resources.get("Properties"), dict)` as this would be a breaking change.
43 # sam_resource.properties defaults to {} in SamTemplate init
44 if not isinstance(sam_resource.properties, dict):
45 raise InvalidDocumentException(
46 [
47 InvalidResourceException(
48 resource_logical_id,
49 "All 'Resources' must be Objects and have a 'Properties' Object. If "
50 "you're using YAML, this may be an indentation issue.",
51 )
52 ]
53 )
54
55 # private methods
56 def _validate(self, sam_template, parameter_values):
57 """Validates the template and parameter values and raises exceptions if there's an issue
58
59 :param dict sam_template: SAM template
60 :param dict parameter_values: Dictionary of parameter values provided by the user
61 """
62 if parameter_values is None:
63 raise ValueError("`parameter_values` argument is required")
64
65 Parser.validate_datatypes(sam_template)
66
67 try:
68 validator = SamTemplateValidator()
69 validation_errors = validator.validate(sam_template)
70 if validation_errors:
71 LOG.warning("Template schema validation reported the following errors: %s", validation_errors)
72 except Exception as e:
73 # Catching any exception and not re-raising to make sure any validation process won't break transform
74 LOG.exception("Exception from SamTemplateValidator: %s", e)
75
```
Path: `samtranslator/model/cognito.py`
Content:
```
1 from samtranslator.model import PropertyType, Resource
2 from samtranslator.model.types import is_type, list_of, is_str
3 from samtranslator.model.intrinsics import fnGetAtt, ref
4
5
6 class CognitoUserPool(Resource):
7 resource_type = "AWS::Cognito::UserPool"
8 property_types = {
9 "AccountRecoverySetting": PropertyType(False, is_type(dict)),
10 "AdminCreateUserConfig": PropertyType(False, is_type(dict)),
11 "AliasAttributes": PropertyType(False, list_of(is_str())),
12 "AutoVerifiedAttributes": PropertyType(False, list_of(is_str())),
13 "DeviceConfiguration": PropertyType(False, is_type(dict)),
14 "EmailConfiguration": PropertyType(False, is_type(dict)),
15 "EmailVerificationMessage": PropertyType(False, is_str()),
16 "EmailVerificationSubject": PropertyType(False, is_str()),
17 "EnabledMfas": PropertyType(False, list_of(is_str())),
18 "LambdaConfig": PropertyType(False, is_type(dict)),
19 "MfaConfiguration": PropertyType(False, is_str()),
20 "Policies": PropertyType(False, is_type(dict)),
21 "Schema": PropertyType(False, list_of(dict)),
22 "SmsAuthenticationMessage": PropertyType(False, is_str()),
23 "SmsConfiguration": PropertyType(False, list_of(dict)),
24 "SmsVerificationMessage": PropertyType(False, is_str()),
25 "UsernameAttributes": PropertyType(False, list_of(is_str())),
26 "UsernameConfiguration": PropertyType(False, is_type(dict)),
27 "UserPoolAddOns": PropertyType(False, list_of(dict)),
28 "UserPoolName": PropertyType(False, is_str()),
29 "UserPoolTags": PropertyType(False, is_type(dict)),
30 "VerificationMessageTemplate": PropertyType(False, is_type(dict)),
31 }
32
33 runtime_attrs = {
34 "name": lambda self: ref(self.logical_id),
35 "arn": lambda self: fnGetAtt(self.logical_id, "Arn"),
36 "provider_name": lambda self: fnGetAtt(self.logical_id, "ProviderName"),
37 "provider_url": lambda self: fnGetAtt(self.logical_id, "ProviderURL"),
38 }
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/samtranslator/model/cognito.py b/samtranslator/model/cognito.py
--- a/samtranslator/model/cognito.py
+++ b/samtranslator/model/cognito.py
@@ -20,7 +20,7 @@
"Policies": PropertyType(False, is_type(dict)),
"Schema": PropertyType(False, list_of(dict)),
"SmsAuthenticationMessage": PropertyType(False, is_str()),
- "SmsConfiguration": PropertyType(False, list_of(dict)),
+ "SmsConfiguration": PropertyType(False, is_type(dict)),
"SmsVerificationMessage": PropertyType(False, is_str()),
"UsernameAttributes": PropertyType(False, list_of(is_str())),
"UsernameConfiguration": PropertyType(False, is_type(dict)),
diff --git a/samtranslator/parser/parser.py b/samtranslator/parser/parser.py
--- a/samtranslator/parser/parser.py
+++ b/samtranslator/parser/parser.py
@@ -18,7 +18,7 @@
@staticmethod
def validate_datatypes(sam_template):
- """Validates the datatype within the template """
+ """Validates the datatype within the template"""
if (
"Resources" not in sam_template
or not isinstance(sam_template["Resources"], dict)
| {"golden_diff": "diff --git a/samtranslator/model/cognito.py b/samtranslator/model/cognito.py\n--- a/samtranslator/model/cognito.py\n+++ b/samtranslator/model/cognito.py\n@@ -20,7 +20,7 @@\n \"Policies\": PropertyType(False, is_type(dict)),\n \"Schema\": PropertyType(False, list_of(dict)),\n \"SmsAuthenticationMessage\": PropertyType(False, is_str()),\n- \"SmsConfiguration\": PropertyType(False, list_of(dict)),\n+ \"SmsConfiguration\": PropertyType(False, is_type(dict)),\n \"SmsVerificationMessage\": PropertyType(False, is_str()),\n \"UsernameAttributes\": PropertyType(False, list_of(is_str())),\n \"UsernameConfiguration\": PropertyType(False, is_type(dict)),\ndiff --git a/samtranslator/parser/parser.py b/samtranslator/parser/parser.py\n--- a/samtranslator/parser/parser.py\n+++ b/samtranslator/parser/parser.py\n@@ -18,7 +18,7 @@\n \n @staticmethod\n def validate_datatypes(sam_template):\n- \"\"\"Validates the datatype within the template \"\"\"\n+ \"\"\"Validates the datatype within the template\"\"\"\n if (\n \"Resources\" not in sam_template\n or not isinstance(sam_template[\"Resources\"], dict)\n", "issue": "Cognito User Pool SMS configuration problem\n**Description:**\r\nWhen trying to create a Cognito user pool using SAM templates, SAM throws the error\r\n\r\n> Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Number of errors found: 1. Resource with id [CognitoUserPool] is invalid. Type of property 'SmsConfiguration' is invalid.\r\n\r\nwhen specifying [SmsConfiguration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-userpool.html#cfn-cognito-userpool-smsconfiguration) property.\r\nIn the template, there is also a Lambda trigger that has Cognito configured as an event source.\r\nAfter looking through the project and doing some tests, I believe the error could appear in the samtranslator module:\r\n`'SmsConfiguration': PropertyType(False, list_of(dict)),`\r\nFrom the CloudFormation docs, [SmsConfiguration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cognito-userpool.html#cfn-cognito-userpool-smsconfiguration) seems to be a simple dict, but in the code snippet above, it is validated as a list of dicts.\r\nIndeed, if I modify the corresponding part of the template from a mapping to a YAML list consisting of a single object, validation passes, but when the stack is created by CloudFormation, it fails with \r\n> Property validation failure: [Value of property {/SmsConfiguration} does not match type {Object}]\r\n\r\nwhich is consistent with the type of the property specified in the CloudFormation docs.\r\n\r\n**Steps to reproduce the issue:**\r\n1. 
Create a SAM template with a Cognito user pool configured to use SMS MFA and a Lambda trigger associated.\r\n```yaml\r\nAWSTemplateFormatVersion: '2010-09-09'\r\nTransform: AWS::Serverless-2016-10-31\r\nDescription: >\r\n Example YAML.\r\nGlobals:\r\n Function:\r\n Timeout: 3\r\n Handler: lambda_function.lambda_handler\r\n Runtime: python3.6\r\n MemorySize: 128\r\nResources:\r\n PreSignupValidationLambda:\r\n Type: AWS::Serverless::Function\r\n Properties:\r\n CodeUri: src/pre_signup_validation/\r\n Events:\r\n CognitoTrigger:\r\n Type: Cognito\r\n Properties:\r\n UserPool: !Ref CognitoUserPool\r\n Trigger: PreSignUp\r\n CognitoUserPool:\r\n Type: 'AWS::Cognito::UserPool'\r\n Properties:\r\n AutoVerifiedAttributes:\r\n - phone_number\r\n MfaConfiguration: OPTIONAL\r\n Schema:\r\n - AttributeDataType: String\r\n DeveloperOnlyAttribute: false\r\n Mutable: false\r\n Name: sub\r\n Required: true\r\n StringAttributeConstraints:\r\n MaxLength: 2048\r\n MinLength: 0\r\n - AttributeDataType: String\r\n DeveloperOnlyAttribute: false\r\n Mutable: true\r\n Name: email\r\n Required: true\r\n StringAttributeConstraints:\r\n MaxLength: 2048\r\n MinLength: 0\r\n - AttributeDataType: String\r\n DeveloperOnlyAttribute: false\r\n Mutable: true\r\n Name: phone_number\r\n Required: true\r\n StringAttributeConstraints:\r\n MaxLength: 2048\r\n MinLength: 0\r\n SmsConfiguration:\r\n ExternalId: 'xxx-xxx-xxx'\r\n SnsCallerArn: !GetAtt CognitoSMSRole.Arn\r\n UsernameAttributes:\r\n - email\r\n - phone_number\r\n UserPoolName: Customers\r\n CognitoSMSRole:\r\n Type: 'AWS::IAM::Role'\r\n Properties:\r\n AssumeRolePolicyDocument:\r\n Version: 2012-10-17\r\n Statement:\r\n - Effect: Allow\r\n Principal:\r\n Service: 'cognito-idp.amazonaws.com'\r\n Action:\r\n - 'sts:AssumeRole'\r\n Condition:\r\n StringEquals:\r\n 'sts:ExternalId': 'xxx-xxx-xxx'\r\n Policies:\r\n - PolicyDocument:\r\n Version: 2012-10-17\r\n Statement:\r\n - Effect: Allow\r\n Action:\r\n - 'sns:Publish'\r\n Resource:\r\n - '*'\r\n PolicyName: CognitoSendSMS\r\n RoleName: CognitoSMSRole\r\n```\r\n2. Write a basic Lambda function in ```<template_location>/src/pre_signup_validation/lambda_function.py```\r\n```python\r\ndef lambda_handler(event: dict, context: dict):\r\n return event\r\n```\r\n3. 
Run (Commands from the AWS Toolkit for PyCharm when trying to deploy application)\r\n```bash\r\nsam build --template template.yaml --build-dir build --use-container\r\n```\r\n```bash\r\nsam package --template-file build/template.yaml --output-template-file build/packaged-template.yaml --s3-bucket <your_s3_bucket>\r\n```\r\n```bash\r\nsam deploy --template-file build/packaged-template.yaml --stack-name test --no-execute-changeset\r\n```\r\n\r\n**Observed result:**\r\nSAM validates the SmsConfiguration parameter of Cognito user pools as a list of type dict.\r\n**Expected result:**\r\nValidation should be consistent with CloudFormation specification.\n", "before_files": [{"content": "import logging\n\nfrom samtranslator.model.exceptions import InvalidDocumentException, InvalidTemplateException, InvalidResourceException\nfrom samtranslator.validator.validator import SamTemplateValidator\nfrom samtranslator.plugins import LifeCycleEvents\nfrom samtranslator.public.sdk.template import SamTemplate\n\nLOG = logging.getLogger(__name__)\n\n\nclass Parser:\n def __init__(self):\n pass\n\n def parse(self, sam_template, parameter_values, sam_plugins):\n self._validate(sam_template, parameter_values)\n sam_plugins.act(LifeCycleEvents.before_transform_template, sam_template)\n\n @staticmethod\n def validate_datatypes(sam_template):\n \"\"\"Validates the datatype within the template \"\"\"\n if (\n \"Resources\" not in sam_template\n or not isinstance(sam_template[\"Resources\"], dict)\n or not sam_template[\"Resources\"]\n ):\n raise InvalidDocumentException([InvalidTemplateException(\"'Resources' section is required\")])\n\n if not all(isinstance(sam_resource, dict) for sam_resource in sam_template[\"Resources\"].values()):\n raise InvalidDocumentException(\n [\n InvalidTemplateException(\n \"All 'Resources' must be Objects. If you're using YAML, this may be an \" \"indentation issue.\"\n )\n ]\n )\n\n sam_template_instance = SamTemplate(sam_template)\n\n for resource_logical_id, sam_resource in sam_template_instance.iterate():\n # NOTE: Properties isn't required for SimpleTable, so we can't check\n # `not isinstance(sam_resources.get(\"Properties\"), dict)` as this would be a breaking change.\n # sam_resource.properties defaults to {} in SamTemplate init\n if not isinstance(sam_resource.properties, dict):\n raise InvalidDocumentException(\n [\n InvalidResourceException(\n resource_logical_id,\n \"All 'Resources' must be Objects and have a 'Properties' Object. 
If \"\n \"you're using YAML, this may be an indentation issue.\",\n )\n ]\n )\n\n # private methods\n def _validate(self, sam_template, parameter_values):\n \"\"\"Validates the template and parameter values and raises exceptions if there's an issue\n\n :param dict sam_template: SAM template\n :param dict parameter_values: Dictionary of parameter values provided by the user\n \"\"\"\n if parameter_values is None:\n raise ValueError(\"`parameter_values` argument is required\")\n\n Parser.validate_datatypes(sam_template)\n\n try:\n validator = SamTemplateValidator()\n validation_errors = validator.validate(sam_template)\n if validation_errors:\n LOG.warning(\"Template schema validation reported the following errors: %s\", validation_errors)\n except Exception as e:\n # Catching any exception and not re-raising to make sure any validation process won't break transform\n LOG.exception(\"Exception from SamTemplateValidator: %s\", e)\n", "path": "samtranslator/parser/parser.py"}, {"content": "from samtranslator.model import PropertyType, Resource\nfrom samtranslator.model.types import is_type, list_of, is_str\nfrom samtranslator.model.intrinsics import fnGetAtt, ref\n\n\nclass CognitoUserPool(Resource):\n resource_type = \"AWS::Cognito::UserPool\"\n property_types = {\n \"AccountRecoverySetting\": PropertyType(False, is_type(dict)),\n \"AdminCreateUserConfig\": PropertyType(False, is_type(dict)),\n \"AliasAttributes\": PropertyType(False, list_of(is_str())),\n \"AutoVerifiedAttributes\": PropertyType(False, list_of(is_str())),\n \"DeviceConfiguration\": PropertyType(False, is_type(dict)),\n \"EmailConfiguration\": PropertyType(False, is_type(dict)),\n \"EmailVerificationMessage\": PropertyType(False, is_str()),\n \"EmailVerificationSubject\": PropertyType(False, is_str()),\n \"EnabledMfas\": PropertyType(False, list_of(is_str())),\n \"LambdaConfig\": PropertyType(False, is_type(dict)),\n \"MfaConfiguration\": PropertyType(False, is_str()),\n \"Policies\": PropertyType(False, is_type(dict)),\n \"Schema\": PropertyType(False, list_of(dict)),\n \"SmsAuthenticationMessage\": PropertyType(False, is_str()),\n \"SmsConfiguration\": PropertyType(False, list_of(dict)),\n \"SmsVerificationMessage\": PropertyType(False, is_str()),\n \"UsernameAttributes\": PropertyType(False, list_of(is_str())),\n \"UsernameConfiguration\": PropertyType(False, is_type(dict)),\n \"UserPoolAddOns\": PropertyType(False, list_of(dict)),\n \"UserPoolName\": PropertyType(False, is_str()),\n \"UserPoolTags\": PropertyType(False, is_type(dict)),\n \"VerificationMessageTemplate\": PropertyType(False, is_type(dict)),\n }\n\n runtime_attrs = {\n \"name\": lambda self: ref(self.logical_id),\n \"arn\": lambda self: fnGetAtt(self.logical_id, \"Arn\"),\n \"provider_name\": lambda self: fnGetAtt(self.logical_id, \"ProviderName\"),\n \"provider_url\": lambda self: fnGetAtt(self.logical_id, \"ProviderURL\"),\n }\n", "path": "samtranslator/model/cognito.py"}], "after_files": [{"content": "import logging\n\nfrom samtranslator.model.exceptions import InvalidDocumentException, InvalidTemplateException, InvalidResourceException\nfrom samtranslator.validator.validator import SamTemplateValidator\nfrom samtranslator.plugins import LifeCycleEvents\nfrom samtranslator.public.sdk.template import SamTemplate\n\nLOG = logging.getLogger(__name__)\n\n\nclass Parser:\n def __init__(self):\n pass\n\n def parse(self, sam_template, parameter_values, sam_plugins):\n self._validate(sam_template, parameter_values)\n 
sam_plugins.act(LifeCycleEvents.before_transform_template, sam_template)\n\n @staticmethod\n def validate_datatypes(sam_template):\n \"\"\"Validates the datatype within the template\"\"\"\n if (\n \"Resources\" not in sam_template\n or not isinstance(sam_template[\"Resources\"], dict)\n or not sam_template[\"Resources\"]\n ):\n raise InvalidDocumentException([InvalidTemplateException(\"'Resources' section is required\")])\n\n if not all(isinstance(sam_resource, dict) for sam_resource in sam_template[\"Resources\"].values()):\n raise InvalidDocumentException(\n [\n InvalidTemplateException(\n \"All 'Resources' must be Objects. If you're using YAML, this may be an \" \"indentation issue.\"\n )\n ]\n )\n\n sam_template_instance = SamTemplate(sam_template)\n\n for resource_logical_id, sam_resource in sam_template_instance.iterate():\n # NOTE: Properties isn't required for SimpleTable, so we can't check\n # `not isinstance(sam_resources.get(\"Properties\"), dict)` as this would be a breaking change.\n # sam_resource.properties defaults to {} in SamTemplate init\n if not isinstance(sam_resource.properties, dict):\n raise InvalidDocumentException(\n [\n InvalidResourceException(\n resource_logical_id,\n \"All 'Resources' must be Objects and have a 'Properties' Object. If \"\n \"you're using YAML, this may be an indentation issue.\",\n )\n ]\n )\n\n # private methods\n def _validate(self, sam_template, parameter_values):\n \"\"\"Validates the template and parameter values and raises exceptions if there's an issue\n\n :param dict sam_template: SAM template\n :param dict parameter_values: Dictionary of parameter values provided by the user\n \"\"\"\n if parameter_values is None:\n raise ValueError(\"`parameter_values` argument is required\")\n\n Parser.validate_datatypes(sam_template)\n\n try:\n validator = SamTemplateValidator()\n validation_errors = validator.validate(sam_template)\n if validation_errors:\n LOG.warning(\"Template schema validation reported the following errors: %s\", validation_errors)\n except Exception as e:\n # Catching any exception and not re-raising to make sure any validation process won't break transform\n LOG.exception(\"Exception from SamTemplateValidator: %s\", e)\n", "path": "samtranslator/parser/parser.py"}, {"content": "from samtranslator.model import PropertyType, Resource\nfrom samtranslator.model.types import is_type, list_of, is_str\nfrom samtranslator.model.intrinsics import fnGetAtt, ref\n\n\nclass CognitoUserPool(Resource):\n resource_type = \"AWS::Cognito::UserPool\"\n property_types = {\n \"AccountRecoverySetting\": PropertyType(False, is_type(dict)),\n \"AdminCreateUserConfig\": PropertyType(False, is_type(dict)),\n \"AliasAttributes\": PropertyType(False, list_of(is_str())),\n \"AutoVerifiedAttributes\": PropertyType(False, list_of(is_str())),\n \"DeviceConfiguration\": PropertyType(False, is_type(dict)),\n \"EmailConfiguration\": PropertyType(False, is_type(dict)),\n \"EmailVerificationMessage\": PropertyType(False, is_str()),\n \"EmailVerificationSubject\": PropertyType(False, is_str()),\n \"EnabledMfas\": PropertyType(False, list_of(is_str())),\n \"LambdaConfig\": PropertyType(False, is_type(dict)),\n \"MfaConfiguration\": PropertyType(False, is_str()),\n \"Policies\": PropertyType(False, is_type(dict)),\n \"Schema\": PropertyType(False, list_of(dict)),\n \"SmsAuthenticationMessage\": PropertyType(False, is_str()),\n \"SmsConfiguration\": PropertyType(False, is_type(dict)),\n \"SmsVerificationMessage\": PropertyType(False, is_str()),\n 
\"UsernameAttributes\": PropertyType(False, list_of(is_str())),\n \"UsernameConfiguration\": PropertyType(False, is_type(dict)),\n \"UserPoolAddOns\": PropertyType(False, list_of(dict)),\n \"UserPoolName\": PropertyType(False, is_str()),\n \"UserPoolTags\": PropertyType(False, is_type(dict)),\n \"VerificationMessageTemplate\": PropertyType(False, is_type(dict)),\n }\n\n runtime_attrs = {\n \"name\": lambda self: ref(self.logical_id),\n \"arn\": lambda self: fnGetAtt(self.logical_id, \"Arn\"),\n \"provider_name\": lambda self: fnGetAtt(self.logical_id, \"ProviderName\"),\n \"provider_url\": lambda self: fnGetAtt(self.logical_id, \"ProviderURL\"),\n }\n", "path": "samtranslator/model/cognito.py"}]} | 2,645 | 270 |
gh_patches_debug_38651 | rasdani/github-patches | git_diff | conan-io__conan-center-index-505 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[package] msys2/20190524: PKG_CONFIG_PATH environment variable is not passed
The `PKG_CONFIG_PATH` environment variable is not passed to the msys2 environment.
This causes the `pkg_config` generator not to work.
Inside the msys2 shell, the `PKG_CONFIG_PATH` environment variable is always `/mingw64/lib/pkgconfig:/mingw64/share/pkgconfig`.
Is this a bug or am I missing something?
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **msys2/20190524**
* Operating System+version: **Windows 10**
* Conan version: **conan 1.21.0**
* Python version: **Python 3.8.0**
### Steps to reproduce (Include if Applicable)
In Windows 10, build the following recipe:
```
from conans import ConanFile, tools
import os
class DummyConan(ConanFile):
name = "dummy"
version = "0.1"
requires = ""
def build_requirements(self):
if tools.os_info.is_windows and not "CONAN_BASH_PATH" in os.environ:
self.build_requires("msys2/20190524")
# self.build_requires("msys2/20161025")
def build(self):
env = {
"PKG_CONFIG_PATH": "PKG_CONFIG_PATH from conan",
"DUMMY_ENV": "DUMMY_ENV from conan",
}
with tools.environment_append(env):
self.run("echo $PKG_CONFIG_PATH", win_bash=tools.os_info.is_windows)
self.run("echo $DUMMY_ENV", win_bash=tools.os_info.is_windows)
```
(the behavior is the same for `msys2/20161025`)
This prints ` /mingw64/lib/pkgconfig:/mingw64/share/pkgconfig` for `PKG_CONFIG_PATH`.
And `DUMMY_ENV from conan` for `DUMMY_ENV`.
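For context: the value printed above appears to come from msys2's own `/etc/profile`, which reassigns `PKG_CONFIG_PATH` when the login shell starts, so whatever Conan injected on the Windows side is discarded. Below is a minimal recipe-side sketch of a fix; `msys_dir` is an assumed variable pointing at the unpacked msys tree, and the accepted patch further down in this entry takes the same approach.

```python
import os
from conans import tools

# Sketch (untested here): make /etc/profile prepend any inherited value
# instead of overwriting it. `msys_dir` is assumed to be the unpacked
# msys64/msys32 directory inside the build folder.
profile = os.path.join(msys_dir, "etc", "profile")
tools.replace_in_file(profile,
                      'PKG_CONFIG_PATH="',
                      'PKG_CONFIG_PATH="$PKG_CONFIG_PATH:')
```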
### Logs (Include/Attach if Applicable)
<details><summary>Click to expand log</summary>
```
dummy/0.1: Calling build()
dummy/0.1: run_in_windows_bash: C:\.conan\1982c6\1\bin\usr\bin\bash.exe --login -c ^"cd \^"/c/users/maarten/.conan/data/dummy/0.1/_/_/build/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\^" ^&^& PATH=\^"/c/.conan/1982c6/1/bin/usr/bin:$PATH\^" ^&^& echo $PKG_CONFIG_PATH ^"
dummy/0.1:
----Running------
> C:\.conan\1982c6\1\bin\usr\bin\bash.exe --login -c ^"cd \^"/c/users/maarten/.conan/data/dummy/0.1/_/_/build/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\^" ^&^& PATH=\^"/c/.conan/1982c6/1/bin/usr/bin:$PATH\^" ^&^& echo $PKG_CONFIG_PATH ^"
-----------------
dummy/0.1: /mingw64/lib/pkgconfig:/mingw64/share/pkgconfig
dummy/0.1: run_in_windows_bash: C:\.conan\1982c6\1\bin\usr\bin\bash.exe --login -c ^"cd \^"/c/users/maarten/.conan/data/dummy/0.1/_/_/build/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\^" ^&^& PATH=\^"/c/.conan/1982c6/1/bin/usr/bin:$PATH\^" ^&^& echo $DUMMY_ENV ^"
dummy/0.1:
----Running------
> C:\.conan\1982c6\1\bin\usr\bin\bash.exe --login -c ^"cd \^"/c/users/maarten/.conan/data/dummy/0.1/_/_/build/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\^" ^&^& PATH=\^"/c/.conan/1982c6/1/bin/usr/bin:$PATH\^" ^&^& echo $DUMMY_ENV ^"
-----------------
dummy/0.1: DUMMY_ENV from conan
dummy/0.1: Package '5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9' built
dummy/0.1: Build folder C:\Users\maarten\.conan\data\dummy\0.1\_\_\build\5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9
dummy/0.1: Generated conaninfo.txt
dummy/0.1: Generated conanbuildinfo.txt
dummy/0.1: Generating the package
dummy/0.1: Package folder C:\Users\maarten\.conan\data\dummy\0.1\_\_\package\5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9
```
</details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/msys2/all/conanfile.py`
Content:
```
1 from conans import ConanFile, tools
2 from conans.errors import ConanInvalidConfiguration
3 import os
4 import shutil
5
6
7 class MSYS2Conan(ConanFile):
8 name = "msys2"
9 description = "MSYS2 is a software distro and building platform for Windows"
10 url = "https://github.com/conan-io/conan-center-index"
11 homepage = "http://www.msys2.org"
12 license = "MSYS license"
13 topics = ("conan", "msys", "unix", "subsystem")
14 build_requires = "7zip/19.00"
15 short_paths = True
16 options = {"exclude_files": "ANY", # Comma separated list of file patterns to exclude from the package
17 "packages": "ANY", # Comma separated
18 "additional_packages": "ANY"} # Comma separated
19 default_options = {"exclude_files": "*/link.exe",
20 "packages": "base-devel,binutils,gcc",
21 "additional_packages": None}
22 settings = "os_build", "arch_build"
23
24 def configure(self):
25 if self.settings.os_build != "Windows":
26 raise ConanInvalidConfiguration("Only Windows supported")
27
28 def source(self):
29 # build tools have to download files in build method when the
30 # source files downloaded will be different based on architecture or OS
31 pass
32
33 def _download(self, url, sha256):
34 from six.moves.urllib.parse import urlparse
35 filename = os.path.basename(urlparse(url).path)
36 tools.download(url, filename)
37 tools.check_sha256(filename, sha256)
38 return filename
39
40 def build(self):
41 arch = 0 if self.settings.arch_build == "x86" else 1 # index in the sources list
42 url = self.conan_data["sources"][self.version][arch]["url"]
43 sha256 = self.conan_data["sources"][self.version][arch]["sha256"]
44 filename = self._download(**self.conan_data["sources"][self.version][arch])
45 tar_name = filename.replace(".xz", "")
46 self.run("7z.exe x {0}".format(filename))
47 self.run("7z.exe x {0}".format(tar_name))
48 os.unlink(filename)
49 os.unlink(tar_name)
50
51 msys_dir = "msys64" if self.settings.arch_build == "x86_64" else "msys32"
52
53 packages = []
54 if self.options.packages:
55 packages.extend(str(self.options.packages).split(","))
56 if self.options.additional_packages:
57 packages.extend(str(self.options.additional_packages).split(","))
58
59 with tools.chdir(os.path.join(msys_dir, "usr", "bin")):
60 for package in packages:
61 self.run('bash -l -c "pacman -S %s --noconfirm"' % package)
62
63 # create /tmp dir in order to avoid
64 # bash.exe: warning: could not find /tmp, please create!
65 tmp_dir = os.path.join(msys_dir, 'tmp')
66 if not os.path.isdir(tmp_dir):
67 os.makedirs(tmp_dir)
68 tmp_name = os.path.join(tmp_dir, 'dummy')
69 with open(tmp_name, 'a'):
70 os.utime(tmp_name, None)
71
72 def package(self):
73 msys_dir = "msys64" if self.settings.arch_build == "x86_64" else "msys32"
74 excludes = None
75 if self.options.exclude_files:
76 excludes = tuple(str(self.options.exclude_files).split(","))
77 self.copy("*", dst="bin", src=msys_dir, excludes=excludes)
78 shutil.copytree(os.path.join(self.package_folder, "bin", "usr", "share", "licenses"),
79 os.path.join(self.package_folder, "licenses"))
80
81
82 def package_info(self):
83 msys_root = os.path.join(self.package_folder, "bin")
84 msys_bin = os.path.join(msys_root, "usr", "bin")
85
86 self.output.info("Creating MSYS_ROOT env var : %s" % msys_root)
87 self.env_info.MSYS_ROOT = msys_root
88
89 self.output.info("Creating MSYS_BIN env var : %s" % msys_bin)
90 self.env_info.MSYS_BIN = msys_bin
91
92 self.output.info("Appending PATH env var with : " + msys_bin)
93 self.env_info.path.append(msys_bin)
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/recipes/msys2/all/conanfile.py b/recipes/msys2/all/conanfile.py
--- a/recipes/msys2/all/conanfile.py
+++ b/recipes/msys2/all/conanfile.py
@@ -37,6 +37,10 @@
tools.check_sha256(filename, sha256)
return filename
+ @property
+ def _msys_dir(self):
+ return "msys64" if self.settings.arch_build == "x86_64" else "msys32"
+
def build(self):
arch = 0 if self.settings.arch_build == "x86" else 1 # index in the sources list
url = self.conan_data["sources"][self.version][arch]["url"]
@@ -48,33 +52,34 @@
os.unlink(filename)
os.unlink(tar_name)
- msys_dir = "msys64" if self.settings.arch_build == "x86_64" else "msys32"
-
packages = []
if self.options.packages:
packages.extend(str(self.options.packages).split(","))
if self.options.additional_packages:
packages.extend(str(self.options.additional_packages).split(","))
- with tools.chdir(os.path.join(msys_dir, "usr", "bin")):
+ with tools.chdir(os.path.join(self._msys_dir, "usr", "bin")):
for package in packages:
self.run('bash -l -c "pacman -S %s --noconfirm"' % package)
# create /tmp dir in order to avoid
# bash.exe: warning: could not find /tmp, please create!
- tmp_dir = os.path.join(msys_dir, 'tmp')
+ tmp_dir = os.path.join(self._msys_dir, 'tmp')
if not os.path.isdir(tmp_dir):
os.makedirs(tmp_dir)
tmp_name = os.path.join(tmp_dir, 'dummy')
with open(tmp_name, 'a'):
os.utime(tmp_name, None)
+ # Prepend the PKG_CONFIG_PATH environment variable with an eventual PKG_CONFIG_PATH environment variable
+ tools.replace_in_file(os.path.join(self._msys_dir, "etc", "profile"),
+ 'PKG_CONFIG_PATH="', 'PKG_CONFIG_PATH="$PKG_CONFIG_PATH:')
+
def package(self):
- msys_dir = "msys64" if self.settings.arch_build == "x86_64" else "msys32"
excludes = None
if self.options.exclude_files:
excludes = tuple(str(self.options.exclude_files).split(","))
- self.copy("*", dst="bin", src=msys_dir, excludes=excludes)
+ self.copy("*", dst="bin", src=self._msys_dir, excludes=excludes)
shutil.copytree(os.path.join(self.package_folder, "bin", "usr", "share", "licenses"),
os.path.join(self.package_folder, "licenses"))
| {"golden_diff": "diff --git a/recipes/msys2/all/conanfile.py b/recipes/msys2/all/conanfile.py\n--- a/recipes/msys2/all/conanfile.py\n+++ b/recipes/msys2/all/conanfile.py\n@@ -37,6 +37,10 @@\n tools.check_sha256(filename, sha256)\n return filename\n \n+ @property\n+ def _msys_dir(self):\n+ return \"msys64\" if self.settings.arch_build == \"x86_64\" else \"msys32\"\n+\n def build(self):\n arch = 0 if self.settings.arch_build == \"x86\" else 1 # index in the sources list\n url = self.conan_data[\"sources\"][self.version][arch][\"url\"]\n@@ -48,33 +52,34 @@\n os.unlink(filename)\n os.unlink(tar_name)\n \n- msys_dir = \"msys64\" if self.settings.arch_build == \"x86_64\" else \"msys32\"\n-\n packages = []\n if self.options.packages:\n packages.extend(str(self.options.packages).split(\",\"))\n if self.options.additional_packages:\n packages.extend(str(self.options.additional_packages).split(\",\"))\n \n- with tools.chdir(os.path.join(msys_dir, \"usr\", \"bin\")):\n+ with tools.chdir(os.path.join(self._msys_dir, \"usr\", \"bin\")):\n for package in packages:\n self.run('bash -l -c \"pacman -S %s --noconfirm\"' % package)\n \n # create /tmp dir in order to avoid\n # bash.exe: warning: could not find /tmp, please create!\n- tmp_dir = os.path.join(msys_dir, 'tmp')\n+ tmp_dir = os.path.join(self._msys_dir, 'tmp')\n if not os.path.isdir(tmp_dir):\n os.makedirs(tmp_dir)\n tmp_name = os.path.join(tmp_dir, 'dummy')\n with open(tmp_name, 'a'):\n os.utime(tmp_name, None)\n \n+ # Prepend the PKG_CONFIG_PATH environment variable with an eventual PKG_CONFIG_PATH environment variable\n+ tools.replace_in_file(os.path.join(self._msys_dir, \"etc\", \"profile\"),\n+ 'PKG_CONFIG_PATH=\"', 'PKG_CONFIG_PATH=\"$PKG_CONFIG_PATH:')\n+\n def package(self):\n- msys_dir = \"msys64\" if self.settings.arch_build == \"x86_64\" else \"msys32\"\n excludes = None\n if self.options.exclude_files:\n excludes = tuple(str(self.options.exclude_files).split(\",\"))\n- self.copy(\"*\", dst=\"bin\", src=msys_dir, excludes=excludes)\n+ self.copy(\"*\", dst=\"bin\", src=self._msys_dir, excludes=excludes)\n shutil.copytree(os.path.join(self.package_folder, \"bin\", \"usr\", \"share\", \"licenses\"),\n os.path.join(self.package_folder, \"licenses\"))\n", "issue": "[package] msys2/20190524: PKG_CONFIG_PATH environment variable is not passed\nThe `PKG_CONFIG_PATH` environment variable is not passed tot the msys2 environment.\r\nThis causes the `pkg_config` generator not to work.\r\n\r\nThe `PKG_CONFIG_PATH` environment variable is always `/mingw64/lib/pkgconfig:/mingw64/share/pkgconfig`\r\n\r\nIs this a bug or am I missing something?\r\n\r\n### Package and Environment Details (include every applicable attribute)\r\n * Package Name/Version: **msys2/20190524**\r\n * Operating System+version: **Windows 10**\r\n * Conan version: **conan 1.21.0**\r\n * Python version: **Python 3.8.0**\r\n\r\n### Steps to reproduce (Include if Applicable)\r\nIn Windows 10, build the following recipe:\r\n```\r\nfrom conans import ConanFile, tools\r\nimport os\r\n\r\n\r\nclass DummyConan(ConanFile):\r\n name = \"dummy\"\r\n version = \"0.1\"\r\n\r\n requires = \"\"\r\n\r\n def build_requirements(self):\r\n if tools.os_info.is_windows and not \"CONAN_BASH_PATH\" in os.environ:\r\n self.build_requires(\"msys2/20190524\")\r\n # self.build_requires(\"msys2/20161025\")\r\n\r\n def build(self):\r\n env = {\r\n \"PKG_CONFIG_PATH\": \"PKG_CONFIG_PATH from conan\",\r\n \"DUMMY_ENV\": \"DUMMY_ENV from conan\",\r\n }\r\n with tools.environment_append(env):\r\n self.run(\"echo 
$PKG_CONFIG_PATH\", win_bash=tools.os_info.is_windows)\r\n self.run(\"echo $DUMMY_ENV\", win_bash=tools.os_info.is_windows)\r\n```\r\n(the behavior is the same for `msys2/20161025`)\r\n\r\nThis prints ` /mingw64/lib/pkgconfig:/mingw64/share/pkgconfig` for `PKG_CONFIG_PATH`.\r\nAnd `DUMMY_ENV from conan` for `DUMMY_ENV`.\r\n\r\n\r\n### Logs (Include/Attach if Applicable)\r\n<details><summary>Click to expand log</summary>\r\n\r\n```\r\ndummy/0.1: Calling build()\r\ndummy/0.1: run_in_windows_bash: C:\\.conan\\1982c6\\1\\bin\\usr\\bin\\bash.exe --login -c ^\"cd \\^\"/c/users/maarten/.conan/data/dummy/0.1/_/_/build/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\\^\" ^&^& PATH=\\^\"/c/.conan/1982c6/1/bin/usr/bin:$PATH\\^\" ^&^& echo $PKG_CONFIG_PATH ^\"\r\ndummy/0.1:\r\n----Running------\r\n> C:\\.conan\\1982c6\\1\\bin\\usr\\bin\\bash.exe --login -c ^\"cd \\^\"/c/users/maarten/.conan/data/dummy/0.1/_/_/build/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\\^\" ^&^& PATH=\\^\"/c/.conan/1982c6/1/bin/usr/bin:$PATH\\^\" ^&^& echo $PKG_CONFIG_PATH ^\"\r\n-----------------\r\ndummy/0.1: /mingw64/lib/pkgconfig:/mingw64/share/pkgconfig\r\ndummy/0.1: run_in_windows_bash: C:\\.conan\\1982c6\\1\\bin\\usr\\bin\\bash.exe --login -c ^\"cd \\^\"/c/users/maarten/.conan/data/dummy/0.1/_/_/build/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\\^\" ^&^& PATH=\\^\"/c/.conan/1982c6/1/bin/usr/bin:$PATH\\^\" ^&^& echo $DUMMY_ENV ^\"\r\ndummy/0.1:\r\n----Running------\r\n> C:\\.conan\\1982c6\\1\\bin\\usr\\bin\\bash.exe --login -c ^\"cd \\^\"/c/users/maarten/.conan/data/dummy/0.1/_/_/build/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\\^\" ^&^& PATH=\\^\"/c/.conan/1982c6/1/bin/usr/bin:$PATH\\^\" ^&^& echo $DUMMY_ENV ^\"\r\n-----------------\r\ndummy/0.1: DUMMY_ENV from conan\r\ndummy/0.1: Package '5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9' built\r\ndummy/0.1: Build folder C:\\Users\\maarten\\.conan\\data\\dummy\\0.1\\_\\_\\build\\5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\r\ndummy/0.1: Generated conaninfo.txt\r\ndummy/0.1: Generated conanbuildinfo.txt\r\ndummy/0.1: Generating the package\r\ndummy/0.1: Package folder C:\\Users\\maarten\\.conan\\data\\dummy\\0.1\\_\\_\\package\\5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\r\n```\r\n\r\n</details>\r\n\n", "before_files": [{"content": "from conans import ConanFile, tools\nfrom conans.errors import ConanInvalidConfiguration\nimport os\nimport shutil\n\n\nclass MSYS2Conan(ConanFile):\n name = \"msys2\"\n description = \"MSYS2 is a software distro and building platform for Windows\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"http://www.msys2.org\"\n license = \"MSYS license\"\n topics = (\"conan\", \"msys\", \"unix\", \"subsystem\")\n build_requires = \"7zip/19.00\"\n short_paths = True\n options = {\"exclude_files\": \"ANY\", # Comma separated list of file patterns to exclude from the package\n \"packages\": \"ANY\", # Comma separated\n \"additional_packages\": \"ANY\"} # Comma separated\n default_options = {\"exclude_files\": \"*/link.exe\",\n \"packages\": \"base-devel,binutils,gcc\",\n \"additional_packages\": None}\n settings = \"os_build\", \"arch_build\"\n\n def configure(self):\n if self.settings.os_build != \"Windows\":\n raise ConanInvalidConfiguration(\"Only Windows supported\")\n\n def source(self):\n # build tools have to download files in build method when the\n # source files downloaded will be different based on architecture or OS\n pass\n\n def _download(self, url, sha256):\n from six.moves.urllib.parse import urlparse\n filename = 
os.path.basename(urlparse(url).path)\n tools.download(url, filename)\n tools.check_sha256(filename, sha256)\n return filename\n\n def build(self):\n arch = 0 if self.settings.arch_build == \"x86\" else 1 # index in the sources list\n url = self.conan_data[\"sources\"][self.version][arch][\"url\"]\n sha256 = self.conan_data[\"sources\"][self.version][arch][\"sha256\"]\n filename = self._download(**self.conan_data[\"sources\"][self.version][arch])\n tar_name = filename.replace(\".xz\", \"\")\n self.run(\"7z.exe x {0}\".format(filename))\n self.run(\"7z.exe x {0}\".format(tar_name))\n os.unlink(filename)\n os.unlink(tar_name)\n\n msys_dir = \"msys64\" if self.settings.arch_build == \"x86_64\" else \"msys32\"\n\n packages = []\n if self.options.packages:\n packages.extend(str(self.options.packages).split(\",\"))\n if self.options.additional_packages:\n packages.extend(str(self.options.additional_packages).split(\",\"))\n\n with tools.chdir(os.path.join(msys_dir, \"usr\", \"bin\")):\n for package in packages:\n self.run('bash -l -c \"pacman -S %s --noconfirm\"' % package)\n\n # create /tmp dir in order to avoid\n # bash.exe: warning: could not find /tmp, please create!\n tmp_dir = os.path.join(msys_dir, 'tmp')\n if not os.path.isdir(tmp_dir):\n os.makedirs(tmp_dir)\n tmp_name = os.path.join(tmp_dir, 'dummy')\n with open(tmp_name, 'a'):\n os.utime(tmp_name, None)\n\n def package(self):\n msys_dir = \"msys64\" if self.settings.arch_build == \"x86_64\" else \"msys32\"\n excludes = None\n if self.options.exclude_files:\n excludes = tuple(str(self.options.exclude_files).split(\",\"))\n self.copy(\"*\", dst=\"bin\", src=msys_dir, excludes=excludes)\n shutil.copytree(os.path.join(self.package_folder, \"bin\", \"usr\", \"share\", \"licenses\"),\n os.path.join(self.package_folder, \"licenses\"))\n\n\n def package_info(self):\n msys_root = os.path.join(self.package_folder, \"bin\")\n msys_bin = os.path.join(msys_root, \"usr\", \"bin\")\n\n self.output.info(\"Creating MSYS_ROOT env var : %s\" % msys_root)\n self.env_info.MSYS_ROOT = msys_root\n\n self.output.info(\"Creating MSYS_BIN env var : %s\" % msys_bin)\n self.env_info.MSYS_BIN = msys_bin\n\n self.output.info(\"Appending PATH env var with : \" + msys_bin)\n self.env_info.path.append(msys_bin)\n", "path": "recipes/msys2/all/conanfile.py"}], "after_files": [{"content": "from conans import ConanFile, tools\nfrom conans.errors import ConanInvalidConfiguration\nimport os\nimport shutil\n\n\nclass MSYS2Conan(ConanFile):\n name = \"msys2\"\n description = \"MSYS2 is a software distro and building platform for Windows\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"http://www.msys2.org\"\n license = \"MSYS license\"\n topics = (\"conan\", \"msys\", \"unix\", \"subsystem\")\n build_requires = \"7zip/19.00\"\n short_paths = True\n options = {\"exclude_files\": \"ANY\", # Comma separated list of file patterns to exclude from the package\n \"packages\": \"ANY\", # Comma separated\n \"additional_packages\": \"ANY\"} # Comma separated\n default_options = {\"exclude_files\": \"*/link.exe\",\n \"packages\": \"base-devel,binutils,gcc\",\n \"additional_packages\": None}\n settings = \"os_build\", \"arch_build\"\n\n def configure(self):\n if self.settings.os_build != \"Windows\":\n raise ConanInvalidConfiguration(\"Only Windows supported\")\n\n def source(self):\n # build tools have to download files in build method when the\n # source files downloaded will be different based on architecture or OS\n pass\n\n def _download(self, url, 
sha256):\n from six.moves.urllib.parse import urlparse\n filename = os.path.basename(urlparse(url).path)\n tools.download(url, filename)\n tools.check_sha256(filename, sha256)\n return filename\n\n @property\n def _msys_dir(self):\n return \"msys64\" if self.settings.arch_build == \"x86_64\" else \"msys32\"\n\n def build(self):\n arch = 0 if self.settings.arch_build == \"x86\" else 1 # index in the sources list\n url = self.conan_data[\"sources\"][self.version][arch][\"url\"]\n sha256 = self.conan_data[\"sources\"][self.version][arch][\"sha256\"]\n filename = self._download(**self.conan_data[\"sources\"][self.version][arch])\n tar_name = filename.replace(\".xz\", \"\")\n self.run(\"7z.exe x {0}\".format(filename))\n self.run(\"7z.exe x {0}\".format(tar_name))\n os.unlink(filename)\n os.unlink(tar_name)\n\n packages = []\n if self.options.packages:\n packages.extend(str(self.options.packages).split(\",\"))\n if self.options.additional_packages:\n packages.extend(str(self.options.additional_packages).split(\",\"))\n\n with tools.chdir(os.path.join(self._msys_dir, \"usr\", \"bin\")):\n for package in packages:\n self.run('bash -l -c \"pacman -S %s --noconfirm\"' % package)\n\n # create /tmp dir in order to avoid\n # bash.exe: warning: could not find /tmp, please create!\n tmp_dir = os.path.join(self._msys_dir, 'tmp')\n if not os.path.isdir(tmp_dir):\n os.makedirs(tmp_dir)\n tmp_name = os.path.join(tmp_dir, 'dummy')\n with open(tmp_name, 'a'):\n os.utime(tmp_name, None)\n\n # Prepend the PKG_CONFIG_PATH environment variable with an eventual PKG_CONFIG_PATH environment variable\n tools.replace_in_file(os.path.join(self._msys_dir, \"etc\", \"profile\"),\n 'PKG_CONFIG_PATH=\"', 'PKG_CONFIG_PATH=\"$PKG_CONFIG_PATH:')\n\n def package(self):\n excludes = None\n if self.options.exclude_files:\n excludes = tuple(str(self.options.exclude_files).split(\",\"))\n self.copy(\"*\", dst=\"bin\", src=self._msys_dir, excludes=excludes)\n shutil.copytree(os.path.join(self.package_folder, \"bin\", \"usr\", \"share\", \"licenses\"),\n os.path.join(self.package_folder, \"licenses\"))\n\n\n def package_info(self):\n msys_root = os.path.join(self.package_folder, \"bin\")\n msys_bin = os.path.join(msys_root, \"usr\", \"bin\")\n\n self.output.info(\"Creating MSYS_ROOT env var : %s\" % msys_root)\n self.env_info.MSYS_ROOT = msys_root\n\n self.output.info(\"Creating MSYS_BIN env var : %s\" % msys_bin)\n self.env_info.MSYS_BIN = msys_bin\n\n self.output.info(\"Appending PATH env var with : \" + msys_bin)\n self.env_info.path.append(msys_bin)\n", "path": "recipes/msys2/all/conanfile.py"}]} | 2,711 | 658 |
gh_patches_debug_26953 | rasdani/github-patches | git_diff | mdn__kuma-2072 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
restore django-debug-toolbar
We disabled django-debug-toolbar before we upgraded to django 1.4. Now that we're on it we should be able to restore it in `settings_local.py`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `puppet/files/vagrant/settings_local.py`
Content:
```
1 from settings import *
2 import logging
3
4 INTERNAL_IPS = ('127.0.0.1', '192.168.10.1',)
5
6 DEBUG = True
7 DEV = True
8 TEMPLATE_DEBUG = DEBUG
9 SERVE_MEDIA = DEBUG
10
11 SESSION_COOKIE_SECURE = True
12
13 DEMO_UPLOADS_ROOT = '/home/vagrant/uploads/demos'
14 DEMO_UPLOADS_URL = '/media/uploads/demos/'
15
16 PROD_DETAILS_DIR = '/home/vagrant/product_details_json'
17 MDC_PAGES_DIR = '/home/vagrant/mdc_pages'
18
19 GOOGLE_MAPS_API_KEY = "ABQIAAAANRj9BHQi5ireVluCwVy0yRSrufPN8BjQWjkoRva24PCQEXS2OhSXu2BEgUH5PmGOmW71r2-tEuOVuQ"
20
21 RECAPTCHA_USE_SSL = True
22 RECAPTCHA_PUBLIC_KEY = '6LdX8cISAAAAAA9HRXmzrcRSFsUoIK9u0nWpvGS_'
23 RECAPTCHA_PRIVATE_KEY = '6LdX8cISAAAAACkC1kqYmpeSf-1geTmLzrLnq0t6'
24
25 BITLY_USERNAME = 'lmorchard'
26 BITLY_API_KEY = "R_2653e6351e31d02988b3da31dac6e2c0"
27
28 EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
29 #EMAIL_BACKEND = 'django.core.mail.backends.filebased.EmailBackend'
30 #EMAIL_FILE_PATH = '/home/vagrant/logs/kuma-email.log'
31
32 # Uncomment to enable a real celery queue
33 CELERY_ALWAYS_EAGER = False
34
35 INSTALLED_APPS = INSTALLED_APPS + (
36 "django_extensions",
37 # TODO: re-enable after django 1.4
38 # "debug_toolbar",
39 "devserver",
40 )
41
42 MIDDLEWARE_CLASSES = (
43 # TODO: re-enable after django 1.4
44 # "debug_toolbar.middleware.DebugToolbarMiddleware",
45 ) + MIDDLEWARE_CLASSES
46
47 DEBUG_TOOLBAR_CONFIG = {
48 "INTERCEPT_REDIRECTS": False,
49 }
50
51 DEBUG_TOOLBAR_PANELS = (
52 'debug_toolbar.panels.version.VersionDebugPanel',
53 'debug_toolbar.panels.timer.TimerDebugPanel',
54 'debug_toolbar.panels.settings_vars.SettingsVarsDebugPanel',
55 'debug_toolbar.panels.headers.HeaderDebugPanel',
56 'debug_toolbar.panels.request_vars.RequestVarsDebugPanel',
57 'debug_toolbar.panels.template.TemplateDebugPanel',
58 #'cache_panel.CachePanel',
59 'debug_toolbar.panels.sql.SQLDebugPanel',
60 'debug_toolbar.panels.signals.SignalDebugPanel',
61 'debug_toolbar.panels.logger.LoggingPanel',
62 )
63
64 DEVSERVER_MODULES = (
65 # sql modules interfere with saving some KumaScript templates
66 #'devserver.modules.sql.SQLRealTimeModule',
67 #'devserver.modules.sql.SQLSummaryModule',
68 'devserver.modules.profile.ProfileSummaryModule',
69
70 # Modules not enabled by default
71 #'devserver.modules.ajax.AjaxDumpModule',
72 #'devserver.modules.profile.MemoryUseModule',
73 #'devserver.modules.cache.CacheSummaryModule',
74 #'devserver.modules.profile.LineProfilerModule',
75 )
76
77 # The default database should point to the master.
78 DATABASES = {
79 'default': {
80 'NAME': 'kuma',
81 'ENGINE': 'django.db.backends.mysql',
82 'HOST': 'localhost',
83 'USER': 'kuma',
84 'PASSWORD': 'kuma',
85 'OPTIONS': {'init_command': 'SET storage_engine=InnoDB'},
86 },
87 }
88
89 MIGRATION_DATABASES = {
90 'wikidb': {
91 'NAME': 'wikidb',
92 'ENGINE': 'django.db.backends.mysql',
93 'HOST': 'localhost',
94 'USER': 'wikiuser',
95 'PASSWORD': '2yeOr7ByBUMBiB4z',
96 },
97 }
98
99 CACHES = {
100 'default': {
101 # HACK: We currently have 'default' memcache disabled in production.
102 # This reflects that in local dev.
103 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
104 #'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
105 #'LOCATION': [
106 # '127.0.0.1:11211',
107 #],
108 'TIMEOUT': 3600,
109 'KEY_PREFIX': 'kuma',
110 },
111 'secondary': {
112 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
113 'LOCATION': [
114 '127.0.0.1:11211',
115 ],
116 'TIMEOUT': 3600,
117 'KEY_PREFIX': 'kuma',
118 }
119 }
120
121 # TODO: Switch this to 'default' when main cache issues are resolved
122 SECONDARY_CACHE_ALIAS = 'secondary'
123
124 # Use IP:PORT pairs separated by semicolons.
125 CACHE_BACKEND = 'memcached://localhost:11211?timeout=60'
126 CONSTANCE_DATABASE_CACHE_BACKEND = CACHE_BACKEND
127
128 # This is used to hash some things in Django.
129 SECRET_KEY = 'jenny8675309'
130
131 DEBUG_PROPAGATE_EXCEPTIONS = DEBUG
132
133 LOG_LEVEL = logging.DEBUG
134
135 SITE_URL = 'https://developer-local.allizom.org'
136 PROTOCOL = 'https://'
137 DOMAIN = 'developer-local.allizom.org'
138
139 # See: https://github.com/mozilla/django-browserid/issues/8 (TODO)
140 BROWSERID_DISABLE_CERT_CHECK = True
141 BROWSERID_CACERT_FILE = None
142
143 LOGIN_REDIRECT_URL = '/'
144 LOGIN_REDIRECT_URL_FAILURE = '/'
145
146 KUMASCRIPT_URL_TEMPLATE = 'http://localhost:9080/docs/{path}'
147
148 ATTACHMENT_HOST = 'mdn-local.mozillademos.org'
149
150 ES_DISABLED = False
151 ES_URLS = ['http://127.0.0.1:9200']
152 ES_INDEXES = {'default': 'main_index'}
153 ES_INDEX_PREFIX = 'mdn'
154 ES_LIVE_INDEX = True
155 ES_INDEXING_TIMEOUT = 30
156
157 # See https://mana.mozilla.org/wiki/display/websites/Developer+Cluster#DeveloperCluster-Sentry
158 SENTRY_DSN = ''
159
160 if SENTRY_DSN:
161 INSTALLED_APPS = INSTALLED_APPS + (
162 'raven.contrib.django.raven_compat',
163 )
164
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/puppet/files/vagrant/settings_local.py b/puppet/files/vagrant/settings_local.py
--- a/puppet/files/vagrant/settings_local.py
+++ b/puppet/files/vagrant/settings_local.py
@@ -34,31 +34,30 @@
INSTALLED_APPS = INSTALLED_APPS + (
"django_extensions",
- # TODO: re-enable after django 1.4
- # "debug_toolbar",
+ "debug_toolbar",
"devserver",
)
-MIDDLEWARE_CLASSES = (
- # TODO: re-enable after django 1.4
- # "debug_toolbar.middleware.DebugToolbarMiddleware",
-) + MIDDLEWARE_CLASSES
+JINGO_EXCLUDE_APPS = JINGO_EXCLUDE_APPS + (
+ 'debug_toolbar',
+)
DEBUG_TOOLBAR_CONFIG = {
"INTERCEPT_REDIRECTS": False,
}
DEBUG_TOOLBAR_PANELS = (
- 'debug_toolbar.panels.version.VersionDebugPanel',
- 'debug_toolbar.panels.timer.TimerDebugPanel',
- 'debug_toolbar.panels.settings_vars.SettingsVarsDebugPanel',
- 'debug_toolbar.panels.headers.HeaderDebugPanel',
- 'debug_toolbar.panels.request_vars.RequestVarsDebugPanel',
- 'debug_toolbar.panels.template.TemplateDebugPanel',
- #'cache_panel.CachePanel',
- 'debug_toolbar.panels.sql.SQLDebugPanel',
- 'debug_toolbar.panels.signals.SignalDebugPanel',
- 'debug_toolbar.panels.logger.LoggingPanel',
+ 'debug_toolbar.panels.versions.VersionsPanel',
+ 'debug_toolbar.panels.timer.TimerPanel',
+ 'debug_toolbar.panels.settings.SettingsPanel',
+ 'debug_toolbar.panels.headers.HeadersPanel',
+ 'debug_toolbar.panels.request.RequestPanel',
+ 'debug_toolbar.panels.templates.TemplatesPanel',
+ 'debug_toolbar.panels.cache.CachePanel',
+ 'debug_toolbar.panels.sql.SQLPanel',
+ 'debug_toolbar.panels.signals.SignalsPanel',
+ 'debug_toolbar.panels.logging.LoggingPanel',
+ 'debug_toolbar.panels.redirects.RedirectsPanel',
)
DEVSERVER_MODULES = (
| {"golden_diff": "diff --git a/puppet/files/vagrant/settings_local.py b/puppet/files/vagrant/settings_local.py\n--- a/puppet/files/vagrant/settings_local.py\n+++ b/puppet/files/vagrant/settings_local.py\n@@ -34,31 +34,30 @@\n \n INSTALLED_APPS = INSTALLED_APPS + (\n \"django_extensions\",\n- # TODO: re-enable after django 1.4\n- # \"debug_toolbar\",\n+ \"debug_toolbar\",\n \"devserver\",\n )\n \n-MIDDLEWARE_CLASSES = (\n- # TODO: re-enable after django 1.4\n- # \"debug_toolbar.middleware.DebugToolbarMiddleware\",\n-) + MIDDLEWARE_CLASSES\n+JINGO_EXCLUDE_APPS = JINGO_EXCLUDE_APPS + (\n+ 'debug_toolbar',\n+)\n \n DEBUG_TOOLBAR_CONFIG = {\n \"INTERCEPT_REDIRECTS\": False,\n }\n \n DEBUG_TOOLBAR_PANELS = (\n- 'debug_toolbar.panels.version.VersionDebugPanel',\n- 'debug_toolbar.panels.timer.TimerDebugPanel',\n- 'debug_toolbar.panels.settings_vars.SettingsVarsDebugPanel',\n- 'debug_toolbar.panels.headers.HeaderDebugPanel',\n- 'debug_toolbar.panels.request_vars.RequestVarsDebugPanel',\n- 'debug_toolbar.panels.template.TemplateDebugPanel',\n- #'cache_panel.CachePanel',\n- 'debug_toolbar.panels.sql.SQLDebugPanel',\n- 'debug_toolbar.panels.signals.SignalDebugPanel',\n- 'debug_toolbar.panels.logger.LoggingPanel',\n+ 'debug_toolbar.panels.versions.VersionsPanel',\n+ 'debug_toolbar.panels.timer.TimerPanel',\n+ 'debug_toolbar.panels.settings.SettingsPanel',\n+ 'debug_toolbar.panels.headers.HeadersPanel',\n+ 'debug_toolbar.panels.request.RequestPanel',\n+ 'debug_toolbar.panels.templates.TemplatesPanel',\n+ 'debug_toolbar.panels.cache.CachePanel',\n+ 'debug_toolbar.panels.sql.SQLPanel',\n+ 'debug_toolbar.panels.signals.SignalsPanel',\n+ 'debug_toolbar.panels.logging.LoggingPanel',\n+ 'debug_toolbar.panels.redirects.RedirectsPanel',\n )\n \n DEVSERVER_MODULES = (\n", "issue": "restore django-debug-toolbar\nWe disabled django-debug-toolbar before we upgraded to django 1.4. Now that we're on it we should be able to restore it in `settings_local.py`.\n\nrestore django-debug-toolbar\nWe disabled django-debug-toolbar before we upgraded to django 1.4. 
Now that we're on it we should be able to restore it in `settings_local.py`.\n\n", "before_files": [{"content": "from settings import *\nimport logging\n\nINTERNAL_IPS = ('127.0.0.1', '192.168.10.1',)\n\nDEBUG = True\nDEV = True\nTEMPLATE_DEBUG = DEBUG\nSERVE_MEDIA = DEBUG\n\nSESSION_COOKIE_SECURE = True\n\nDEMO_UPLOADS_ROOT = '/home/vagrant/uploads/demos'\nDEMO_UPLOADS_URL = '/media/uploads/demos/'\n\nPROD_DETAILS_DIR = '/home/vagrant/product_details_json'\nMDC_PAGES_DIR = '/home/vagrant/mdc_pages'\n\nGOOGLE_MAPS_API_KEY = \"ABQIAAAANRj9BHQi5ireVluCwVy0yRSrufPN8BjQWjkoRva24PCQEXS2OhSXu2BEgUH5PmGOmW71r2-tEuOVuQ\"\n\nRECAPTCHA_USE_SSL = True\nRECAPTCHA_PUBLIC_KEY = '6LdX8cISAAAAAA9HRXmzrcRSFsUoIK9u0nWpvGS_'\nRECAPTCHA_PRIVATE_KEY = '6LdX8cISAAAAACkC1kqYmpeSf-1geTmLzrLnq0t6'\n\nBITLY_USERNAME = 'lmorchard'\nBITLY_API_KEY = \"R_2653e6351e31d02988b3da31dac6e2c0\"\n\nEMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'\n#EMAIL_BACKEND = 'django.core.mail.backends.filebased.EmailBackend'\n#EMAIL_FILE_PATH = '/home/vagrant/logs/kuma-email.log'\n\n# Uncomment to enable a real celery queue\nCELERY_ALWAYS_EAGER = False\n\nINSTALLED_APPS = INSTALLED_APPS + (\n \"django_extensions\",\n # TODO: re-enable after django 1.4\n # \"debug_toolbar\",\n \"devserver\",\n)\n\nMIDDLEWARE_CLASSES = (\n # TODO: re-enable after django 1.4\n # \"debug_toolbar.middleware.DebugToolbarMiddleware\",\n) + MIDDLEWARE_CLASSES\n\nDEBUG_TOOLBAR_CONFIG = {\n \"INTERCEPT_REDIRECTS\": False,\n}\n\nDEBUG_TOOLBAR_PANELS = (\n 'debug_toolbar.panels.version.VersionDebugPanel',\n 'debug_toolbar.panels.timer.TimerDebugPanel',\n 'debug_toolbar.panels.settings_vars.SettingsVarsDebugPanel',\n 'debug_toolbar.panels.headers.HeaderDebugPanel',\n 'debug_toolbar.panels.request_vars.RequestVarsDebugPanel',\n 'debug_toolbar.panels.template.TemplateDebugPanel',\n #'cache_panel.CachePanel',\n 'debug_toolbar.panels.sql.SQLDebugPanel',\n 'debug_toolbar.panels.signals.SignalDebugPanel',\n 'debug_toolbar.panels.logger.LoggingPanel',\n)\n\nDEVSERVER_MODULES = (\n # sql modules interfere with saving some KumaScript templates\n #'devserver.modules.sql.SQLRealTimeModule',\n #'devserver.modules.sql.SQLSummaryModule',\n 'devserver.modules.profile.ProfileSummaryModule',\n\n # Modules not enabled by default\n #'devserver.modules.ajax.AjaxDumpModule',\n #'devserver.modules.profile.MemoryUseModule',\n #'devserver.modules.cache.CacheSummaryModule',\n #'devserver.modules.profile.LineProfilerModule',\n)\n\n# The default database should point to the master.\nDATABASES = {\n 'default': {\n 'NAME': 'kuma',\n 'ENGINE': 'django.db.backends.mysql',\n 'HOST': 'localhost',\n 'USER': 'kuma',\n 'PASSWORD': 'kuma',\n 'OPTIONS': {'init_command': 'SET storage_engine=InnoDB'},\n },\n}\n\nMIGRATION_DATABASES = {\n 'wikidb': {\n 'NAME': 'wikidb',\n 'ENGINE': 'django.db.backends.mysql',\n 'HOST': 'localhost',\n 'USER': 'wikiuser',\n 'PASSWORD': '2yeOr7ByBUMBiB4z',\n },\n}\n\nCACHES = {\n 'default': {\n # HACK: We currently have 'default' memcache disabled in production.\n # This reflects that in local dev.\n 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',\n #'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',\n #'LOCATION': [\n # '127.0.0.1:11211',\n #],\n 'TIMEOUT': 3600,\n 'KEY_PREFIX': 'kuma',\n },\n 'secondary': {\n 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',\n 'LOCATION': [\n '127.0.0.1:11211',\n ],\n 'TIMEOUT': 3600,\n 'KEY_PREFIX': 'kuma',\n }\n}\n\n# TODO: Switch this to 'default' when main cache issues are 
resolved\nSECONDARY_CACHE_ALIAS = 'secondary'\n\n# Use IP:PORT pairs separated by semicolons.\nCACHE_BACKEND = 'memcached://localhost:11211?timeout=60'\nCONSTANCE_DATABASE_CACHE_BACKEND = CACHE_BACKEND\n\n# This is used to hash some things in Django.\nSECRET_KEY = 'jenny8675309'\n\nDEBUG_PROPAGATE_EXCEPTIONS = DEBUG\n\nLOG_LEVEL = logging.DEBUG\n\nSITE_URL = 'https://developer-local.allizom.org'\nPROTOCOL = 'https://'\nDOMAIN = 'developer-local.allizom.org'\n\n# See: https://github.com/mozilla/django-browserid/issues/8 (TODO)\nBROWSERID_DISABLE_CERT_CHECK = True\nBROWSERID_CACERT_FILE = None\n\nLOGIN_REDIRECT_URL = '/'\nLOGIN_REDIRECT_URL_FAILURE = '/'\n\nKUMASCRIPT_URL_TEMPLATE = 'http://localhost:9080/docs/{path}'\n\nATTACHMENT_HOST = 'mdn-local.mozillademos.org'\n\nES_DISABLED = False\nES_URLS = ['http://127.0.0.1:9200']\nES_INDEXES = {'default': 'main_index'}\nES_INDEX_PREFIX = 'mdn'\nES_LIVE_INDEX = True\nES_INDEXING_TIMEOUT = 30\n\n# See https://mana.mozilla.org/wiki/display/websites/Developer+Cluster#DeveloperCluster-Sentry\nSENTRY_DSN = ''\n\nif SENTRY_DSN:\n INSTALLED_APPS = INSTALLED_APPS + (\n 'raven.contrib.django.raven_compat',\n )\n", "path": "puppet/files/vagrant/settings_local.py"}], "after_files": [{"content": "from settings import *\nimport logging\n\nINTERNAL_IPS = ('127.0.0.1', '192.168.10.1',)\n\nDEBUG = True\nDEV = True\nTEMPLATE_DEBUG = DEBUG\nSERVE_MEDIA = DEBUG\n\nSESSION_COOKIE_SECURE = True\n\nDEMO_UPLOADS_ROOT = '/home/vagrant/uploads/demos'\nDEMO_UPLOADS_URL = '/media/uploads/demos/'\n\nPROD_DETAILS_DIR = '/home/vagrant/product_details_json'\nMDC_PAGES_DIR = '/home/vagrant/mdc_pages'\n\nGOOGLE_MAPS_API_KEY = \"ABQIAAAANRj9BHQi5ireVluCwVy0yRSrufPN8BjQWjkoRva24PCQEXS2OhSXu2BEgUH5PmGOmW71r2-tEuOVuQ\"\n\nRECAPTCHA_USE_SSL = True\nRECAPTCHA_PUBLIC_KEY = '6LdX8cISAAAAAA9HRXmzrcRSFsUoIK9u0nWpvGS_'\nRECAPTCHA_PRIVATE_KEY = '6LdX8cISAAAAACkC1kqYmpeSf-1geTmLzrLnq0t6'\n\nBITLY_USERNAME = 'lmorchard'\nBITLY_API_KEY = \"R_2653e6351e31d02988b3da31dac6e2c0\"\n\nEMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'\n#EMAIL_BACKEND = 'django.core.mail.backends.filebased.EmailBackend'\n#EMAIL_FILE_PATH = '/home/vagrant/logs/kuma-email.log'\n\n# Uncomment to enable a real celery queue\nCELERY_ALWAYS_EAGER = False\n\nINSTALLED_APPS = INSTALLED_APPS + (\n \"django_extensions\",\n \"debug_toolbar\",\n \"devserver\",\n)\n\nJINGO_EXCLUDE_APPS = JINGO_EXCLUDE_APPS + (\n 'debug_toolbar',\n)\n\nDEBUG_TOOLBAR_CONFIG = {\n \"INTERCEPT_REDIRECTS\": False,\n}\n\nDEBUG_TOOLBAR_PANELS = (\n 'debug_toolbar.panels.versions.VersionsPanel',\n 'debug_toolbar.panels.timer.TimerPanel',\n 'debug_toolbar.panels.settings.SettingsPanel',\n 'debug_toolbar.panels.headers.HeadersPanel',\n 'debug_toolbar.panels.request.RequestPanel',\n 'debug_toolbar.panels.templates.TemplatesPanel',\n 'debug_toolbar.panels.cache.CachePanel',\n 'debug_toolbar.panels.sql.SQLPanel',\n 'debug_toolbar.panels.signals.SignalsPanel',\n 'debug_toolbar.panels.logging.LoggingPanel',\n 'debug_toolbar.panels.redirects.RedirectsPanel',\n)\n\nDEVSERVER_MODULES = (\n # sql modules interfere with saving some KumaScript templates\n #'devserver.modules.sql.SQLRealTimeModule',\n #'devserver.modules.sql.SQLSummaryModule',\n 'devserver.modules.profile.ProfileSummaryModule',\n\n # Modules not enabled by default\n #'devserver.modules.ajax.AjaxDumpModule',\n #'devserver.modules.profile.MemoryUseModule',\n #'devserver.modules.cache.CacheSummaryModule',\n #'devserver.modules.profile.LineProfilerModule',\n)\n\n# The default database should 
point to the master.\nDATABASES = {\n 'default': {\n 'NAME': 'kuma',\n 'ENGINE': 'django.db.backends.mysql',\n 'HOST': 'localhost',\n 'USER': 'kuma',\n 'PASSWORD': 'kuma',\n 'OPTIONS': {'init_command': 'SET storage_engine=InnoDB'},\n },\n}\n\nMIGRATION_DATABASES = {\n 'wikidb': {\n 'NAME': 'wikidb',\n 'ENGINE': 'django.db.backends.mysql',\n 'HOST': 'localhost',\n 'USER': 'wikiuser',\n 'PASSWORD': '2yeOr7ByBUMBiB4z',\n },\n}\n\nCACHES = {\n 'default': {\n # HACK: We currently have 'default' memcache disabled in production.\n # This reflects that in local dev.\n 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',\n #'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',\n #'LOCATION': [\n # '127.0.0.1:11211',\n #],\n 'TIMEOUT': 3600,\n 'KEY_PREFIX': 'kuma',\n },\n 'secondary': {\n 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',\n 'LOCATION': [\n '127.0.0.1:11211',\n ],\n 'TIMEOUT': 3600,\n 'KEY_PREFIX': 'kuma',\n }\n}\n\n# TODO: Switch this to 'default' when main cache issues are resolved\nSECONDARY_CACHE_ALIAS = 'secondary'\n\n# Use IP:PORT pairs separated by semicolons.\nCACHE_BACKEND = 'memcached://localhost:11211?timeout=60'\nCONSTANCE_DATABASE_CACHE_BACKEND = CACHE_BACKEND\n\n# This is used to hash some things in Django.\nSECRET_KEY = 'jenny8675309'\n\nDEBUG_PROPAGATE_EXCEPTIONS = DEBUG\n\nLOG_LEVEL = logging.DEBUG\n\nSITE_URL = 'https://developer-local.allizom.org'\nPROTOCOL = 'https://'\nDOMAIN = 'developer-local.allizom.org'\n\n# See: https://github.com/mozilla/django-browserid/issues/8 (TODO)\nBROWSERID_DISABLE_CERT_CHECK = True\nBROWSERID_CACERT_FILE = None\n\nLOGIN_REDIRECT_URL = '/'\nLOGIN_REDIRECT_URL_FAILURE = '/'\n\nKUMASCRIPT_URL_TEMPLATE = 'http://localhost:9080/docs/{path}'\n\nATTACHMENT_HOST = 'mdn-local.mozillademos.org'\n\nES_DISABLED = False\nES_URLS = ['http://127.0.0.1:9200']\nES_INDEXES = {'default': 'main_index'}\nES_INDEX_PREFIX = 'mdn'\nES_LIVE_INDEX = True\nES_INDEXING_TIMEOUT = 30\n\n# See https://mana.mozilla.org/wiki/display/websites/Developer+Cluster#DeveloperCluster-Sentry\nSENTRY_DSN = ''\n\nif SENTRY_DSN:\n INSTALLED_APPS = INSTALLED_APPS + (\n 'raven.contrib.django.raven_compat',\n )\n", "path": "puppet/files/vagrant/settings_local.py"}]} | 2,111 | 447 |
gh_patches_debug_22363 | rasdani/github-patches | git_diff | Mailu__Mailu-2791 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mailu front fails with KeyError: 'LD_PRELOAD'
<!--
Thank you for opening an issue with Mailu. Please understand that issues are meant for bugs and enhancement-requests.
For **user-support questions**, reach out to us on [matrix](https://matrix.to/#/#mailu:tedomum.net).
To be able to help you best, we need some more information.
Before you open your issue
- Check if no issue or pull-request for this already exists.
- Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html). (Tip, use the search function on the documentation page)
- You understand `Mailu` is made by volunteers in their **free time** — be concise, civil and accept that delays can occur.
- The title of the issue should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.
Please put your text outside of the comment blocks to be visible. You can use the button "Preview" above to check.
-->
## Environment & Version
### Environment
- [X] docker compose
- [ ] kubernetes
- [ ] docker swarm
### Version
- Version: `2.0`
<!--
To find your version, get the image name of a mailu container and read the version from the tag (example for version 1.7).
$> docker ps -a | grep mailu
140b09d4b09c mailu/roundcube:1.7 "docker-php-entrypoi…" 2 weeks ago Up 2 days (healthy) 80/tcp
$> grep MAILU_VERSION docker-compose.yml mailu.env
-->
## Description
<!--
Further explain the bug in a few words. It should be clear what the unexpected behaviour is. Share it in an easy-to-understand language.
-->
Pulled the image today to create a new server. The nginx front container fails with the following error.
## Replication Steps
<!--
Steps for replicating your issue
-->
* docker-compose up -d
* docker shows unhealthy front container
* docker logs mailu_front_1
## Observed behaviour
<!--
Explain or paste the result you received.
-->
## Expected behaviour
<!--
Explain what results you expected - be as specific as possible.
Just saying "it doesn’t work as expected" is not useful. It's also helpful to describe what you actually experienced.
-->
## Logs
<!--
Often it is very useful to include log fragments of the involved component.
You can get the logs via `docker logs <container name> --tail 1000`.
For example for the admin container: `docker logs mailu_admin_1 --tail 1000`
or using docker compose `docker compose -f /mailu/docker-compose.yml logs --tail 1000 admin`
If you can find the relevant section, please share only the parts that seem relevant. If you have any logs, please enclose them in code tags, like so:
-->
```
# docker logs mailu_front_1
Traceback (most recent call last):
File "/config.py", line 8, in <module>
args = system.set_env()
File "/app/venv/lib/python3.10/site-packages/socrate/system.py", line 80, in set_env
del os.environ['LD_PRELOAD']
File "/usr/lib/python3.10/os.py", line 696, in __delitem__
raise KeyError(key) from None
KeyError: 'LD_PRELOAD'
```
--- END ISSUE ---
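The traceback above already localizes the problem: `del os.environ['LD_PRELOAD']` in `socrate/system.py` raises `KeyError` whenever the variable was never set for the container. A guarded removal avoids the crash; the snippet below is a minimal sketch of that idea, not necessarily the exact patch that was merged.

```python
import os

# Guarded removal: pop() with a default never raises, unlike `del`,
# so the hardened_malloc preload is only dropped when it was actually set.
os.environ.pop('LD_PRELOAD', None)
```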
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/base/libs/socrate/socrate/system.py`
Content:
```
1 import hmac
2 import logging as log
3 import os
4 import sys
5 import re
6 from pwd import getpwnam
7 import socket
8 import tenacity
9
10 @tenacity.retry(stop=tenacity.stop_after_attempt(100),
11 wait=tenacity.wait_random(min=2, max=5))
12 def resolve_hostname(hostname):
13 """ This function uses system DNS to resolve a hostname.
14 It is capable of retrying in case the host is not immediately available
15 """
16 try:
17 return sorted(socket.getaddrinfo(hostname, None, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, socket.AI_PASSIVE), key=lambda s:s[0])[0][4][0]
18 except Exception as e:
19 log.warn("Unable to lookup '%s': %s",hostname,e)
20 raise e
21
22 def _coerce_value(value):
23 if isinstance(value, str) and value.lower() in ('true','yes'):
24 return True
25 elif isinstance(value, str) and value.lower() in ('false', 'no'):
26 return False
27 return value
28
29 class LogFilter(object):
30 def __init__(self, stream, re_patterns, log_file):
31 self.stream = stream
32 if isinstance(re_patterns, list):
33 self.pattern = re.compile('|'.join([f'(?:{pattern})' for pattern in re_patterns]))
34 elif isinstance(re_patterns, str):
35 self.pattern = re.compile(re_patterns)
36 else:
37 self.pattern = re_patterns
38 self.found = False
39 self.log_file = log_file
40
41 def __getattr__(self, attr_name):
42 return getattr(self.stream, attr_name)
43
44 def write(self, data):
45 if data == '\n' and self.found:
46 self.found = False
47 else:
48 if not self.pattern.search(data):
49 self.stream.write(data)
50 self.stream.flush()
51 if self.log_file:
52 try:
53 with open(self.log_file, 'a', encoding='utf-8') as l:
54 l.write(data)
55 except:
56 pass
57 else:
58 # caught bad pattern
59 self.found = True
60
61 def flush(self):
62 self.stream.flush()
63
64 def _is_compatible_with_hardened_malloc():
65 with open('/proc/cpuinfo', 'r') as f:
66 lines = f.readlines()
67 for line in lines:
68 # See #2764, we need vmovdqu
69 if line.startswith('flags') and ' avx ' not in line:
70 return False
71 return True
72
73 def set_env(required_secrets=[], log_filters=[], log_file=None):
74 if log_filters:
75 sys.stdout = LogFilter(sys.stdout, log_filters, log_file)
76 sys.stderr = LogFilter(sys.stderr, log_filters, log_file)
77 log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", 'WARNING'))
78
79 if not _is_compatible_with_hardened_malloc():
80 del os.environ['LD_PRELOAD']
81
82 """ This will set all the environment variables and retains only the secrets we need """
83 if 'SECRET_KEY_FILE' in os.environ:
84 try:
85 secret_key = open(os.environ.get("SECRET_KEY_FILE"), "r").read().strip()
86 except Exception as exc:
87 log.error(f"Can't read SECRET_KEY from file: {exc}")
88 raise exc
89 else:
90 secret_key = os.environ.get('SECRET_KEY')
91 clean_env()
92 # derive the keys we need
93 for secret in required_secrets:
94 os.environ[f'{secret}_KEY'] = hmac.new(bytearray(secret_key, 'utf-8'), bytearray(secret, 'utf-8'), 'sha256').hexdigest()
95
96 return {
97 key: _coerce_value(os.environ.get(key, value))
98 for key, value in os.environ.items()
99 }
100
101 def clean_env():
102 """ remove all secret keys """
103 [os.environ.pop(key, None) for key in os.environ.keys() if key.endswith("_KEY")]
104
105 def drop_privs_to(username='mailu'):
106 pwnam = getpwnam(username)
107 os.setgroups([])
108 os.setgid(pwnam.pw_gid)
109 os.setuid(pwnam.pw_uid)
110 os.environ['HOME'] = pwnam.pw_dir
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/base/libs/socrate/socrate/system.py b/core/base/libs/socrate/socrate/system.py
--- a/core/base/libs/socrate/socrate/system.py
+++ b/core/base/libs/socrate/socrate/system.py
@@ -68,6 +68,9 @@
# See #2764, we need vmovdqu
if line.startswith('flags') and ' avx ' not in line:
return False
+ # See #2541
+ if line.startswith('Features') and ' lrcpc ' not in line:
+ return False
return True
def set_env(required_secrets=[], log_filters=[], log_file=None):
@@ -76,7 +79,8 @@
sys.stderr = LogFilter(sys.stderr, log_filters, log_file)
log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", 'WARNING'))
- if not _is_compatible_with_hardened_malloc():
+ if 'LD_PRELOAD' in os.environ and not _is_compatible_with_hardened_malloc():
+ log.warning('Disabling hardened-malloc on this CPU')
del os.environ['LD_PRELOAD']
""" This will set all the environment variables and retains only the secrets we need """
| {"golden_diff": "diff --git a/core/base/libs/socrate/socrate/system.py b/core/base/libs/socrate/socrate/system.py\n--- a/core/base/libs/socrate/socrate/system.py\n+++ b/core/base/libs/socrate/socrate/system.py\n@@ -68,6 +68,9 @@\n # See #2764, we need vmovdqu\n if line.startswith('flags') and ' avx ' not in line:\n return False\n+ # See #2541\n+ if line.startswith('Features') and ' lrcpc ' not in line:\n+ return False\n return True\n \n def set_env(required_secrets=[], log_filters=[], log_file=None):\n@@ -76,7 +79,8 @@\n sys.stderr = LogFilter(sys.stderr, log_filters, log_file)\n log.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", 'WARNING'))\n \n- if not _is_compatible_with_hardened_malloc():\n+ if 'LD_PRELOAD' in os.environ and not _is_compatible_with_hardened_malloc():\n+ log.warning('Disabling hardened-malloc on this CPU')\n del os.environ['LD_PRELOAD']\n \n \"\"\" This will set all the environment variables and retains only the secrets we need \"\"\"\n", "issue": "mailu front fails with KeyError: 'LD_PRELOAD'\n<!--\r\n\r\nThank you for opening an issue with Mailu. Please understand that issues are meant for bugs and enhancement-requests.\r\nFor **user-support questions**, reach out to us on [matrix](https://matrix.to/#/#mailu:tedomum.net).\r\n\r\nTo be able to help you best, we need some more information.\r\n\r\nBefore you open your issue\r\n- Check if no issue or pull-request for this already exists.\r\n- Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html). (Tip, use the search function on the documentation page)\r\n- You understand `Mailu` is made by volunteers in their **free time** \u2014 be concise, civil and accept that delays can occur.\r\n- The title of the issue should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.\r\n\r\nPlease put your text outside of the comment blocks to be visible. You can use the button \"Preview\" above to check.\r\n\r\n-->\r\n\r\n## Environment & Version\r\n\r\n### Environment\r\n\r\n- [X] docker compose\r\n- [ ] kubernetes\r\n- [ ] docker swarm\r\n\r\n### Version\r\n\r\n- Version: `2.0`\r\n\r\n<!--\r\nTo find your version, get the image name of a mailu container and read the version from the tag (example for version 1.7).\r\n\r\n$> docker ps -a | grep mailu\r\n140b09d4b09c mailu/roundcube:1.7 \"docker-php-entrypoi\u2026\" 2 weeks ago Up 2 days (healthy) 80/tcp\r\n$> grep MAILU_VERSION docker-compose.yml mailu.env\r\n-->\r\n\r\n## Description\r\n<!--\r\nFurther explain the bug in a few words. It should be clear what the unexpected behaviour is. Share it in an easy-to-understand language.\r\n-->\r\nPulled the image today to create a new server. The nginx fails with the following error.\r\n\r\n## Replication Steps\r\n<!--\r\nSteps for replicating your issue\r\n-->\r\n* docker-compose up -d\r\n* docker shows unhealthy front container\r\n* docker logs mailu_front_1\r\n\r\n## Observed behaviour\r\n<!--\r\nExplain or paste the result you received.\r\n-->\r\n\r\n## Expected behaviour\r\n<!--\r\nExplain what results you expected - be as specific as possible.\r\nJust saying \"it doesn\u2019t work as expected\" is not useful. 
It's also helpful to describe what you actually experienced.\r\n-->\r\n\r\n## Logs\r\n<!--\r\nOften it is very useful to include log fragments of the involved component.\r\nYou can get the logs via `docker logs <container name> --tail 1000`.\r\nFor example for the admin container: `docker logs mailu_admin_1 --tail 1000`\r\nor using docker compose `docker compose -f /mailu/docker-compose.yml logs --tail 1000 admin`\r\n\r\nIf you can find the relevant section, please share only the parts that seem relevant. If you have any logs, please enclose them in code tags, like so:\r\n\r\n-->\r\n\r\n\r\n```\r\n# docker logs mailu_front_1\r\nTraceback (most recent call last):\r\n File \"/config.py\", line 8, in <module>\r\n args = system.set_env()\r\n File \"/app/venv/lib/python3.10/site-packages/socrate/system.py\", line 80, in set_env\r\n del os.environ['LD_PRELOAD']\r\n File \"/usr/lib/python3.10/os.py\", line 696, in __delitem__\r\n raise KeyError(key) from None\r\nKeyError: 'LD_PRELOAD'\r\n```\r\n\n", "before_files": [{"content": "import hmac\nimport logging as log\nimport os\nimport sys\nimport re\nfrom pwd import getpwnam\nimport socket\nimport tenacity\n\[email protected](stop=tenacity.stop_after_attempt(100),\n wait=tenacity.wait_random(min=2, max=5))\ndef resolve_hostname(hostname):\n \"\"\" This function uses system DNS to resolve a hostname.\n It is capable of retrying in case the host is not immediately available\n \"\"\"\n try:\n return sorted(socket.getaddrinfo(hostname, None, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, socket.AI_PASSIVE), key=lambda s:s[0])[0][4][0]\n except Exception as e:\n log.warn(\"Unable to lookup '%s': %s\",hostname,e)\n raise e\n\ndef _coerce_value(value):\n if isinstance(value, str) and value.lower() in ('true','yes'):\n return True\n elif isinstance(value, str) and value.lower() in ('false', 'no'):\n return False\n return value\n\nclass LogFilter(object):\n def __init__(self, stream, re_patterns, log_file):\n self.stream = stream\n if isinstance(re_patterns, list):\n self.pattern = re.compile('|'.join([f'(?:{pattern})' for pattern in re_patterns]))\n elif isinstance(re_patterns, str):\n self.pattern = re.compile(re_patterns)\n else:\n self.pattern = re_patterns\n self.found = False\n self.log_file = log_file\n\n def __getattr__(self, attr_name):\n return getattr(self.stream, attr_name)\n\n def write(self, data):\n if data == '\\n' and self.found:\n self.found = False\n else:\n if not self.pattern.search(data):\n self.stream.write(data)\n self.stream.flush()\n if self.log_file:\n try:\n with open(self.log_file, 'a', encoding='utf-8') as l:\n l.write(data)\n except:\n pass\n else:\n # caught bad pattern\n self.found = True\n\n def flush(self):\n self.stream.flush()\n\ndef _is_compatible_with_hardened_malloc():\n with open('/proc/cpuinfo', 'r') as f:\n lines = f.readlines()\n for line in lines:\n # See #2764, we need vmovdqu\n if line.startswith('flags') and ' avx ' not in line:\n return False\n return True\n\ndef set_env(required_secrets=[], log_filters=[], log_file=None):\n if log_filters:\n sys.stdout = LogFilter(sys.stdout, log_filters, log_file)\n sys.stderr = LogFilter(sys.stderr, log_filters, log_file)\n log.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", 'WARNING'))\n\n if not _is_compatible_with_hardened_malloc():\n del os.environ['LD_PRELOAD']\n\n \"\"\" This will set all the environment variables and retains only the secrets we need \"\"\"\n if 'SECRET_KEY_FILE' in os.environ:\n try:\n secret_key = 
open(os.environ.get(\"SECRET_KEY_FILE\"), \"r\").read().strip()\n except Exception as exc:\n log.error(f\"Can't read SECRET_KEY from file: {exc}\")\n raise exc\n else:\n secret_key = os.environ.get('SECRET_KEY')\n clean_env()\n # derive the keys we need\n for secret in required_secrets:\n os.environ[f'{secret}_KEY'] = hmac.new(bytearray(secret_key, 'utf-8'), bytearray(secret, 'utf-8'), 'sha256').hexdigest()\n\n return {\n key: _coerce_value(os.environ.get(key, value))\n for key, value in os.environ.items()\n }\n\ndef clean_env():\n \"\"\" remove all secret keys \"\"\"\n [os.environ.pop(key, None) for key in os.environ.keys() if key.endswith(\"_KEY\")]\n\ndef drop_privs_to(username='mailu'):\n pwnam = getpwnam(username)\n os.setgroups([])\n os.setgid(pwnam.pw_gid)\n os.setuid(pwnam.pw_uid)\n os.environ['HOME'] = pwnam.pw_dir\n", "path": "core/base/libs/socrate/socrate/system.py"}], "after_files": [{"content": "import hmac\nimport logging as log\nimport os\nimport sys\nimport re\nfrom pwd import getpwnam\nimport socket\nimport tenacity\n\[email protected](stop=tenacity.stop_after_attempt(100),\n wait=tenacity.wait_random(min=2, max=5))\ndef resolve_hostname(hostname):\n \"\"\" This function uses system DNS to resolve a hostname.\n It is capable of retrying in case the host is not immediately available\n \"\"\"\n try:\n return sorted(socket.getaddrinfo(hostname, None, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, socket.AI_PASSIVE), key=lambda s:s[0])[0][4][0]\n except Exception as e:\n log.warn(\"Unable to lookup '%s': %s\",hostname,e)\n raise e\n\ndef _coerce_value(value):\n if isinstance(value, str) and value.lower() in ('true','yes'):\n return True\n elif isinstance(value, str) and value.lower() in ('false', 'no'):\n return False\n return value\n\nclass LogFilter(object):\n def __init__(self, stream, re_patterns, log_file):\n self.stream = stream\n if isinstance(re_patterns, list):\n self.pattern = re.compile('|'.join([f'(?:{pattern})' for pattern in re_patterns]))\n elif isinstance(re_patterns, str):\n self.pattern = re.compile(re_patterns)\n else:\n self.pattern = re_patterns\n self.found = False\n self.log_file = log_file\n\n def __getattr__(self, attr_name):\n return getattr(self.stream, attr_name)\n\n def write(self, data):\n if data == '\\n' and self.found:\n self.found = False\n else:\n if not self.pattern.search(data):\n self.stream.write(data)\n self.stream.flush()\n if self.log_file:\n try:\n with open(self.log_file, 'a', encoding='utf-8') as l:\n l.write(data)\n except:\n pass\n else:\n # caught bad pattern\n self.found = True\n\n def flush(self):\n self.stream.flush()\n\ndef _is_compatible_with_hardened_malloc():\n with open('/proc/cpuinfo', 'r') as f:\n lines = f.readlines()\n for line in lines:\n # See #2764, we need vmovdqu\n if line.startswith('flags') and ' avx ' not in line:\n return False\n # See #2541\n if line.startswith('Features') and ' lrcpc ' not in line:\n return False\n return True\n\ndef set_env(required_secrets=[], log_filters=[], log_file=None):\n if log_filters:\n sys.stdout = LogFilter(sys.stdout, log_filters, log_file)\n sys.stderr = LogFilter(sys.stderr, log_filters, log_file)\n log.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", 'WARNING'))\n\n if 'LD_PRELOAD' in os.environ and not _is_compatible_with_hardened_malloc():\n log.warning('Disabling hardened-malloc on this CPU')\n del os.environ['LD_PRELOAD']\n\n \"\"\" This will set all the environment variables and retains only the secrets we need \"\"\"\n if 'SECRET_KEY_FILE' in os.environ:\n 
try:\n secret_key = open(os.environ.get(\"SECRET_KEY_FILE\"), \"r\").read().strip()\n except Exception as exc:\n log.error(f\"Can't read SECRET_KEY from file: {exc}\")\n raise exc\n else:\n secret_key = os.environ.get('SECRET_KEY')\n clean_env()\n # derive the keys we need\n for secret in required_secrets:\n os.environ[f'{secret}_KEY'] = hmac.new(bytearray(secret_key, 'utf-8'), bytearray(secret, 'utf-8'), 'sha256').hexdigest()\n\n return {\n key: _coerce_value(os.environ.get(key, value))\n for key, value in os.environ.items()\n }\n\ndef clean_env():\n \"\"\" remove all secret keys \"\"\"\n [os.environ.pop(key, None) for key in os.environ.keys() if key.endswith(\"_KEY\")]\n\ndef drop_privs_to(username='mailu'):\n pwnam = getpwnam(username)\n os.setgroups([])\n os.setgid(pwnam.pw_gid)\n os.setuid(pwnam.pw_uid)\n os.environ['HOME'] = pwnam.pw_dir\n", "path": "core/base/libs/socrate/socrate/system.py"}]} | 2,169 | 277 |
gh_patches_debug_32252 | rasdani/github-patches | git_diff | fal-ai__dbt-fal-569 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
fal run --scripts selector with global scripts not working
<!-- *** Make sure you have searched for an existing bug report for this issue *** -->
**Describe the bug**
The fal CLI `run` command with the `--scripts` selector flag does not execute any script when the selected script is configured under the global `before` key in `schema.yml`.
**Your environment**
- OS: macOS Monterrey 12.6
- Paste the following commands output:
```
fal --0.6.0
dbt --1.2.1
```
- Adapter being used: bigquery
**How to reproduce**
Add scripts to run under the global `before` key in `schema.yml`:
```
version: 2
fal:
scripts:
before:
- fal_scripts/delete_bq_datasets.py
- fal_scripts/download_prod_artifacts.py
```
File structure:
```
dbt_project
├── analysis
├── dbt_packages
├── dbt_project.yml
├── fal_scripts
│ ├── delete_bq_datasets.py
│ └── download_prod_artifacts.py
├── logs
├── macros
├── models
│ ├── exposures
│ └── schema.yml
├── packages.yml
├── seeds
├── snapshots
├── target
└── tests
```
Then run:
```sh
fal run --before --scripts download_prod_artifacts.py
```
Or:
```sh
fal run --scripts download_prod_artifacts.py
```
**Expected behavior**
Run only the script passed to the `--scripts` flag: `download_prod_artifacts.py`.
**Actual behavior**
Does nothing in either case.
```sh
fal run --scripts download_prod_artifacts.py
```
```
13:25:19 Found 370 models, 544 tests, 0 snapshots, 18 analyses, 583 macros, 0 operations, 42 seed files, 129 sources, 2 exposures, 0 metrics
13:25:19 Could not read dbt sources artifact
```
```sh
fal run --before --scripts download_prod_artifacts.py
```
```
13:27:34 Found 370 models, 544 tests, 0 snapshots, 18 analyses, 583 macros, 0 operations, 42 seed files, 129 sources, 2 exposures, 0 metrics
13:27:34 Could not read dbt sources artifact
```
**Additional context**
The issue might be in [this](https://github.com/fal-ai/fal/blob/771ad3dc8946dbda57e91b188719f8a20c6eb353/src/fal/cli/fal_runner.py#L47) section of code (related to the `scripts` and `global_scripts` variables).
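To make that concrete: the global `before`/`after` scripts are collected without consulting `args.scripts`, so a `--scripts` selection can never match them and nothing runs. A sketch of the expected filtering, reusing the names from `fal_runner.py` below (this mirrors the eventual fix rather than the current code):

```python
# Sketch: honour the --scripts selector for global before/after scripts too.
def _get_global_scripts(faldbt, args):
    scripts_flag = bool(args.scripts)
    return [
        FalScript(faldbt, None, path)
        for path in faldbt._global_script_paths["before" if args.before else "after"]
        if not scripts_flag or path in args.scripts
    ]
```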
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/fal/cli/fal_runner.py`
Content:
```
1 import argparse
2 from pathlib import Path
3 from typing import Dict, List
4
5 from dbt.flags import PROFILES_DIR
6 from fal.planner.executor import parallel_executor
7 from fal.planner.schedule import Scheduler
8 from fal.planner.tasks import FalLocalHookTask, Status, TaskGroup
9
10 from fal.fal_script import FalScript
11 from faldbt.project import DbtModel, FalDbt, FalGeneralException
12
13
14 def create_fal_dbt(
15 args: argparse.Namespace, generated_models: Dict[str, Path] = {}
16 ) -> FalDbt:
17 profiles_dir = PROFILES_DIR
18 if args.profiles_dir is not None:
19 profiles_dir = args.profiles_dir
20
21 real_state = None
22 if hasattr(args, "state") and args.state is not None:
23 real_state = args.state
24
25 return FalDbt(
26 args.project_dir,
27 profiles_dir,
28 args.select,
29 args.exclude,
30 args.selector,
31 args.keyword,
32 args.threads,
33 real_state,
34 args.target,
35 getattr(args, "vars", "{}"),
36 generated_models,
37 )
38
39
40 def fal_run(args: argparse.Namespace):
41 "Runs the fal run command in a subprocess"
42
43 selector_flags = args.select or args.exclude or args.selector
44 if args.all and selector_flags:
45 raise FalGeneralException(
46 "Cannot pass --all flag alongside selection flags (--select/--models, --exclude, --selector)"
47 )
48
49 faldbt = create_fal_dbt(args)
50 models = _get_filtered_models(faldbt, args.all, selector_flags, args.before)
51
52 scripts = _select_scripts(args, models, faldbt)
53
54 global_scripts = _get_global_scripts(faldbt, args.before)
55
56 if args.before:
57 if not _scripts_flag(args):
58 # run globals when no --script is passed
59 _run_scripts(args, global_scripts, faldbt)
60
61 pre_hook_scripts = _get_hooks_for_model(models, faldbt, "pre-hook")
62 _run_scripts(args, pre_hook_scripts, faldbt)
63
64 _run_scripts(args, scripts, faldbt)
65
66 else:
67 _run_scripts(args, scripts, faldbt)
68
69 post_hook_scripts = _get_hooks_for_model(models, faldbt, "post-hook")
70 _run_scripts(args, post_hook_scripts, faldbt)
71
72 if not _scripts_flag(args):
73 # run globals when no --script is passed
74 _run_scripts(args, global_scripts, faldbt)
75
76
77 def _run_scripts(args: argparse.Namespace, scripts: List[FalScript], faldbt: FalDbt):
78 scheduler = Scheduler(
79 [TaskGroup(FalLocalHookTask.from_fal_script(script)) for script in scripts]
80 )
81 parallel_executor(args, faldbt, scheduler)
82
83 failed_tasks: List[FalLocalHookTask] = [
84 group.task for group in scheduler.filter_groups(Status.FAILURE)
85 ] # type: ignore
86 failed_script_ids = [task.build_fal_script(faldbt).id for task in failed_tasks]
87 if failed_script_ids:
88 raise RuntimeError(f"Error in scripts {str.join(', ',failed_script_ids)}")
89
90
91 def _scripts_flag(args: argparse.Namespace) -> bool:
92 return bool(args.scripts)
93
94
95 def _get_hooks_for_model(
96 models: List[DbtModel], faldbt: FalDbt, hook_type: str
97 ) -> List[FalScript]:
98 return [
99 FalScript.from_hook(faldbt, model, hook)
100 for model in models
101 for hook in model._get_hooks(hook_type=hook_type)
102 ]
103
104
105 def _select_scripts(
106 args: argparse.Namespace, models: List[DbtModel], faldbt: FalDbt
107 ) -> List[FalScript]:
108 scripts = []
109 scripts_flag = _scripts_flag(args)
110
111 for model in models:
112 model_scripts = model.get_scripts(args.keyword, before=bool(args.before))
113 for path in model_scripts:
114 if not scripts_flag:
115 # run all scripts when no --script is passed
116 scripts.append(FalScript(faldbt, model, path))
117 elif path in args.scripts:
118 # if --script selector is there only run selected scripts
119 scripts.append(FalScript(faldbt, model, path))
120
121 return scripts
122
123
124 def _get_global_scripts(faldbt: FalDbt, is_before: bool):
125 return [
126 FalScript(faldbt, None, path)
127 for path in faldbt._global_script_paths["before" if is_before else "after"]
128 ]
129
130
131 def _get_models_with_keyword(faldbt: FalDbt) -> List[DbtModel]:
132 return list(
133 filter(lambda model: faldbt.keyword in model.meta, faldbt.list_models())
134 )
135
136
137 def _get_filtered_models(faldbt: FalDbt, all, selected, before) -> List[DbtModel]:
138 selected_ids = _models_ids(faldbt._compile_task._flattened_nodes)
139 filtered_models: List[DbtModel] = []
140
141 if (
142 not all
143 and not selected
144 and not before
145 and faldbt._run_results.nativeRunResult is None
146 ):
147 from faldbt.parse import FalParseError
148
149 raise FalParseError(
150 "Cannot define models to run without selection flags or dbt run_results artifact or --before flag"
151 )
152
153 models = _get_models_with_keyword(faldbt)
154
155 for node in models:
156 if selected:
157 if node.unique_id in selected_ids:
158 filtered_models.append(node)
159 elif before:
160 if node.get_scripts(faldbt.keyword, before=before) != []:
161 filtered_models.append(node)
162 elif all:
163 filtered_models.append(node)
164 elif node.status != "skipped":
165 filtered_models.append(node)
166
167 return filtered_models
168
169
170 def _models_ids(models):
171 return list(map(lambda r: r.unique_id, models))
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/fal/cli/fal_runner.py b/src/fal/cli/fal_runner.py
--- a/src/fal/cli/fal_runner.py
+++ b/src/fal/cli/fal_runner.py
@@ -50,15 +50,15 @@
models = _get_filtered_models(faldbt, args.all, selector_flags, args.before)
scripts = _select_scripts(args, models, faldbt)
-
- global_scripts = _get_global_scripts(faldbt, args.before)
+ global_scripts = _get_global_scripts(faldbt, args)
if args.before:
- if not _scripts_flag(args):
- # run globals when no --script is passed
+ if not _scripts_flag(args) or not selector_flags:
+ # run globals when no --script is passed or no selector is passed
_run_scripts(args, global_scripts, faldbt)
pre_hook_scripts = _get_hooks_for_model(models, faldbt, "pre-hook")
+
_run_scripts(args, pre_hook_scripts, faldbt)
_run_scripts(args, scripts, faldbt)
@@ -69,7 +69,7 @@
post_hook_scripts = _get_hooks_for_model(models, faldbt, "post-hook")
_run_scripts(args, post_hook_scripts, faldbt)
- if not _scripts_flag(args):
+ if not _scripts_flag(args) or not selector_flags:
# run globals when no --script is passed
_run_scripts(args, global_scripts, faldbt)
@@ -121,10 +121,12 @@
return scripts
-def _get_global_scripts(faldbt: FalDbt, is_before: bool):
+def _get_global_scripts(faldbt: FalDbt, args: argparse.Namespace):
+ scripts_flag = _scripts_flag(args)
return [
FalScript(faldbt, None, path)
- for path in faldbt._global_script_paths["before" if is_before else "after"]
+ for path in faldbt._global_script_paths["before" if args.before else "after"]
+ if not scripts_flag or path in args.scripts
]
| {"golden_diff": "diff --git a/src/fal/cli/fal_runner.py b/src/fal/cli/fal_runner.py\n--- a/src/fal/cli/fal_runner.py\n+++ b/src/fal/cli/fal_runner.py\n@@ -50,15 +50,15 @@\n models = _get_filtered_models(faldbt, args.all, selector_flags, args.before)\n \n scripts = _select_scripts(args, models, faldbt)\n-\n- global_scripts = _get_global_scripts(faldbt, args.before)\n+ global_scripts = _get_global_scripts(faldbt, args)\n \n if args.before:\n- if not _scripts_flag(args):\n- # run globals when no --script is passed\n+ if not _scripts_flag(args) or not selector_flags:\n+ # run globals when no --script is passed or no selector is passed\n _run_scripts(args, global_scripts, faldbt)\n \n pre_hook_scripts = _get_hooks_for_model(models, faldbt, \"pre-hook\")\n+\n _run_scripts(args, pre_hook_scripts, faldbt)\n \n _run_scripts(args, scripts, faldbt)\n@@ -69,7 +69,7 @@\n post_hook_scripts = _get_hooks_for_model(models, faldbt, \"post-hook\")\n _run_scripts(args, post_hook_scripts, faldbt)\n \n- if not _scripts_flag(args):\n+ if not _scripts_flag(args) or not selector_flags:\n # run globals when no --script is passed\n _run_scripts(args, global_scripts, faldbt)\n \n@@ -121,10 +121,12 @@\n return scripts\n \n \n-def _get_global_scripts(faldbt: FalDbt, is_before: bool):\n+def _get_global_scripts(faldbt: FalDbt, args: argparse.Namespace):\n+ scripts_flag = _scripts_flag(args)\n return [\n FalScript(faldbt, None, path)\n- for path in faldbt._global_script_paths[\"before\" if is_before else \"after\"]\n+ for path in faldbt._global_script_paths[\"before\" if args.before else \"after\"]\n+ if not scripts_flag or path in args.scripts\n ]\n", "issue": "fal run --scripts selector with global scripts not working\n<!-- *** Make sure you have searched for an existing bug report for this issue *** -->\r\n\r\n**Describe the bug**\r\nFal CLI `run` command using `--scripts` flag selector does not execute any script when the script passed is under the `before` key in the `schema.yml` configuration.\r\n\r\n**Your environment**\r\n- OS: macOS Monterrey 12.6\r\n- Paste the following commands output:\r\n```\r\nfal --0.6.0\r\ndbt --1.2.1\r\n```\r\n- Adapter being used: bigquery\r\n\r\n**How to reproduce**\r\nAdd scripts to run under `--before` key in the `schema.yml`:\r\n```\r\nversion: 2\r\n\r\nfal:\r\n scripts:\r\n before:\r\n - fal_scripts/delete_bq_datasets.py\r\n - fal_scripts/download_prod_artifacts.py\r\n```\r\nFile structure:\r\n```\r\ndbt_project\r\n\u251c\u2500\u2500 analysis\r\n\u251c\u2500\u2500 dbt_packages\r\n\u251c\u2500\u2500 dbt_project.yml\r\n\u251c\u2500\u2500 fal_scripts\r\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 delete_bq_datasets.py\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 download_prod_artifacts.py\r\n\u251c\u2500\u2500 logs\r\n\u251c\u2500\u2500 macros\r\n\u251c\u2500\u2500 models\r\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 exposures\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 schema.yml\r\n\u251c\u2500\u2500 packages.yml\r\n\u251c\u2500\u2500 seeds\r\n\u251c\u2500\u2500 snapshots\r\n\u251c\u2500\u2500 target\r\n\u2514\u2500\u2500 tests\r\n\r\n```\r\nThen run:\r\n```sh\r\nfal run --before --scripts download_prod_artifacts.py\r\n```\r\nOr:\r\n```sh\r\nfal run --scripts download_prod_artifacts.py\r\n```\r\n\r\n**Expected behavior**\r\nRun only the script passed to the `--scripts` flag: `download_prod_artifacts.py`. 
\r\n\r\n**Actual behavior**\r\nDoes nothing for neither case.\r\n\r\n```sh\r\nfal run --scripts download_prod_artifacts.py\r\n```\r\n```\r\n13:25:19 Found 370 models, 544 tests, 0 snapshots, 18 analyses, 583 macros, 0 operations, 42 seed files, 129 sources, 2 exposures, 0 metrics\r\n13:25:19 Could not read dbt sources artifact\r\n```\r\n```sh\r\nfal run --before --scripts download_prod_artifacts.py\r\n```\r\n```\r\n13:27:34 Found 370 models, 544 tests, 0 snapshots, 18 analyses, 583 macros, 0 operations, 42 seed files, 129 sources, 2 exposures, 0 metrics\r\n13:27:34 Could not read dbt sources artifact\r\n```\r\n**Additional context**\r\nThe issue might be [this](https://github.com/fal-ai/fal/blob/771ad3dc8946dbda57e91b188719f8a20c6eb353/src/fal/cli/fal_runner.py#L47) section of code (related with `scripts` and `global_scripts` variables).\r\n\n", "before_files": [{"content": "import argparse\nfrom pathlib import Path\nfrom typing import Dict, List\n\nfrom dbt.flags import PROFILES_DIR\nfrom fal.planner.executor import parallel_executor\nfrom fal.planner.schedule import Scheduler\nfrom fal.planner.tasks import FalLocalHookTask, Status, TaskGroup\n\nfrom fal.fal_script import FalScript\nfrom faldbt.project import DbtModel, FalDbt, FalGeneralException\n\n\ndef create_fal_dbt(\n args: argparse.Namespace, generated_models: Dict[str, Path] = {}\n) -> FalDbt:\n profiles_dir = PROFILES_DIR\n if args.profiles_dir is not None:\n profiles_dir = args.profiles_dir\n\n real_state = None\n if hasattr(args, \"state\") and args.state is not None:\n real_state = args.state\n\n return FalDbt(\n args.project_dir,\n profiles_dir,\n args.select,\n args.exclude,\n args.selector,\n args.keyword,\n args.threads,\n real_state,\n args.target,\n getattr(args, \"vars\", \"{}\"),\n generated_models,\n )\n\n\ndef fal_run(args: argparse.Namespace):\n \"Runs the fal run command in a subprocess\"\n\n selector_flags = args.select or args.exclude or args.selector\n if args.all and selector_flags:\n raise FalGeneralException(\n \"Cannot pass --all flag alongside selection flags (--select/--models, --exclude, --selector)\"\n )\n\n faldbt = create_fal_dbt(args)\n models = _get_filtered_models(faldbt, args.all, selector_flags, args.before)\n\n scripts = _select_scripts(args, models, faldbt)\n\n global_scripts = _get_global_scripts(faldbt, args.before)\n\n if args.before:\n if not _scripts_flag(args):\n # run globals when no --script is passed\n _run_scripts(args, global_scripts, faldbt)\n\n pre_hook_scripts = _get_hooks_for_model(models, faldbt, \"pre-hook\")\n _run_scripts(args, pre_hook_scripts, faldbt)\n\n _run_scripts(args, scripts, faldbt)\n\n else:\n _run_scripts(args, scripts, faldbt)\n\n post_hook_scripts = _get_hooks_for_model(models, faldbt, \"post-hook\")\n _run_scripts(args, post_hook_scripts, faldbt)\n\n if not _scripts_flag(args):\n # run globals when no --script is passed\n _run_scripts(args, global_scripts, faldbt)\n\n\ndef _run_scripts(args: argparse.Namespace, scripts: List[FalScript], faldbt: FalDbt):\n scheduler = Scheduler(\n [TaskGroup(FalLocalHookTask.from_fal_script(script)) for script in scripts]\n )\n parallel_executor(args, faldbt, scheduler)\n\n failed_tasks: List[FalLocalHookTask] = [\n group.task for group in scheduler.filter_groups(Status.FAILURE)\n ] # type: ignore\n failed_script_ids = [task.build_fal_script(faldbt).id for task in failed_tasks]\n if failed_script_ids:\n raise RuntimeError(f\"Error in scripts {str.join(', ',failed_script_ids)}\")\n\n\ndef _scripts_flag(args: argparse.Namespace) -> 
bool:\n return bool(args.scripts)\n\n\ndef _get_hooks_for_model(\n models: List[DbtModel], faldbt: FalDbt, hook_type: str\n) -> List[FalScript]:\n return [\n FalScript.from_hook(faldbt, model, hook)\n for model in models\n for hook in model._get_hooks(hook_type=hook_type)\n ]\n\n\ndef _select_scripts(\n args: argparse.Namespace, models: List[DbtModel], faldbt: FalDbt\n) -> List[FalScript]:\n scripts = []\n scripts_flag = _scripts_flag(args)\n\n for model in models:\n model_scripts = model.get_scripts(args.keyword, before=bool(args.before))\n for path in model_scripts:\n if not scripts_flag:\n # run all scripts when no --script is passed\n scripts.append(FalScript(faldbt, model, path))\n elif path in args.scripts:\n # if --script selector is there only run selected scripts\n scripts.append(FalScript(faldbt, model, path))\n\n return scripts\n\n\ndef _get_global_scripts(faldbt: FalDbt, is_before: bool):\n return [\n FalScript(faldbt, None, path)\n for path in faldbt._global_script_paths[\"before\" if is_before else \"after\"]\n ]\n\n\ndef _get_models_with_keyword(faldbt: FalDbt) -> List[DbtModel]:\n return list(\n filter(lambda model: faldbt.keyword in model.meta, faldbt.list_models())\n )\n\n\ndef _get_filtered_models(faldbt: FalDbt, all, selected, before) -> List[DbtModel]:\n selected_ids = _models_ids(faldbt._compile_task._flattened_nodes)\n filtered_models: List[DbtModel] = []\n\n if (\n not all\n and not selected\n and not before\n and faldbt._run_results.nativeRunResult is None\n ):\n from faldbt.parse import FalParseError\n\n raise FalParseError(\n \"Cannot define models to run without selection flags or dbt run_results artifact or --before flag\"\n )\n\n models = _get_models_with_keyword(faldbt)\n\n for node in models:\n if selected:\n if node.unique_id in selected_ids:\n filtered_models.append(node)\n elif before:\n if node.get_scripts(faldbt.keyword, before=before) != []:\n filtered_models.append(node)\n elif all:\n filtered_models.append(node)\n elif node.status != \"skipped\":\n filtered_models.append(node)\n\n return filtered_models\n\n\ndef _models_ids(models):\n return list(map(lambda r: r.unique_id, models))\n", "path": "src/fal/cli/fal_runner.py"}], "after_files": [{"content": "import argparse\nfrom pathlib import Path\nfrom typing import Dict, List\n\nfrom dbt.flags import PROFILES_DIR\nfrom fal.planner.executor import parallel_executor\nfrom fal.planner.schedule import Scheduler\nfrom fal.planner.tasks import FalLocalHookTask, Status, TaskGroup\n\nfrom fal.fal_script import FalScript\nfrom faldbt.project import DbtModel, FalDbt, FalGeneralException\n\n\ndef create_fal_dbt(\n args: argparse.Namespace, generated_models: Dict[str, Path] = {}\n) -> FalDbt:\n profiles_dir = PROFILES_DIR\n if args.profiles_dir is not None:\n profiles_dir = args.profiles_dir\n\n real_state = None\n if hasattr(args, \"state\") and args.state is not None:\n real_state = args.state\n\n return FalDbt(\n args.project_dir,\n profiles_dir,\n args.select,\n args.exclude,\n args.selector,\n args.keyword,\n args.threads,\n real_state,\n args.target,\n getattr(args, \"vars\", \"{}\"),\n generated_models,\n )\n\n\ndef fal_run(args: argparse.Namespace):\n \"Runs the fal run command in a subprocess\"\n\n selector_flags = args.select or args.exclude or args.selector\n if args.all and selector_flags:\n raise FalGeneralException(\n \"Cannot pass --all flag alongside selection flags (--select/--models, --exclude, --selector)\"\n )\n\n faldbt = create_fal_dbt(args)\n models = _get_filtered_models(faldbt, args.all, 
selector_flags, args.before)\n\n scripts = _select_scripts(args, models, faldbt)\n global_scripts = _get_global_scripts(faldbt, args)\n\n if args.before:\n if not _scripts_flag(args) or not selector_flags:\n # run globals when no --script is passed or no selector is passed\n _run_scripts(args, global_scripts, faldbt)\n\n pre_hook_scripts = _get_hooks_for_model(models, faldbt, \"pre-hook\")\n\n _run_scripts(args, pre_hook_scripts, faldbt)\n\n _run_scripts(args, scripts, faldbt)\n\n else:\n _run_scripts(args, scripts, faldbt)\n\n post_hook_scripts = _get_hooks_for_model(models, faldbt, \"post-hook\")\n _run_scripts(args, post_hook_scripts, faldbt)\n\n if not _scripts_flag(args) or not selector_flags:\n # run globals when no --script is passed\n _run_scripts(args, global_scripts, faldbt)\n\n\ndef _run_scripts(args: argparse.Namespace, scripts: List[FalScript], faldbt: FalDbt):\n scheduler = Scheduler(\n [TaskGroup(FalLocalHookTask.from_fal_script(script)) for script in scripts]\n )\n parallel_executor(args, faldbt, scheduler)\n\n failed_tasks: List[FalLocalHookTask] = [\n group.task for group in scheduler.filter_groups(Status.FAILURE)\n ] # type: ignore\n failed_script_ids = [task.build_fal_script(faldbt).id for task in failed_tasks]\n if failed_script_ids:\n raise RuntimeError(f\"Error in scripts {str.join(', ',failed_script_ids)}\")\n\n\ndef _scripts_flag(args: argparse.Namespace) -> bool:\n return bool(args.scripts)\n\n\ndef _get_hooks_for_model(\n models: List[DbtModel], faldbt: FalDbt, hook_type: str\n) -> List[FalScript]:\n return [\n FalScript.from_hook(faldbt, model, hook)\n for model in models\n for hook in model._get_hooks(hook_type=hook_type)\n ]\n\n\ndef _select_scripts(\n args: argparse.Namespace, models: List[DbtModel], faldbt: FalDbt\n) -> List[FalScript]:\n scripts = []\n scripts_flag = _scripts_flag(args)\n\n for model in models:\n model_scripts = model.get_scripts(args.keyword, before=bool(args.before))\n for path in model_scripts:\n if not scripts_flag:\n # run all scripts when no --script is passed\n scripts.append(FalScript(faldbt, model, path))\n elif path in args.scripts:\n # if --script selector is there only run selected scripts\n scripts.append(FalScript(faldbt, model, path))\n\n return scripts\n\n\ndef _get_global_scripts(faldbt: FalDbt, args: argparse.Namespace):\n scripts_flag = _scripts_flag(args)\n return [\n FalScript(faldbt, None, path)\n for path in faldbt._global_script_paths[\"before\" if args.before else \"after\"]\n if not scripts_flag or path in args.scripts\n ]\n\n\ndef _get_models_with_keyword(faldbt: FalDbt) -> List[DbtModel]:\n return list(\n filter(lambda model: faldbt.keyword in model.meta, faldbt.list_models())\n )\n\n\ndef _get_filtered_models(faldbt: FalDbt, all, selected, before) -> List[DbtModel]:\n selected_ids = _models_ids(faldbt._compile_task._flattened_nodes)\n filtered_models: List[DbtModel] = []\n\n if (\n not all\n and not selected\n and not before\n and faldbt._run_results.nativeRunResult is None\n ):\n from faldbt.parse import FalParseError\n\n raise FalParseError(\n \"Cannot define models to run without selection flags or dbt run_results artifact or --before flag\"\n )\n\n models = _get_models_with_keyword(faldbt)\n\n for node in models:\n if selected:\n if node.unique_id in selected_ids:\n filtered_models.append(node)\n elif before:\n if node.get_scripts(faldbt.keyword, before=before) != []:\n filtered_models.append(node)\n elif all:\n filtered_models.append(node)\n elif node.status != \"skipped\":\n 
filtered_models.append(node)\n\n return filtered_models\n\n\ndef _models_ids(models):\n return list(map(lambda r: r.unique_id, models))\n", "path": "src/fal/cli/fal_runner.py"}]} | 2,608 | 487 |
gh_patches_debug_40529 | rasdani/github-patches | git_diff | nautobot__nautobot-1148 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove Custom Fields from Admin UI
### Proposed Changes
Remove custom fields from Admin UI. This should be as simple as deleting a bunch of code from `nautobot/extras/admin.py` that's no longer needed.
### Justification
Now that we have custom field management in the regular UI (#735, #997), the admin UI for custom field management is redundant.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nautobot/extras/admin.py`
Content:
```
1 from db_file_storage.form_widgets import DBAdminClearableFileInput
2 from django import forms
3 from django.contrib import admin, messages
4 from django.db import transaction
5 from django.db.models import ProtectedError
6
7 from .models import CustomField, CustomFieldChoice, FileProxy, JobResult
8
9
10 def order_content_types(field):
11 """
12 Order the list of available ContentTypes by application
13 """
14 queryset = field.queryset.order_by("app_label", "model")
15 field.choices = [(ct.pk, "{} > {}".format(ct.app_label, ct.name)) for ct in queryset]
16
17
18 #
19 # Custom fields
20 #
21
22
23 class CustomFieldForm(forms.ModelForm):
24 class Meta:
25 model = CustomField
26 exclude = []
27 widgets = {
28 "default": forms.TextInput(),
29 "validation_regex": forms.Textarea(
30 attrs={
31 "cols": 80,
32 "rows": 3,
33 }
34 ),
35 }
36
37 def __init__(self, *args, **kwargs):
38 super().__init__(*args, **kwargs)
39
40 order_content_types(self.fields["content_types"])
41
42
43 class CustomFieldChoiceAdmin(admin.TabularInline):
44 """
45 Defines the inline formset factory that handles choices for selection type custom fields.
46 The `extra` defines the default number of inline rows that appear in the UI.
47 """
48
49 model = CustomFieldChoice
50 extra = 5
51
52
53 @admin.register(CustomField)
54 class CustomFieldAdmin(admin.ModelAdmin):
55 """
56 Define the structure and composition of the custom field form in the admin panel.
57 """
58
59 actions = None
60 form = CustomFieldForm
61 inlines = [CustomFieldChoiceAdmin]
62 list_display = [
63 "name",
64 "models",
65 "type",
66 "required",
67 "filter_logic",
68 "default",
69 "weight",
70 "description",
71 ]
72 list_filter = [
73 "type",
74 "required",
75 "content_types",
76 ]
77 fieldsets = (
78 (
79 "Custom Field",
80 {
81 "fields": (
82 "type",
83 "name",
84 "weight",
85 "label",
86 "description",
87 "required",
88 "default",
89 "filter_logic",
90 )
91 },
92 ),
93 (
94 "Assignment",
95 {
96 "description": "A custom field must be assigned to one or more object types.",
97 "fields": ("content_types",),
98 },
99 ),
100 (
101 "Validation Rules",
102 {
103 "fields": (
104 "validation_minimum",
105 "validation_maximum",
106 "validation_regex",
107 ),
108 "classes": ("monospace",),
109 },
110 ),
111 )
112
113 def models(self, obj):
114 return ", ".join([ct.name for ct in obj.content_types.all()])
115
116 @transaction.atomic
117 def save_formset(self, request, form, formset, change):
118 # TODO(John): revisit this when custom fields are moved out of admin... there is a better way...
119 if formset.model != CustomFieldChoice:
120 return super().save_formset(request, form, formset, change)
121 instances = formset.save(commit=False)
122 for instance in instances:
123 instance.save()
124 formset.save_m2m()
125 for obj in formset.deleted_objects:
126 try:
127 obj.delete()
128 except ProtectedError as e:
129 self.message_user(request, e, level=messages.ERROR)
130 raise e
131
132
133 #
134 # File attachments
135 #
136
137
138 class FileProxyForm(forms.ModelForm):
139 class Meta:
140 model = FileProxy
141 exclude = []
142 widgets = {
143 "file": DBAdminClearableFileInput,
144 }
145
146
147 @admin.register(FileProxy)
148 class FileProxyAdmin(admin.ModelAdmin):
149 form = FileProxyForm
150 list_display = ["name", "uploaded_at"]
151 list_filter = ["uploaded_at"]
152
153
154 #
155 # Job results (jobs, scripts, reports, Git repository sync, etc.)
156 #
157
158
159 @admin.register(JobResult)
160 class JobResultAdmin(admin.ModelAdmin):
161 list_display = [
162 "obj_type",
163 "name",
164 "created",
165 "completed",
166 "user",
167 "status",
168 ]
169 fields = [
170 "obj_type",
171 "name",
172 "created",
173 "completed",
174 "user",
175 "status",
176 "data",
177 "job_id",
178 ]
179 list_filter = [
180 "status",
181 ]
182 readonly_fields = fields
183
184 def has_add_permission(self, request):
185 return False
186
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nautobot/extras/admin.py b/nautobot/extras/admin.py
--- a/nautobot/extras/admin.py
+++ b/nautobot/extras/admin.py
@@ -1,10 +1,8 @@
from db_file_storage.form_widgets import DBAdminClearableFileInput
from django import forms
-from django.contrib import admin, messages
-from django.db import transaction
-from django.db.models import ProtectedError
+from django.contrib import admin
-from .models import CustomField, CustomFieldChoice, FileProxy, JobResult
+from .models import FileProxy, JobResult
def order_content_types(field):
@@ -15,121 +13,6 @@
field.choices = [(ct.pk, "{} > {}".format(ct.app_label, ct.name)) for ct in queryset]
-#
-# Custom fields
-#
-
-
-class CustomFieldForm(forms.ModelForm):
- class Meta:
- model = CustomField
- exclude = []
- widgets = {
- "default": forms.TextInput(),
- "validation_regex": forms.Textarea(
- attrs={
- "cols": 80,
- "rows": 3,
- }
- ),
- }
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
-
- order_content_types(self.fields["content_types"])
-
-
-class CustomFieldChoiceAdmin(admin.TabularInline):
- """
- Defines the inline formset factory that handles choices for selection type custom fields.
- The `extra` defines the default number of inline rows that appear in the UI.
- """
-
- model = CustomFieldChoice
- extra = 5
-
-
[email protected](CustomField)
-class CustomFieldAdmin(admin.ModelAdmin):
- """
- Define the structure and composition of the custom field form in the admin panel.
- """
-
- actions = None
- form = CustomFieldForm
- inlines = [CustomFieldChoiceAdmin]
- list_display = [
- "name",
- "models",
- "type",
- "required",
- "filter_logic",
- "default",
- "weight",
- "description",
- ]
- list_filter = [
- "type",
- "required",
- "content_types",
- ]
- fieldsets = (
- (
- "Custom Field",
- {
- "fields": (
- "type",
- "name",
- "weight",
- "label",
- "description",
- "required",
- "default",
- "filter_logic",
- )
- },
- ),
- (
- "Assignment",
- {
- "description": "A custom field must be assigned to one or more object types.",
- "fields": ("content_types",),
- },
- ),
- (
- "Validation Rules",
- {
- "fields": (
- "validation_minimum",
- "validation_maximum",
- "validation_regex",
- ),
- "classes": ("monospace",),
- },
- ),
- )
-
- def models(self, obj):
- return ", ".join([ct.name for ct in obj.content_types.all()])
-
- @transaction.atomic
- def save_formset(self, request, form, formset, change):
- # TODO(John): revisit this when custom fields are moved out of admin... there is a better way...
- if formset.model != CustomFieldChoice:
- return super().save_formset(request, form, formset, change)
- instances = formset.save(commit=False)
- for instance in instances:
- instance.save()
- formset.save_m2m()
- for obj in formset.deleted_objects:
- try:
- obj.delete()
- except ProtectedError as e:
- self.message_user(request, e, level=messages.ERROR)
- raise e
-
-
#
# File attachments
#
| {"golden_diff": "diff --git a/nautobot/extras/admin.py b/nautobot/extras/admin.py\n--- a/nautobot/extras/admin.py\n+++ b/nautobot/extras/admin.py\n@@ -1,10 +1,8 @@\n from db_file_storage.form_widgets import DBAdminClearableFileInput\n from django import forms\n-from django.contrib import admin, messages\n-from django.db import transaction\n-from django.db.models import ProtectedError\n+from django.contrib import admin\n \n-from .models import CustomField, CustomFieldChoice, FileProxy, JobResult\n+from .models import FileProxy, JobResult\n \n \n def order_content_types(field):\n@@ -15,121 +13,6 @@\n field.choices = [(ct.pk, \"{} > {}\".format(ct.app_label, ct.name)) for ct in queryset]\n \n \n-#\n-# Custom fields\n-#\n-\n-\n-class CustomFieldForm(forms.ModelForm):\n- class Meta:\n- model = CustomField\n- exclude = []\n- widgets = {\n- \"default\": forms.TextInput(),\n- \"validation_regex\": forms.Textarea(\n- attrs={\n- \"cols\": 80,\n- \"rows\": 3,\n- }\n- ),\n- }\n-\n- def __init__(self, *args, **kwargs):\n- super().__init__(*args, **kwargs)\n-\n- order_content_types(self.fields[\"content_types\"])\n-\n-\n-class CustomFieldChoiceAdmin(admin.TabularInline):\n- \"\"\"\n- Defines the inline formset factory that handles choices for selection type custom fields.\n- The `extra` defines the default number of inline rows that appear in the UI.\n- \"\"\"\n-\n- model = CustomFieldChoice\n- extra = 5\n-\n-\[email protected](CustomField)\n-class CustomFieldAdmin(admin.ModelAdmin):\n- \"\"\"\n- Define the structure and composition of the custom field form in the admin panel.\n- \"\"\"\n-\n- actions = None\n- form = CustomFieldForm\n- inlines = [CustomFieldChoiceAdmin]\n- list_display = [\n- \"name\",\n- \"models\",\n- \"type\",\n- \"required\",\n- \"filter_logic\",\n- \"default\",\n- \"weight\",\n- \"description\",\n- ]\n- list_filter = [\n- \"type\",\n- \"required\",\n- \"content_types\",\n- ]\n- fieldsets = (\n- (\n- \"Custom Field\",\n- {\n- \"fields\": (\n- \"type\",\n- \"name\",\n- \"weight\",\n- \"label\",\n- \"description\",\n- \"required\",\n- \"default\",\n- \"filter_logic\",\n- )\n- },\n- ),\n- (\n- \"Assignment\",\n- {\n- \"description\": \"A custom field must be assigned to one or more object types.\",\n- \"fields\": (\"content_types\",),\n- },\n- ),\n- (\n- \"Validation Rules\",\n- {\n- \"fields\": (\n- \"validation_minimum\",\n- \"validation_maximum\",\n- \"validation_regex\",\n- ),\n- \"classes\": (\"monospace\",),\n- },\n- ),\n- )\n-\n- def models(self, obj):\n- return \", \".join([ct.name for ct in obj.content_types.all()])\n-\n- @transaction.atomic\n- def save_formset(self, request, form, formset, change):\n- # TODO(John): revisit this when custom fields are moved out of admin... there is a better way...\n- if formset.model != CustomFieldChoice:\n- return super().save_formset(request, form, formset, change)\n- instances = formset.save(commit=False)\n- for instance in instances:\n- instance.save()\n- formset.save_m2m()\n- for obj in formset.deleted_objects:\n- try:\n- obj.delete()\n- except ProtectedError as e:\n- self.message_user(request, e, level=messages.ERROR)\n- raise e\n-\n-\n #\n # File attachments\n #\n", "issue": "Remove Custom Fields from Admin UI\n### Proposed Changes\r\n\r\nRemove custom fields from Admin UI. 
This should be as simple as deleting a bunch of code from `nautobot/extras/admin.py` that's no longer needed.\r\n\r\n### Justification\r\n\r\nNow that we have custom field management in the regular UI (#735, #997), the admin UI for custom field management is redundant.\n", "before_files": [{"content": "from db_file_storage.form_widgets import DBAdminClearableFileInput\nfrom django import forms\nfrom django.contrib import admin, messages\nfrom django.db import transaction\nfrom django.db.models import ProtectedError\n\nfrom .models import CustomField, CustomFieldChoice, FileProxy, JobResult\n\n\ndef order_content_types(field):\n \"\"\"\n Order the list of available ContentTypes by application\n \"\"\"\n queryset = field.queryset.order_by(\"app_label\", \"model\")\n field.choices = [(ct.pk, \"{} > {}\".format(ct.app_label, ct.name)) for ct in queryset]\n\n\n#\n# Custom fields\n#\n\n\nclass CustomFieldForm(forms.ModelForm):\n class Meta:\n model = CustomField\n exclude = []\n widgets = {\n \"default\": forms.TextInput(),\n \"validation_regex\": forms.Textarea(\n attrs={\n \"cols\": 80,\n \"rows\": 3,\n }\n ),\n }\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n order_content_types(self.fields[\"content_types\"])\n\n\nclass CustomFieldChoiceAdmin(admin.TabularInline):\n \"\"\"\n Defines the inline formset factory that handles choices for selection type custom fields.\n The `extra` defines the default number of inline rows that appear in the UI.\n \"\"\"\n\n model = CustomFieldChoice\n extra = 5\n\n\[email protected](CustomField)\nclass CustomFieldAdmin(admin.ModelAdmin):\n \"\"\"\n Define the structure and composition of the custom field form in the admin panel.\n \"\"\"\n\n actions = None\n form = CustomFieldForm\n inlines = [CustomFieldChoiceAdmin]\n list_display = [\n \"name\",\n \"models\",\n \"type\",\n \"required\",\n \"filter_logic\",\n \"default\",\n \"weight\",\n \"description\",\n ]\n list_filter = [\n \"type\",\n \"required\",\n \"content_types\",\n ]\n fieldsets = (\n (\n \"Custom Field\",\n {\n \"fields\": (\n \"type\",\n \"name\",\n \"weight\",\n \"label\",\n \"description\",\n \"required\",\n \"default\",\n \"filter_logic\",\n )\n },\n ),\n (\n \"Assignment\",\n {\n \"description\": \"A custom field must be assigned to one or more object types.\",\n \"fields\": (\"content_types\",),\n },\n ),\n (\n \"Validation Rules\",\n {\n \"fields\": (\n \"validation_minimum\",\n \"validation_maximum\",\n \"validation_regex\",\n ),\n \"classes\": (\"monospace\",),\n },\n ),\n )\n\n def models(self, obj):\n return \", \".join([ct.name for ct in obj.content_types.all()])\n\n @transaction.atomic\n def save_formset(self, request, form, formset, change):\n # TODO(John): revisit this when custom fields are moved out of admin... 
there is a better way...\n if formset.model != CustomFieldChoice:\n return super().save_formset(request, form, formset, change)\n instances = formset.save(commit=False)\n for instance in instances:\n instance.save()\n formset.save_m2m()\n for obj in formset.deleted_objects:\n try:\n obj.delete()\n except ProtectedError as e:\n self.message_user(request, e, level=messages.ERROR)\n raise e\n\n\n#\n# File attachments\n#\n\n\nclass FileProxyForm(forms.ModelForm):\n class Meta:\n model = FileProxy\n exclude = []\n widgets = {\n \"file\": DBAdminClearableFileInput,\n }\n\n\[email protected](FileProxy)\nclass FileProxyAdmin(admin.ModelAdmin):\n form = FileProxyForm\n list_display = [\"name\", \"uploaded_at\"]\n list_filter = [\"uploaded_at\"]\n\n\n#\n# Job results (jobs, scripts, reports, Git repository sync, etc.)\n#\n\n\[email protected](JobResult)\nclass JobResultAdmin(admin.ModelAdmin):\n list_display = [\n \"obj_type\",\n \"name\",\n \"created\",\n \"completed\",\n \"user\",\n \"status\",\n ]\n fields = [\n \"obj_type\",\n \"name\",\n \"created\",\n \"completed\",\n \"user\",\n \"status\",\n \"data\",\n \"job_id\",\n ]\n list_filter = [\n \"status\",\n ]\n readonly_fields = fields\n\n def has_add_permission(self, request):\n return False\n", "path": "nautobot/extras/admin.py"}], "after_files": [{"content": "from db_file_storage.form_widgets import DBAdminClearableFileInput\nfrom django import forms\nfrom django.contrib import admin\n\nfrom .models import FileProxy, JobResult\n\n\ndef order_content_types(field):\n \"\"\"\n Order the list of available ContentTypes by application\n \"\"\"\n queryset = field.queryset.order_by(\"app_label\", \"model\")\n field.choices = [(ct.pk, \"{} > {}\".format(ct.app_label, ct.name)) for ct in queryset]\n\n\n#\n# File attachments\n#\n\n\nclass FileProxyForm(forms.ModelForm):\n class Meta:\n model = FileProxy\n exclude = []\n widgets = {\n \"file\": DBAdminClearableFileInput,\n }\n\n\[email protected](FileProxy)\nclass FileProxyAdmin(admin.ModelAdmin):\n form = FileProxyForm\n list_display = [\"name\", \"uploaded_at\"]\n list_filter = [\"uploaded_at\"]\n\n\n#\n# Job results (jobs, scripts, reports, Git repository sync, etc.)\n#\n\n\[email protected](JobResult)\nclass JobResultAdmin(admin.ModelAdmin):\n list_display = [\n \"obj_type\",\n \"name\",\n \"created\",\n \"completed\",\n \"user\",\n \"status\",\n ]\n fields = [\n \"obj_type\",\n \"name\",\n \"created\",\n \"completed\",\n \"user\",\n \"status\",\n \"data\",\n \"job_id\",\n ]\n list_filter = [\n \"status\",\n ]\n readonly_fields = fields\n\n def has_add_permission(self, request):\n return False\n", "path": "nautobot/extras/admin.py"}]} | 1,737 | 875 |
gh_patches_debug_4375 | rasdani/github-patches | git_diff | ansible__ansible-modules-extras-3020 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
#446 broke npm state=latest for missing packages
##### Issue Type:
- Bug Report (`npm` module)
##### Ansible Version:
Running against devel:
``` console
$ ansible --version
ansible 2.1.0 (devel be5488cb60) last updated 2015/12/15 09:36:59 (GMT -400)
lib/ansible/modules/core: (devel 6b13da738b) last updated 2015/12/15 09:38:18 (GMT -400)
lib/ansible/modules/extras: (devel f3251de29c) last updated 2015/12/15 09:38:42 (GMT -400)
config file = /home/tomxtobin/.ansible.cfg
configured module search path = Default w/o overrides
```
##### Ansible Configuration:
``` ini
[defaults]
hostfile = ~/ansible/hosts
nocows = 1
```
##### Environment:
N/A (but it's Arch Linux)
##### Summary:
It looks like PR #446 broke `npm: name=foo state=latest` for a missing package `foo` (i.e., `foo` isn't present on the system yet).
Suggested fix: for `state == 'latest'`, actually differentiate between the result of checking `len(missing)` and `len(outdated)` to see whether the package is installed or not, and run either `npm.install()` or `npm.update()` as appropriate.
##### Steps To Reproduce:
Let's use the `gulp` package as an example.
On a system that doesn't already have `gulp` installed globally:
``` console
$ ansible example-host -m command -a 'npm ls -g -depth 0 gulp'
example-host | FAILED | rc=1 >>
/usr/lib
└── (empty)npm ERR! code 1
```
``` console
$ ansible example-host -m command -a 'gulp --version'
example-host | FAILED | rc=2 >>
[Errno 2] No such file or directory
```
Run a task against such system to install `gulp` globally:
``` console
$ ansible example-host -m npm -a 'name=gulp state=latest global=yes'
example-host | SUCCESS => {
"changed": true
}
```
##### Expected Results:
The module (`gulp`, above) actually gets installed on the system(s) I'm running that task against.
Against such a system, I can run something like:
``` console
$ ansible example-host -m command -a 'npm ls -g -depth 0 gulp'
example-host | SUCCESS | rc=0 >>
/usr/lib
└── [email protected]
```
``` console
$ ansible example-host -m command -a 'gulp --version'
example-host | SUCCESS | rc=0 >>
[15:24:28] CLI version 3.9.0
```
(Assuming the latest version of `gulp` happened to be `3.9.0`.)
##### Actual Results:
Ansible claims it succeeds in running the task, but it doesn't actually install `gulp` on the system(s) in question.
On such a system:
``` console
$ ansible example-host -m command -a 'npm ls -g -depth 0 gulp'
example-host | FAILED | rc=1 >>
/usr/lib
└── (empty)npm ERR! code 1
```
``` console
$ ansible example-host -m command -a 'gulp --version'
example-host | FAILED | rc=2 >>
[Errno 2] No such file or directory
```
You can actually keep re-running the task over and over, and Ansible will keep claiming to successfully install `gulp`:
``` console
$ ansible example-host -m npm -a 'name=gulp state=latest global=yes'
example-host | SUCCESS => {
"changed": true
}
$ ansible example-host -m npm -a 'name=gulp state=latest global=yes'
example-host | SUCCESS => {
"changed": true
}
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `packaging/language/npm.py`
Content:
```
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # (c) 2013, Chris Hoffman <[email protected]>
5 #
6 # This file is part of Ansible
7 #
8 # Ansible is free software: you can redistribute it and/or modify
9 # it under the terms of the GNU General Public License as published by
10 # the Free Software Foundation, either version 3 of the License, or
11 # (at your option) any later version.
12 #
13 # Ansible is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU General Public License for more details.
17 #
18 # You should have received a copy of the GNU General Public License
19 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
20
21 DOCUMENTATION = '''
22 ---
23 module: npm
24 short_description: Manage node.js packages with npm
25 description:
26 - Manage node.js packages with Node Package Manager (npm)
27 version_added: 1.2
28 author: "Chris Hoffman (@chrishoffman)"
29 options:
30 name:
31 description:
32 - The name of a node.js library to install
33 required: false
34 path:
35 description:
36 - The base path where to install the node.js libraries
37 required: false
38 version:
39 description:
40 - The version to be installed
41 required: false
42 global:
43 description:
44 - Install the node.js library globally
45 required: false
46 default: no
47 choices: [ "yes", "no" ]
48 executable:
49 description:
50 - The executable location for npm.
51 - This is useful if you are using a version manager, such as nvm
52 required: false
53 ignore_scripts:
54 description:
55 - Use the --ignore-scripts flag when installing.
56 required: false
57 choices: [ "yes", "no" ]
58 default: no
59 version_added: "1.8"
60 production:
61 description:
62 - Install dependencies in production mode, excluding devDependencies
63 required: false
64 choices: [ "yes", "no" ]
65 default: no
66 registry:
67 description:
68 - The registry to install modules from.
69 required: false
70 version_added: "1.6"
71 state:
72 description:
73 - The state of the node.js library
74 required: false
75 default: present
76 choices: [ "present", "absent", "latest" ]
77 '''
78
79 EXAMPLES = '''
80 description: Install "coffee-script" node.js package.
81 - npm: name=coffee-script path=/app/location
82
83 description: Install "coffee-script" node.js package on version 1.6.1.
84 - npm: name=coffee-script version=1.6.1 path=/app/location
85
86 description: Install "coffee-script" node.js package globally.
87 - npm: name=coffee-script global=yes
88
89 description: Remove the globally package "coffee-script".
90 - npm: name=coffee-script global=yes state=absent
91
92 description: Install "coffee-script" node.js package from custom registry.
93 - npm: name=coffee-script registry=http://registry.mysite.com
94
95 description: Install packages based on package.json.
96 - npm: path=/app/location
97
98 description: Update packages based on package.json to their latest version.
99 - npm: path=/app/location state=latest
100
101 description: Install packages based on package.json using the npm installed with nvm v0.10.1.
102 - npm: path=/app/location executable=/opt/nvm/v0.10.1/bin/npm state=present
103 '''
104
105 import os
106
107 try:
108 import json
109 except ImportError:
110 try:
111 import simplejson as json
112 except ImportError:
113 # Let snippet from module_utils/basic.py return a proper error in this case
114 pass
115
116
117 class Npm(object):
118 def __init__(self, module, **kwargs):
119 self.module = module
120 self.glbl = kwargs['glbl']
121 self.name = kwargs['name']
122 self.version = kwargs['version']
123 self.path = kwargs['path']
124 self.registry = kwargs['registry']
125 self.production = kwargs['production']
126 self.ignore_scripts = kwargs['ignore_scripts']
127
128 if kwargs['executable']:
129 self.executable = kwargs['executable'].split(' ')
130 else:
131 self.executable = [module.get_bin_path('npm', True)]
132
133 if kwargs['version']:
134 self.name_version = self.name + '@' + str(self.version)
135 else:
136 self.name_version = self.name
137
138 def _exec(self, args, run_in_check_mode=False, check_rc=True):
139 if not self.module.check_mode or (self.module.check_mode and run_in_check_mode):
140 cmd = self.executable + args
141
142 if self.glbl:
143 cmd.append('--global')
144 if self.production:
145 cmd.append('--production')
146 if self.ignore_scripts:
147 cmd.append('--ignore-scripts')
148 if self.name:
149 cmd.append(self.name_version)
150 if self.registry:
151 cmd.append('--registry')
152 cmd.append(self.registry)
153
154 #If path is specified, cd into that path and run the command.
155 cwd = None
156 if self.path:
157 if not os.path.exists(self.path):
158 os.makedirs(self.path)
159 if not os.path.isdir(self.path):
160 self.module.fail_json(msg="path %s is not a directory" % self.path)
161 cwd = self.path
162
163 rc, out, err = self.module.run_command(cmd, check_rc=check_rc, cwd=cwd)
164 return out
165 return ''
166
167 def list(self):
168 cmd = ['list', '--json']
169
170 installed = list()
171 missing = list()
172 data = json.loads(self._exec(cmd, True, False))
173 if 'dependencies' in data:
174 for dep in data['dependencies']:
175 if 'missing' in data['dependencies'][dep] and data['dependencies'][dep]['missing']:
176 missing.append(dep)
177 elif 'invalid' in data['dependencies'][dep] and data['dependencies'][dep]['invalid']:
178 missing.append(dep)
179 else:
180 installed.append(dep)
181 if self.name and self.name not in installed:
182 missing.append(self.name)
183 #Named dependency not installed
184 else:
185 missing.append(self.name)
186
187 return installed, missing
188
189 def install(self):
190 return self._exec(['install'])
191
192 def update(self):
193 return self._exec(['update'])
194
195 def uninstall(self):
196 return self._exec(['uninstall'])
197
198 def list_outdated(self):
199 outdated = list()
200 data = self._exec(['outdated'], True, False)
201 for dep in data.splitlines():
202 if dep:
203 # node.js v0.10.22 changed the `npm outdated` module separator
204 # from "@" to " ". Split on both for backwards compatibility.
205 pkg, other = re.split('\s|@', dep, 1)
206 outdated.append(pkg)
207
208 return outdated
209
210
211 def main():
212 arg_spec = dict(
213 name=dict(default=None),
214 path=dict(default=None, type='path'),
215 version=dict(default=None),
216 production=dict(default='no', type='bool'),
217 executable=dict(default=None, type='path'),
218 registry=dict(default=None),
219 state=dict(default='present', choices=['present', 'absent', 'latest']),
220 ignore_scripts=dict(default=False, type='bool'),
221 )
222 arg_spec['global'] = dict(default='no', type='bool')
223 module = AnsibleModule(
224 argument_spec=arg_spec,
225 supports_check_mode=True
226 )
227
228 name = module.params['name']
229 path = module.params['path']
230 version = module.params['version']
231 glbl = module.params['global']
232 production = module.params['production']
233 executable = module.params['executable']
234 registry = module.params['registry']
235 state = module.params['state']
236 ignore_scripts = module.params['ignore_scripts']
237
238 if not path and not glbl:
239 module.fail_json(msg='path must be specified when not using global')
240 if state == 'absent' and not name:
241 module.fail_json(msg='uninstalling a package is only available for named packages')
242
243 npm = Npm(module, name=name, path=path, version=version, glbl=glbl, production=production, \
244 executable=executable, registry=registry, ignore_scripts=ignore_scripts)
245
246 changed = False
247 if state == 'present':
248 installed, missing = npm.list()
249 if len(missing):
250 changed = True
251 npm.install()
252 elif state == 'latest':
253 installed, missing = npm.list()
254 outdated = npm.list_outdated()
255 if len(missing) or len(outdated):
256 changed = True
257 npm.update()
258 else: #absent
259 installed, missing = npm.list()
260 if name in installed:
261 changed = True
262 npm.uninstall()
263
264 module.exit_json(changed=changed)
265
266 # import module snippets
267 from ansible.module_utils.basic import *
268 main()
269
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/packaging/language/npm.py b/packaging/language/npm.py
--- a/packaging/language/npm.py
+++ b/packaging/language/npm.py
@@ -252,7 +252,10 @@
elif state == 'latest':
installed, missing = npm.list()
outdated = npm.list_outdated()
- if len(missing) or len(outdated):
+ if len(missing):
+ changed = True
+ npm.install()
+ if len(outdated):
changed = True
npm.update()
else: #absent
| {"golden_diff": "diff --git a/packaging/language/npm.py b/packaging/language/npm.py\n--- a/packaging/language/npm.py\n+++ b/packaging/language/npm.py\n@@ -252,7 +252,10 @@\n elif state == 'latest':\n installed, missing = npm.list()\n outdated = npm.list_outdated()\n- if len(missing) or len(outdated):\n+ if len(missing):\n+ changed = True\n+ npm.install()\n+ if len(outdated):\n changed = True\n npm.update()\n else: #absent\n", "issue": "#446 broke npm state=latest for missing packages\n##### Issue Type:\n- Bug Report (`npm` module)\n##### Ansible Version:\n\nRunning against devel:\n\n``` console\n$ ansible --version\nansible 2.1.0 (devel be5488cb60) last updated 2015/12/15 09:36:59 (GMT -400)\n lib/ansible/modules/core: (devel 6b13da738b) last updated 2015/12/15 09:38:18 (GMT -400)\n lib/ansible/modules/extras: (devel f3251de29c) last updated 2015/12/15 09:38:42 (GMT -400)\n config file = /home/tomxtobin/.ansible.cfg\n configured module search path = Default w/o overrides\n```\n##### Ansible Configuration:\n\n``` ini\n[defaults]\nhostfile = ~/ansible/hosts\nnocows = 1\n```\n##### Environment:\n\nN/A (but it's Arch Linux)\n##### Summary:\n\nIt looks like PR #446 broke `npm: name=foo state=latest` for a missing package `foo` (i.e., `foo` isn't present on the system yet).\n\nSuggested fix: for `state == 'latest'`, actually differentiate between the result of checking `len(missing)` and `len(outdated)` to see whether the package is installed or not, and run either `npm.install()` or `npm.update()` as appropriate.\n##### Steps To Reproduce:\n\nLet's use the `gulp` package as an example.\n\nOn a system that doesn't already have `gulp` installed globally:\n\n``` console\n$ ansible example-host -m command -a 'npm ls -g -depth 0 gulp' \nexample-host | FAILED | rc=1 >>\n/usr/lib\n\u2514\u2500\u2500 (empty)npm ERR! code 1\n```\n\n``` console\n$ ansible example-host -m command -a 'gulp --version' \nexample-host | FAILED | rc=2 >>\n[Errno 2] No such file or directory\n```\n\nRun a task against such system to install `gulp` globally:\n\n``` console\n$ ansible example-host -m npm -a 'name=gulp state=latest global=yes'\nexample-host | SUCCESS => {\n \"changed\": true\n}\n```\n##### Expected Results:\n\nThe module (`gulp`, above) actually gets installed on the system(s) I'm running that task against.\n\nAgainst such a system, I can run something like:\n\n``` console\n$ ansible example-host -m command -a 'npm ls -g -depth 0 gulp' \nexample-host | SUCCESS | rc=0 >>\n/usr/lib\n\u2514\u2500\u2500 [email protected] \n```\n\n``` console\n$ ansible example-host -m command -a 'gulp --version'\nexample-host | SUCCESS | rc=0 >>\n[15:24:28] CLI version 3.9.0\n```\n\n(Assuming the latest version of `gulp` happened to be `3.9.0`.)\n##### Actual Results:\n\nAnsible claims it succeeds in running the task, but it doesn't actually install `gulp` on the system(s) in question.\n\nOn such a system:\n\n``` console\n$ ansible example-host -m command -a 'npm ls -g -depth 0 gulp' \nexample-host | FAILED | rc=1 >>\n/usr/lib\n\u2514\u2500\u2500 (empty)npm ERR! 
code 1\n```\n\n``` console\n$ ansible example-host -m command -a 'gulp --version' \nexample-host | FAILED | rc=2 >>\n[Errno 2] No such file or directory\n```\n\nYou can actually keep re-running the task over and over, and Ansible will keep claiming to successfully install `gulp`:\n\n``` console\n$ ansible example-host -m npm -a 'name=gulp state=latest global=yes'\nexample-host | SUCCESS => {\n \"changed\": true\n}\n$ ansible example-host -m npm -a 'name=gulp state=latest global=yes'\nexample-host | SUCCESS => {\n \"changed\": true\n}\n```\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2013, Chris Hoffman <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\nDOCUMENTATION = '''\n---\nmodule: npm\nshort_description: Manage node.js packages with npm\ndescription:\n - Manage node.js packages with Node Package Manager (npm)\nversion_added: 1.2\nauthor: \"Chris Hoffman (@chrishoffman)\"\noptions:\n name:\n description:\n - The name of a node.js library to install\n required: false\n path:\n description:\n - The base path where to install the node.js libraries\n required: false\n version:\n description:\n - The version to be installed\n required: false\n global:\n description:\n - Install the node.js library globally\n required: false\n default: no\n choices: [ \"yes\", \"no\" ]\n executable:\n description:\n - The executable location for npm.\n - This is useful if you are using a version manager, such as nvm\n required: false\n ignore_scripts:\n description:\n - Use the --ignore-scripts flag when installing.\n required: false\n choices: [ \"yes\", \"no\" ]\n default: no\n version_added: \"1.8\"\n production:\n description:\n - Install dependencies in production mode, excluding devDependencies\n required: false\n choices: [ \"yes\", \"no\" ]\n default: no\n registry:\n description:\n - The registry to install modules from.\n required: false\n version_added: \"1.6\"\n state:\n description:\n - The state of the node.js library\n required: false\n default: present\n choices: [ \"present\", \"absent\", \"latest\" ]\n'''\n\nEXAMPLES = '''\ndescription: Install \"coffee-script\" node.js package.\n- npm: name=coffee-script path=/app/location\n\ndescription: Install \"coffee-script\" node.js package on version 1.6.1.\n- npm: name=coffee-script version=1.6.1 path=/app/location\n\ndescription: Install \"coffee-script\" node.js package globally.\n- npm: name=coffee-script global=yes\n\ndescription: Remove the globally package \"coffee-script\".\n- npm: name=coffee-script global=yes state=absent\n\ndescription: Install \"coffee-script\" node.js package from custom registry.\n- npm: name=coffee-script registry=http://registry.mysite.com\n\ndescription: Install packages based on package.json.\n- npm: path=/app/location\n\ndescription: Update packages based on package.json to their latest version.\n- npm: path=/app/location 
state=latest\n\ndescription: Install packages based on package.json using the npm installed with nvm v0.10.1.\n- npm: path=/app/location executable=/opt/nvm/v0.10.1/bin/npm state=present\n'''\n\nimport os\n\ntry:\n import json\nexcept ImportError:\n try:\n import simplejson as json\n except ImportError:\n # Let snippet from module_utils/basic.py return a proper error in this case\n pass\n\n\nclass Npm(object):\n def __init__(self, module, **kwargs):\n self.module = module\n self.glbl = kwargs['glbl']\n self.name = kwargs['name']\n self.version = kwargs['version']\n self.path = kwargs['path']\n self.registry = kwargs['registry']\n self.production = kwargs['production']\n self.ignore_scripts = kwargs['ignore_scripts']\n\n if kwargs['executable']:\n self.executable = kwargs['executable'].split(' ')\n else:\n self.executable = [module.get_bin_path('npm', True)]\n\n if kwargs['version']:\n self.name_version = self.name + '@' + str(self.version)\n else:\n self.name_version = self.name\n\n def _exec(self, args, run_in_check_mode=False, check_rc=True):\n if not self.module.check_mode or (self.module.check_mode and run_in_check_mode):\n cmd = self.executable + args\n\n if self.glbl:\n cmd.append('--global')\n if self.production:\n cmd.append('--production')\n if self.ignore_scripts:\n cmd.append('--ignore-scripts')\n if self.name:\n cmd.append(self.name_version)\n if self.registry:\n cmd.append('--registry')\n cmd.append(self.registry)\n\n #If path is specified, cd into that path and run the command.\n cwd = None\n if self.path:\n if not os.path.exists(self.path):\n os.makedirs(self.path)\n if not os.path.isdir(self.path):\n self.module.fail_json(msg=\"path %s is not a directory\" % self.path)\n cwd = self.path\n\n rc, out, err = self.module.run_command(cmd, check_rc=check_rc, cwd=cwd)\n return out\n return ''\n\n def list(self):\n cmd = ['list', '--json']\n\n installed = list()\n missing = list()\n data = json.loads(self._exec(cmd, True, False))\n if 'dependencies' in data:\n for dep in data['dependencies']:\n if 'missing' in data['dependencies'][dep] and data['dependencies'][dep]['missing']:\n missing.append(dep)\n elif 'invalid' in data['dependencies'][dep] and data['dependencies'][dep]['invalid']:\n missing.append(dep)\n else:\n installed.append(dep)\n if self.name and self.name not in installed:\n missing.append(self.name)\n #Named dependency not installed\n else:\n missing.append(self.name)\n\n return installed, missing\n\n def install(self):\n return self._exec(['install'])\n\n def update(self):\n return self._exec(['update'])\n\n def uninstall(self):\n return self._exec(['uninstall'])\n\n def list_outdated(self):\n outdated = list()\n data = self._exec(['outdated'], True, False)\n for dep in data.splitlines():\n if dep:\n # node.js v0.10.22 changed the `npm outdated` module separator\n # from \"@\" to \" \". 
Split on both for backwards compatibility.\n pkg, other = re.split('\\s|@', dep, 1)\n outdated.append(pkg)\n\n return outdated\n\n\ndef main():\n arg_spec = dict(\n name=dict(default=None),\n path=dict(default=None, type='path'),\n version=dict(default=None),\n production=dict(default='no', type='bool'),\n executable=dict(default=None, type='path'),\n registry=dict(default=None),\n state=dict(default='present', choices=['present', 'absent', 'latest']),\n ignore_scripts=dict(default=False, type='bool'),\n )\n arg_spec['global'] = dict(default='no', type='bool')\n module = AnsibleModule(\n argument_spec=arg_spec,\n supports_check_mode=True\n )\n\n name = module.params['name']\n path = module.params['path']\n version = module.params['version']\n glbl = module.params['global']\n production = module.params['production']\n executable = module.params['executable']\n registry = module.params['registry']\n state = module.params['state']\n ignore_scripts = module.params['ignore_scripts']\n\n if not path and not glbl:\n module.fail_json(msg='path must be specified when not using global')\n if state == 'absent' and not name:\n module.fail_json(msg='uninstalling a package is only available for named packages')\n\n npm = Npm(module, name=name, path=path, version=version, glbl=glbl, production=production, \\\n executable=executable, registry=registry, ignore_scripts=ignore_scripts)\n\n changed = False\n if state == 'present':\n installed, missing = npm.list()\n if len(missing):\n changed = True\n npm.install()\n elif state == 'latest':\n installed, missing = npm.list()\n outdated = npm.list_outdated()\n if len(missing) or len(outdated):\n changed = True\n npm.update()\n else: #absent\n installed, missing = npm.list()\n if name in installed:\n changed = True\n npm.uninstall()\n\n module.exit_json(changed=changed)\n\n# import module snippets\nfrom ansible.module_utils.basic import *\nmain()\n", "path": "packaging/language/npm.py"}], "after_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2013, Chris Hoffman <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. 
If not, see <http://www.gnu.org/licenses/>.\n\nDOCUMENTATION = '''\n---\nmodule: npm\nshort_description: Manage node.js packages with npm\ndescription:\n - Manage node.js packages with Node Package Manager (npm)\nversion_added: 1.2\nauthor: \"Chris Hoffman (@chrishoffman)\"\noptions:\n name:\n description:\n - The name of a node.js library to install\n required: false\n path:\n description:\n - The base path where to install the node.js libraries\n required: false\n version:\n description:\n - The version to be installed\n required: false\n global:\n description:\n - Install the node.js library globally\n required: false\n default: no\n choices: [ \"yes\", \"no\" ]\n executable:\n description:\n - The executable location for npm.\n - This is useful if you are using a version manager, such as nvm\n required: false\n ignore_scripts:\n description:\n - Use the --ignore-scripts flag when installing.\n required: false\n choices: [ \"yes\", \"no\" ]\n default: no\n version_added: \"1.8\"\n production:\n description:\n - Install dependencies in production mode, excluding devDependencies\n required: false\n choices: [ \"yes\", \"no\" ]\n default: no\n registry:\n description:\n - The registry to install modules from.\n required: false\n version_added: \"1.6\"\n state:\n description:\n - The state of the node.js library\n required: false\n default: present\n choices: [ \"present\", \"absent\", \"latest\" ]\n'''\n\nEXAMPLES = '''\ndescription: Install \"coffee-script\" node.js package.\n- npm: name=coffee-script path=/app/location\n\ndescription: Install \"coffee-script\" node.js package on version 1.6.1.\n- npm: name=coffee-script version=1.6.1 path=/app/location\n\ndescription: Install \"coffee-script\" node.js package globally.\n- npm: name=coffee-script global=yes\n\ndescription: Remove the globally package \"coffee-script\".\n- npm: name=coffee-script global=yes state=absent\n\ndescription: Install \"coffee-script\" node.js package from custom registry.\n- npm: name=coffee-script registry=http://registry.mysite.com\n\ndescription: Install packages based on package.json.\n- npm: path=/app/location\n\ndescription: Update packages based on package.json to their latest version.\n- npm: path=/app/location state=latest\n\ndescription: Install packages based on package.json using the npm installed with nvm v0.10.1.\n- npm: path=/app/location executable=/opt/nvm/v0.10.1/bin/npm state=present\n'''\n\nimport os\n\ntry:\n import json\nexcept ImportError:\n try:\n import simplejson as json\n except ImportError:\n # Let snippet from module_utils/basic.py return a proper error in this case\n pass\n\n\nclass Npm(object):\n def __init__(self, module, **kwargs):\n self.module = module\n self.glbl = kwargs['glbl']\n self.name = kwargs['name']\n self.version = kwargs['version']\n self.path = kwargs['path']\n self.registry = kwargs['registry']\n self.production = kwargs['production']\n self.ignore_scripts = kwargs['ignore_scripts']\n\n if kwargs['executable']:\n self.executable = kwargs['executable'].split(' ')\n else:\n self.executable = [module.get_bin_path('npm', True)]\n\n if kwargs['version']:\n self.name_version = self.name + '@' + str(self.version)\n else:\n self.name_version = self.name\n\n def _exec(self, args, run_in_check_mode=False, check_rc=True):\n if not self.module.check_mode or (self.module.check_mode and run_in_check_mode):\n cmd = self.executable + args\n\n if self.glbl:\n cmd.append('--global')\n if self.production:\n cmd.append('--production')\n if self.ignore_scripts:\n 
cmd.append('--ignore-scripts')\n if self.name:\n cmd.append(self.name_version)\n if self.registry:\n cmd.append('--registry')\n cmd.append(self.registry)\n\n #If path is specified, cd into that path and run the command.\n cwd = None\n if self.path:\n if not os.path.exists(self.path):\n os.makedirs(self.path)\n if not os.path.isdir(self.path):\n self.module.fail_json(msg=\"path %s is not a directory\" % self.path)\n cwd = self.path\n\n rc, out, err = self.module.run_command(cmd, check_rc=check_rc, cwd=cwd)\n return out\n return ''\n\n def list(self):\n cmd = ['list', '--json']\n\n installed = list()\n missing = list()\n data = json.loads(self._exec(cmd, True, False))\n if 'dependencies' in data:\n for dep in data['dependencies']:\n if 'missing' in data['dependencies'][dep] and data['dependencies'][dep]['missing']:\n missing.append(dep)\n elif 'invalid' in data['dependencies'][dep] and data['dependencies'][dep]['invalid']:\n missing.append(dep)\n else:\n installed.append(dep)\n if self.name and self.name not in installed:\n missing.append(self.name)\n #Named dependency not installed\n else:\n missing.append(self.name)\n\n return installed, missing\n\n def install(self):\n return self._exec(['install'])\n\n def update(self):\n return self._exec(['update'])\n\n def uninstall(self):\n return self._exec(['uninstall'])\n\n def list_outdated(self):\n outdated = list()\n data = self._exec(['outdated'], True, False)\n for dep in data.splitlines():\n if dep:\n # node.js v0.10.22 changed the `npm outdated` module separator\n # from \"@\" to \" \". Split on both for backwards compatibility.\n pkg, other = re.split('\\s|@', dep, 1)\n outdated.append(pkg)\n\n return outdated\n\n\ndef main():\n arg_spec = dict(\n name=dict(default=None),\n path=dict(default=None, type='path'),\n version=dict(default=None),\n production=dict(default='no', type='bool'),\n executable=dict(default=None, type='path'),\n registry=dict(default=None),\n state=dict(default='present', choices=['present', 'absent', 'latest']),\n ignore_scripts=dict(default=False, type='bool'),\n )\n arg_spec['global'] = dict(default='no', type='bool')\n module = AnsibleModule(\n argument_spec=arg_spec,\n supports_check_mode=True\n )\n\n name = module.params['name']\n path = module.params['path']\n version = module.params['version']\n glbl = module.params['global']\n production = module.params['production']\n executable = module.params['executable']\n registry = module.params['registry']\n state = module.params['state']\n ignore_scripts = module.params['ignore_scripts']\n\n if not path and not glbl:\n module.fail_json(msg='path must be specified when not using global')\n if state == 'absent' and not name:\n module.fail_json(msg='uninstalling a package is only available for named packages')\n\n npm = Npm(module, name=name, path=path, version=version, glbl=glbl, production=production, \\\n executable=executable, registry=registry, ignore_scripts=ignore_scripts)\n\n changed = False\n if state == 'present':\n installed, missing = npm.list()\n if len(missing):\n changed = True\n npm.install()\n elif state == 'latest':\n installed, missing = npm.list()\n outdated = npm.list_outdated()\n if len(missing):\n changed = True\n npm.install()\n if len(outdated):\n changed = True\n npm.update()\n else: #absent\n installed, missing = npm.list()\n if name in installed:\n changed = True\n npm.uninstall()\n\n module.exit_json(changed=changed)\n\n# import module snippets\nfrom ansible.module_utils.basic import *\nmain()\n", "path": "packaging/language/npm.py"}]} | 
3,808 | 127 |
gh_patches_debug_14993 | rasdani/github-patches | git_diff | PrefectHQ__prefect-1583 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add example code block to `switch` docstring
I recently realized I hadn't touched the `switch` code in a long time, and I would've really appreciated an example to work off of. Instead, I ended up looking at our tests which most users won't want to do. Relevant doc: https://docs.prefect.io/api/unreleased/tasks/control_flow.html#prefect-tasks-control-flow-conditional-switch
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/tasks/control_flow/conditional.py`
Content:
```
1 from typing import Any, Dict
2
3 import prefect
4 from prefect import Task
5 from prefect.engine import signals
6 from prefect.engine.result import NoResult
7
8 __all__ = ["switch", "ifelse"]
9
10
11 class Merge(Task):
12 def __init__(self, **kwargs) -> None:
13 if kwargs.setdefault("skip_on_upstream_skip", False):
14 raise ValueError("Merge tasks must have `skip_on_upstream_skip=False`.")
15 super().__init__(**kwargs)
16
17 def run(self, **task_results: Any) -> Any:
18 return next((v for v in task_results.values() if v != NoResult), None)
19
20
21 class CompareValue(Task):
22 """
23 This task stores a `value` at initialization and compares it to a `value` received at runtime.
24 If the values don't match, it raises a SKIP exception.
25
26 Args:
27 - value (Any): the value this task will attempt to match when it runs
28 - **kwargs: keyword arguments for the Task
29 """
30
31 def __init__(self, value: Any, **kwargs: Any):
32 self.value = value
33 kwargs.setdefault("name", 'CompareValue: "{}"'.format(value))
34 super().__init__(**kwargs)
35
36 def run(self, value: Any) -> None:
37 """
38 Raises a SKIP signal if the passed value does not match the task's match value;
39 succeeds silently otherwise.
40
41 Args:
42 - value (Any): the value that will be matched against the task's value.
43 """
44 if value != self.value:
45 raise signals.SKIP(
46 'Provided value "{}" did not match "{}"'.format(value, self.value)
47 )
48
49
50 def switch(condition: Task, cases: Dict[Any, Task]) -> None:
51 """
52 Adds a SWITCH to a workflow.
53
54 The condition task is evaluated and the result is compared to the keys of the cases
55 dictionary. The task corresponding to the matching key is run; all other tasks are
56 skipped. Any tasks downstream of the skipped tasks are also skipped unless they set
57 `skip_on_upstream_skip=False`.
58
59 Args:
60 - condition (Task): a task whose result forms the condition for the switch
61 - cases (Dict[Any, Task]): a dict representing the "case" statements of the switch.
62 The value of the `condition` task will be compared to the keys of this dict, and
63 the matching task will be executed.
64
65 Raises:
66 - PrefectWarning: if any of the tasks in "cases" have upstream dependencies,
67 then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. The most common cause of this
68 is passing a list of tasks as one of the cases, which adds the `List` task
69 to the switch condition but leaves the tasks themselves upstream.
70 """
71
72 with prefect.tags("switch"):
73 for value, task in cases.items():
74 task = prefect.utilities.tasks.as_task(task)
75 match_condition = CompareValue(value=value).bind(value=condition)
76 task.set_dependencies(upstream_tasks=[match_condition])
77
78
79 def ifelse(condition: Task, true_task: Task, false_task: Task) -> None:
80 """
81 Builds a conditional branch into a workflow.
82
83 If the condition evaluates True(ish), the true_task will run. If it
84 evaluates False(ish), the false_task will run. The task doesn't run is Skipped, as are
85 all downstream tasks that don't set `skip_on_upstream_skip=False`.
86
87 Args:
88 - condition (Task): a task whose boolean result forms the condition for the ifelse
89 - true_task (Task): a task that will be executed if the condition is True
90 - false_task (Task): a task that will be executed if the condition is False
91 """
92
93 switch(condition=condition, cases={True: true_task, False: false_task})
94
95
96 def merge(*tasks: Task) -> Task:
97 """
98 Merges conditional branches back together.
99
100 A conditional branch in a flow results in one or more tasks proceeding and one or
101 more tasks skipping. It is often convenient to merge those branches back into a
102 single result. This function is a simple way to achieve that goal.
103
104 The merge will return the first real result it encounters, or `None`. If multiple
105 tasks might return a result, group them with a list.
106
107 Example:
108 ```python
109 with Flow("My Flow"):
110 true_branch = ActionIfTrue()
111 false_branch = ActionIfFalse()
112 ifelse(CheckCondition(), true_branch, false_branch)
113
114 merged_result = merge(true_branch, false_branch)
115 ```
116
117 Args:
118 - *tasks (Task): tasks whose results should be merged into a single result. The tasks are
119 assumed to all sit downstream of different `switch` branches, such that only
120 one of them will contain a result and the others will all be skipped.
121
122 Returns:
123 - Task: a Task representing the merged result.
124
125 """
126 return Merge().bind(**{"task_{}".format(i + 1): t for i, t in enumerate(tasks)})
127
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/prefect/tasks/control_flow/conditional.py b/src/prefect/tasks/control_flow/conditional.py
--- a/src/prefect/tasks/control_flow/conditional.py
+++ b/src/prefect/tasks/control_flow/conditional.py
@@ -56,6 +56,24 @@
skipped. Any tasks downstream of the skipped tasks are also skipped unless they set
`skip_on_upstream_skip=False`.
+ Example:
+ ```python
+ @task
+ def condition():
+ return "b" # returning 'b' will take the b_branch
+
+ @task
+ def a_branch():
+ return "A Branch"
+
+ @task
+ def b_branch():
+ return "B Branch"
+
+ with Flow("switch-flow") as flow:
+ switch(condition, dict(a=a_branch, b=b_branch))
+ ```
+
Args:
- condition (Task): a task whose result forms the condition for the switch
- cases (Dict[Any, Task]): a dict representing the "case" statements of the switch.
| {"golden_diff": "diff --git a/src/prefect/tasks/control_flow/conditional.py b/src/prefect/tasks/control_flow/conditional.py\n--- a/src/prefect/tasks/control_flow/conditional.py\n+++ b/src/prefect/tasks/control_flow/conditional.py\n@@ -56,6 +56,24 @@\n skipped. Any tasks downstream of the skipped tasks are also skipped unless they set\n `skip_on_upstream_skip=False`.\n \n+ Example:\n+ ```python\n+ @task\n+ def condition():\n+ return \"b\" # returning 'b' will take the b_branch\n+\n+ @task\n+ def a_branch():\n+ return \"A Branch\"\n+\n+ @task\n+ def b_branch():\n+ return \"B Branch\"\n+\n+ with Flow(\"switch-flow\") as flow:\n+ switch(condition, dict(a=a_branch, b=b_branch))\n+ ```\n+\n Args:\n - condition (Task): a task whose result forms the condition for the switch\n - cases (Dict[Any, Task]): a dict representing the \"case\" statements of the switch.\n", "issue": "Add example code block to `switch` docstring\nI recently realized I hadn't touched the `switch` code in a long time, and I would've really appreciated an example to work off of. Instead, I ended up looking at our tests which most users won't want to do. Relevant doc: https://docs.prefect.io/api/unreleased/tasks/control_flow.html#prefect-tasks-control-flow-conditional-switch\n", "before_files": [{"content": "from typing import Any, Dict\n\nimport prefect\nfrom prefect import Task\nfrom prefect.engine import signals\nfrom prefect.engine.result import NoResult\n\n__all__ = [\"switch\", \"ifelse\"]\n\n\nclass Merge(Task):\n def __init__(self, **kwargs) -> None:\n if kwargs.setdefault(\"skip_on_upstream_skip\", False):\n raise ValueError(\"Merge tasks must have `skip_on_upstream_skip=False`.\")\n super().__init__(**kwargs)\n\n def run(self, **task_results: Any) -> Any:\n return next((v for v in task_results.values() if v != NoResult), None)\n\n\nclass CompareValue(Task):\n \"\"\"\n This task stores a `value` at initialization and compares it to a `value` received at runtime.\n If the values don't match, it raises a SKIP exception.\n\n Args:\n - value (Any): the value this task will attempt to match when it runs\n - **kwargs: keyword arguments for the Task\n \"\"\"\n\n def __init__(self, value: Any, **kwargs: Any):\n self.value = value\n kwargs.setdefault(\"name\", 'CompareValue: \"{}\"'.format(value))\n super().__init__(**kwargs)\n\n def run(self, value: Any) -> None:\n \"\"\"\n Raises a SKIP signal if the passed value does not match the task's match value;\n succeeds silently otherwise.\n\n Args:\n - value (Any): the value that will be matched against the task's value.\n \"\"\"\n if value != self.value:\n raise signals.SKIP(\n 'Provided value \"{}\" did not match \"{}\"'.format(value, self.value)\n )\n\n\ndef switch(condition: Task, cases: Dict[Any, Task]) -> None:\n \"\"\"\n Adds a SWITCH to a workflow.\n\n The condition task is evaluated and the result is compared to the keys of the cases\n dictionary. The task corresponding to the matching key is run; all other tasks are\n skipped. 
Any tasks downstream of the skipped tasks are also skipped unless they set\n `skip_on_upstream_skip=False`.\n\n Args:\n - condition (Task): a task whose result forms the condition for the switch\n - cases (Dict[Any, Task]): a dict representing the \"case\" statements of the switch.\n The value of the `condition` task will be compared to the keys of this dict, and\n the matching task will be executed.\n\n Raises:\n - PrefectWarning: if any of the tasks in \"cases\" have upstream dependencies,\n then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. The most common cause of this\n is passing a list of tasks as one of the cases, which adds the `List` task\n to the switch condition but leaves the tasks themselves upstream.\n \"\"\"\n\n with prefect.tags(\"switch\"):\n for value, task in cases.items():\n task = prefect.utilities.tasks.as_task(task)\n match_condition = CompareValue(value=value).bind(value=condition)\n task.set_dependencies(upstream_tasks=[match_condition])\n\n\ndef ifelse(condition: Task, true_task: Task, false_task: Task) -> None:\n \"\"\"\n Builds a conditional branch into a workflow.\n\n If the condition evaluates True(ish), the true_task will run. If it\n evaluates False(ish), the false_task will run. The task doesn't run is Skipped, as are\n all downstream tasks that don't set `skip_on_upstream_skip=False`.\n\n Args:\n - condition (Task): a task whose boolean result forms the condition for the ifelse\n - true_task (Task): a task that will be executed if the condition is True\n - false_task (Task): a task that will be executed if the condition is False\n \"\"\"\n\n switch(condition=condition, cases={True: true_task, False: false_task})\n\n\ndef merge(*tasks: Task) -> Task:\n \"\"\"\n Merges conditional branches back together.\n\n A conditional branch in a flow results in one or more tasks proceeding and one or\n more tasks skipping. It is often convenient to merge those branches back into a\n single result. This function is a simple way to achieve that goal.\n\n The merge will return the first real result it encounters, or `None`. If multiple\n tasks might return a result, group them with a list.\n\n Example:\n ```python\n with Flow(\"My Flow\"):\n true_branch = ActionIfTrue()\n false_branch = ActionIfFalse()\n ifelse(CheckCondition(), true_branch, false_branch)\n\n merged_result = merge(true_branch, false_branch)\n ```\n\n Args:\n - *tasks (Task): tasks whose results should be merged into a single result. 
The tasks are\n assumed to all sit downstream of different `switch` branches, such that only\n one of them will contain a result and the others will all be skipped.\n\n Returns:\n - Task: a Task representing the merged result.\n\n \"\"\"\n return Merge().bind(**{\"task_{}\".format(i + 1): t for i, t in enumerate(tasks)})\n", "path": "src/prefect/tasks/control_flow/conditional.py"}], "after_files": [{"content": "from typing import Any, Dict\n\nimport prefect\nfrom prefect import Task\nfrom prefect.engine import signals\nfrom prefect.engine.result import NoResult\n\n__all__ = [\"switch\", \"ifelse\"]\n\n\nclass Merge(Task):\n def __init__(self, **kwargs) -> None:\n if kwargs.setdefault(\"skip_on_upstream_skip\", False):\n raise ValueError(\"Merge tasks must have `skip_on_upstream_skip=False`.\")\n super().__init__(**kwargs)\n\n def run(self, **task_results: Any) -> Any:\n return next((v for v in task_results.values() if v != NoResult), None)\n\n\nclass CompareValue(Task):\n \"\"\"\n This task stores a `value` at initialization and compares it to a `value` received at runtime.\n If the values don't match, it raises a SKIP exception.\n\n Args:\n - value (Any): the value this task will attempt to match when it runs\n - **kwargs: keyword arguments for the Task\n \"\"\"\n\n def __init__(self, value: Any, **kwargs: Any):\n self.value = value\n kwargs.setdefault(\"name\", 'CompareValue: \"{}\"'.format(value))\n super().__init__(**kwargs)\n\n def run(self, value: Any) -> None:\n \"\"\"\n Raises a SKIP signal if the passed value does not match the task's match value;\n succeeds silently otherwise.\n\n Args:\n - value (Any): the value that will be matched against the task's value.\n \"\"\"\n if value != self.value:\n raise signals.SKIP(\n 'Provided value \"{}\" did not match \"{}\"'.format(value, self.value)\n )\n\n\ndef switch(condition: Task, cases: Dict[Any, Task]) -> None:\n \"\"\"\n Adds a SWITCH to a workflow.\n\n The condition task is evaluated and the result is compared to the keys of the cases\n dictionary. The task corresponding to the matching key is run; all other tasks are\n skipped. Any tasks downstream of the skipped tasks are also skipped unless they set\n `skip_on_upstream_skip=False`.\n\n Example:\n ```python\n @task\n def condition():\n return \"b\" # returning 'b' will take the b_branch\n\n @task\n def a_branch():\n return \"A Branch\"\n\n @task\n def b_branch():\n return \"B Branch\"\n\n with Flow(\"switch-flow\") as flow:\n switch(condition, dict(a=a_branch, b=b_branch))\n ```\n\n Args:\n - condition (Task): a task whose result forms the condition for the switch\n - cases (Dict[Any, Task]): a dict representing the \"case\" statements of the switch.\n The value of the `condition` task will be compared to the keys of this dict, and\n the matching task will be executed.\n\n Raises:\n - PrefectWarning: if any of the tasks in \"cases\" have upstream dependencies,\n then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. 
The most common cause of this\n is passing a list of tasks as one of the cases, which adds the `List` task\n to the switch condition but leaves the tasks themselves upstream.\n \"\"\"\n\n with prefect.tags(\"switch\"):\n for value, task in cases.items():\n task = prefect.utilities.tasks.as_task(task)\n match_condition = CompareValue(value=value).bind(value=condition)\n task.set_dependencies(upstream_tasks=[match_condition])\n\n\ndef ifelse(condition: Task, true_task: Task, false_task: Task) -> None:\n \"\"\"\n Builds a conditional branch into a workflow.\n\n If the condition evaluates True(ish), the true_task will run. If it\n evaluates False(ish), the false_task will run. The task doesn't run is Skipped, as are\n all downstream tasks that don't set `skip_on_upstream_skip=False`.\n\n Args:\n - condition (Task): a task whose boolean result forms the condition for the ifelse\n - true_task (Task): a task that will be executed if the condition is True\n - false_task (Task): a task that will be executed if the condition is False\n \"\"\"\n\n switch(condition=condition, cases={True: true_task, False: false_task})\n\n\ndef merge(*tasks: Task) -> Task:\n \"\"\"\n Merges conditional branches back together.\n\n A conditional branch in a flow results in one or more tasks proceeding and one or\n more tasks skipping. It is often convenient to merge those branches back into a\n single result. This function is a simple way to achieve that goal.\n\n The merge will return the first real result it encounters, or `None`. If multiple\n tasks might return a result, group them with a list.\n\n Example:\n ```python\n with Flow(\"My Flow\"):\n true_branch = ActionIfTrue()\n false_branch = ActionIfFalse()\n ifelse(CheckCondition(), true_branch, false_branch)\n\n merged_result = merge(true_branch, false_branch)\n ```\n\n Args:\n - *tasks (Task): tasks whose results should be merged into a single result. The tasks are\n assumed to all sit downstream of different `switch` branches, such that only\n one of them will contain a result and the others will all be skipped.\n\n Returns:\n - Task: a Task representing the merged result.\n\n \"\"\"\n return Merge().bind(**{\"task_{}\".format(i + 1): t for i, t in enumerate(tasks)})\n", "path": "src/prefect/tasks/control_flow/conditional.py"}]} | 1,724 | 238 |