| problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-25.4k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 582-39.1k) | num_tokens (int64 271-4.1k) | num_tokens_diff (int64 47-1.02k) |
|---|---|---|---|---|---|---|---|---|
gh_patches_debug_24340 | rasdani/github-patches | git_diff | apache__airflow-17850 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ExasolHook get_pandas_df does not return pandas dataframe but None
When calling the Exasol hook's get_pandas_df function (https://github.com/apache/airflow/blob/main/airflow/providers/exasol/hooks/exasol.py) I noticed that it does not return a pandas dataframe. It returns None. In fact, the function's type hint explicitly states that None is returned, but the name of the function suggests otherwise: get_pandas_df implies that it should return a dataframe, not None.
I think it would make more sense if get_pandas_df did indeed return a dataframe, as the name alludes to. So the code should be like this:
```python
def get_pandas_df(self, sql: Union[str, list], parameters: Optional[dict] = None, **kwargs) -> pd.DataFrame:
    ... some code ...
    with closing(self.get_conn()) as conn:
        df = conn.export_to_pandas(sql, query_params=parameters, **kwargs)
        return df
```
INSTEAD OF:
```python
def get_pandas_df(self, sql: Union[str, list], parameters: Optional[dict] = None, **kwargs) -> None:
    ... some code ...
    with closing(self.get_conn()) as conn:
        conn.export_to_pandas(sql, query_params=parameters, **kwargs)
```
**Apache Airflow version**: 2.1.0
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): Not using Kubernetes
**Environment**: Official Airflow-Docker Image
- **Cloud provider or hardware configuration**: no cloud - docker host (DELL Server with 48 Cores, 512GB RAM and many TB storage)
- **OS** (e.g. from /etc/os-release): Official Airflow-Docker Image on CentOS 7 Host
- **Kernel** (e.g. `uname -a`): Linux cad18b35be00 3.10.0-1160.21.1.el7.x86_64 #1 SMP Tue Mar 16 18:28:22 UTC 2021 x86_64 GNU/Linux
- **Install tools**: only docker
- **Others**:
**What happened**:
You can replicate the findings with the following DAG file:
```python
import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.providers.exasol.operators.exasol import ExasolHook
import pandas as pd

default_args = {"owner": "airflow"}

def call_exasol_hook(**kwargs):
    # Make connection to Exasol
    hook = ExasolHook(exasol_conn_id='Exasol QA')
    sql = 'select 42;'
    df = hook.get_pandas_df(sql=sql)
    return df

with DAG(
    dag_id="exasol_hook_problem",
    start_date=datetime.datetime(2021, 5, 5),
    schedule_interval="@once",
    default_args=default_args,
    catchup=False,
) as dag:

    set_variable = PythonOperator(
        task_id='call_exasol_hook',
        python_callable=call_exasol_hook
    )
```
Sorry for the strange code formatting. I do not know how to fix this in the github UI form.
Sorry also in case I missed something.
When testing or executing the task via CLI:
` airflow tasks test exasol_hook_problem call_exasol_hook 2021-07-20`
the logs show:
`[2021-07-21 12:53:19,775] {python.py:151} INFO - Done. Returned value was: None`
None was returned - although get_pandas_df was called. A pandas df should have been returned instead.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `airflow/providers/exasol/hooks/exasol.py`
Content:
```
1 #
2 # Licensed to the Apache Software Foundation (ASF) under one
3 # or more contributor license agreements. See the NOTICE file
4 # distributed with this work for additional information
5 # regarding copyright ownership. The ASF licenses this file
6 # to you under the Apache License, Version 2.0 (the
7 # "License"); you may not use this file except in compliance
8 # with the License. You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing,
13 # software distributed under the License is distributed on an
14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 # KIND, either express or implied. See the License for the
16 # specific language governing permissions and limitations
17 # under the License.
18
19 from contextlib import closing
20 from typing import Any, Dict, List, Optional, Tuple, Union
21
22 import pyexasol
23 from pyexasol import ExaConnection
24
25 from airflow.hooks.dbapi import DbApiHook
26
27
28 class ExasolHook(DbApiHook):
29 """
30 Interact with Exasol.
31 You can specify the pyexasol ``compression``, ``encryption``, ``json_lib``
32 and ``client_name`` parameters in the extra field of your connection
33 as ``{"compression": True, "json_lib": "rapidjson", etc}``.
34 See `pyexasol reference
35 <https://github.com/badoo/pyexasol/blob/master/docs/REFERENCE.md#connect>`_
36 for more details.
37 """
38
39 conn_name_attr = 'exasol_conn_id'
40 default_conn_name = 'exasol_default'
41 conn_type = 'exasol'
42 hook_name = 'Exasol'
43 supports_autocommit = True
44
45 def __init__(self, *args, **kwargs) -> None:
46 super().__init__(*args, **kwargs)
47 self.schema = kwargs.pop("schema", None)
48
49 def get_conn(self) -> ExaConnection:
50 conn_id = getattr(self, self.conn_name_attr)
51 conn = self.get_connection(conn_id)
52 conn_args = dict(
53 dsn=f'{conn.host}:{conn.port}',
54 user=conn.login,
55 password=conn.password,
56 schema=self.schema or conn.schema,
57 )
58 # check for parameters in conn.extra
59 for arg_name, arg_val in conn.extra_dejson.items():
60 if arg_name in ['compression', 'encryption', 'json_lib', 'client_name']:
61 conn_args[arg_name] = arg_val
62
63 conn = pyexasol.connect(**conn_args)
64 return conn
65
66 def get_pandas_df(self, sql: Union[str, list], parameters: Optional[dict] = None, **kwargs) -> None:
67 """
68 Executes the sql and returns a pandas dataframe
69
70 :param sql: the sql statement to be executed (str) or a list of
71 sql statements to execute
72 :type sql: str or list
73 :param parameters: The parameters to render the SQL query with.
74 :type parameters: dict or iterable
75 :param kwargs: (optional) passed into pyexasol.ExaConnection.export_to_pandas method
76 :type kwargs: dict
77 """
78 with closing(self.get_conn()) as conn:
79 conn.export_to_pandas(sql, query_params=parameters, **kwargs)
80
81 def get_records(
82 self, sql: Union[str, list], parameters: Optional[dict] = None
83 ) -> List[Union[dict, Tuple[Any, ...]]]:
84 """
85 Executes the sql and returns a set of records.
86
87 :param sql: the sql statement to be executed (str) or a list of
88 sql statements to execute
89 :type sql: str or list
90 :param parameters: The parameters to render the SQL query with.
91 :type parameters: dict or iterable
92 """
93 with closing(self.get_conn()) as conn:
94 with closing(conn.execute(sql, parameters)) as cur:
95 return cur.fetchall()
96
97 def get_first(self, sql: Union[str, list], parameters: Optional[dict] = None) -> Optional[Any]:
98 """
99 Executes the sql and returns the first resulting row.
100
101 :param sql: the sql statement to be executed (str) or a list of
102 sql statements to execute
103 :type sql: str or list
104 :param parameters: The parameters to render the SQL query with.
105 :type parameters: dict or iterable
106 """
107 with closing(self.get_conn()) as conn:
108 with closing(conn.execute(sql, parameters)) as cur:
109 return cur.fetchone()
110
111 def export_to_file(
112 self,
113 filename: str,
114 query_or_table: str,
115 query_params: Optional[Dict] = None,
116 export_params: Optional[Dict] = None,
117 ) -> None:
118 """
119 Exports data to a file.
120
121 :param filename: Path to the file to which the data has to be exported
122 :type filename: str
123 :param query_or_table: the sql statement to be executed or table name to export
124 :type query_or_table: str
125 :param query_params: Query parameters passed to underlying ``export_to_file``
126 method of :class:`~pyexasol.connection.ExaConnection`.
127 :type query_params: dict
128 :param export_params: Extra parameters passed to underlying ``export_to_file``
129 method of :class:`~pyexasol.connection.ExaConnection`.
130 :type export_params: dict
131 """
132 self.log.info("Getting data from exasol")
133 with closing(self.get_conn()) as conn:
134 conn.export_to_file(
135 dst=filename,
136 query_or_table=query_or_table,
137 query_params=query_params,
138 export_params=export_params,
139 )
140 self.log.info("Data saved to %s", filename)
141
142 def run(self, sql: Union[str, list], autocommit: bool = False, parameters: Optional[dict] = None) -> None:
143 """
144 Runs a command or a list of commands. Pass a list of sql
145 statements to the sql parameter to get them to execute
146 sequentially
147
148 :param sql: the sql statement to be executed (str) or a list of
149 sql statements to execute
150 :type sql: str or list
151 :param autocommit: What to set the connection's autocommit setting to
152 before executing the query.
153 :type autocommit: bool
154 :param parameters: The parameters to render the SQL query with.
155 :type parameters: dict or iterable
156 """
157 if isinstance(sql, str):
158 sql = [sql]
159
160 with closing(self.get_conn()) as conn:
161 if self.supports_autocommit:
162 self.set_autocommit(conn, autocommit)
163
164 for query in sql:
165 self.log.info(query)
166 with closing(conn.execute(query, parameters)) as cur:
167 self.log.info(cur.row_count)
168 # If autocommit was set to False for db that supports autocommit,
169 # or if db does not supports autocommit, we do a manual commit.
170 if not self.get_autocommit(conn):
171 conn.commit()
172
173 def set_autocommit(self, conn, autocommit: bool) -> None:
174 """
175 Sets the autocommit flag on the connection
176
177 :param conn: Connection to set autocommit setting to.
178 :type conn: connection object
179 :param autocommit: The autocommit setting to set.
180 :type autocommit: bool
181 """
182 if not self.supports_autocommit and autocommit:
183 self.log.warning(
184 "%s connection doesn't support autocommit but autocommit activated.",
185 getattr(self, self.conn_name_attr),
186 )
187 conn.set_autocommit(autocommit)
188
189 def get_autocommit(self, conn) -> bool:
190 """
191 Get autocommit setting for the provided connection.
192 Return True if autocommit is set.
193 Return False if autocommit is not set or set to False or conn
194 does not support autocommit.
195
196 :param conn: Connection to get autocommit setting from.
197 :type conn: connection object
198 :return: connection autocommit setting.
199 :rtype: bool
200 """
201 autocommit = conn.attr.get('autocommit')
202 if autocommit is None:
203 autocommit = super().get_autocommit(conn)
204 return autocommit
205
206 @staticmethod
207 def _serialize_cell(cell, conn=None) -> object:
208 """
209 Exasol will adapt all arguments to the execute() method internally,
210 hence we return cell without any conversion.
211
212 :param cell: The cell to insert into the table
213 :type cell: object
214 :param conn: The database connection
215 :type conn: connection object
216 :return: The cell
217 :rtype: object
218 """
219 return cell
220
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/airflow/providers/exasol/hooks/exasol.py b/airflow/providers/exasol/hooks/exasol.py
--- a/airflow/providers/exasol/hooks/exasol.py
+++ b/airflow/providers/exasol/hooks/exasol.py
@@ -19,6 +19,7 @@
from contextlib import closing
from typing import Any, Dict, List, Optional, Tuple, Union
+import pandas as pd
import pyexasol
from pyexasol import ExaConnection
@@ -63,7 +64,9 @@
conn = pyexasol.connect(**conn_args)
return conn
- def get_pandas_df(self, sql: Union[str, list], parameters: Optional[dict] = None, **kwargs) -> None:
+ def get_pandas_df(
+ self, sql: Union[str, list], parameters: Optional[dict] = None, **kwargs
+ ) -> pd.DataFrame:
"""
Executes the sql and returns a pandas dataframe
@@ -76,7 +79,8 @@
:type kwargs: dict
"""
with closing(self.get_conn()) as conn:
- conn.export_to_pandas(sql, query_params=parameters, **kwargs)
+ df = conn.export_to_pandas(sql, query_params=parameters, **kwargs)
+ return df
def get_records(
self, sql: Union[str, list], parameters: Optional[dict] = None
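For reference, a minimal sketch of how the patched hook behaves once this diff is applied; the connection id and query below are illustrative, not taken from the issue:

```python
from airflow.providers.exasol.hooks.exasol import ExasolHook

hook = ExasolHook(exasol_conn_id="exasol_default")  # assumes a configured Exasol connection
df = hook.get_pandas_df(sql="SELECT 42")  # with the patch this returns a pandas.DataFrame, not None
print(type(df), df.shape)
```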
| {"golden_diff": "diff --git a/airflow/providers/exasol/hooks/exasol.py b/airflow/providers/exasol/hooks/exasol.py\n--- a/airflow/providers/exasol/hooks/exasol.py\n+++ b/airflow/providers/exasol/hooks/exasol.py\n@@ -19,6 +19,7 @@\n from contextlib import closing\n from typing import Any, Dict, List, Optional, Tuple, Union\n \n+import pandas as pd\n import pyexasol\n from pyexasol import ExaConnection\n \n@@ -63,7 +64,9 @@\n conn = pyexasol.connect(**conn_args)\n return conn\n \n- def get_pandas_df(self, sql: Union[str, list], parameters: Optional[dict] = None, **kwargs) -> None:\n+ def get_pandas_df(\n+ self, sql: Union[str, list], parameters: Optional[dict] = None, **kwargs\n+ ) -> pd.DataFrame:\n \"\"\"\n Executes the sql and returns a pandas dataframe\n \n@@ -76,7 +79,8 @@\n :type kwargs: dict\n \"\"\"\n with closing(self.get_conn()) as conn:\n- conn.export_to_pandas(sql, query_params=parameters, **kwargs)\n+ df = conn.export_to_pandas(sql, query_params=parameters, **kwargs)\n+ return df\n \n def get_records(\n self, sql: Union[str, list], parameters: Optional[dict] = None\n", "issue": "ExasolHook get_pandas_df does not return pandas dataframe but None\n\r\nWhen calling the exasol hooks get_pandas_df function (https://github.com/apache/airflow/blob/main/airflow/providers/exasol/hooks/exasol.py) I noticed that it does not return a pandas dataframe. It returns None. In fact the function definition type hint explicitly states that None is returned. But the name of the function suggests otherwise. The name get_pandas_df implies that it should return a dataframe and not None.\r\n\r\nI think that it would make more sense if get_pandas_df would indeed return a dataframe as the name is alluring to. So the code should be like this:\r\n\r\n`def get_pandas_df(self, sql: Union[str, list], parameters: Optional[dict] = None, **kwargs) -> pd.DataFrame:\r\n... some code ...\r\nwith closing(self.get_conn()) as conn:\r\ndf=conn.export_to_pandas(sql, query_params=parameters, **kwargs)\r\nreturn df`\r\n\r\nINSTEAD OF:\r\n\r\n`def get_pandas_df(self, sql: Union[str, list], parameters: Optional[dict] = None, **kwargs) -> None:\r\n... some code ...\r\nwith closing(self.get_conn()) as conn:\r\nconn.export_to_pandas(sql, query_params=parameters, **kwargs)`\r\n\r\n**Apache Airflow version**: 2.1.0\r\n\r\n\r\n**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): Not using Kubernetes\r\n\r\n**Environment**:Official Airflow-Docker Image\r\n\r\n- **Cloud provider or hardware configuration**: no cloud - docker host (DELL Server with 48 Cores, 512GB RAM and many TB storage)\r\n- **OS** (e.g. from /etc/os-release):Official Airflow-Docker Image on CentOS 7 Host\r\n- **Kernel** (e.g. 
`uname -a`): Linux cad18b35be00 3.10.0-1160.21.1.el7.x86_64 #1 SMP Tue Mar 16 18:28:22 UTC 2021 x86_64 GNU/Linux\r\n- **Install tools**: only docker\r\n- **Others**:\r\n\r\n**What happened**:\r\nYou can replicate the findings with following dag file:\r\n\r\nimport datetime\r\n\r\nfrom airflow import DAG\r\nfrom airflow.operators.python_operator import PythonOperator\r\nfrom airflow.providers.exasol.operators.exasol import ExasolHook\r\nimport pandas as pd\r\n\r\n\r\ndefault_args = {\"owner\": \"airflow\"}\r\n\r\n\r\ndef call_exasol_hook(**kwargs):\r\n #Make connection to Exasol\r\n hook = ExasolHook(exasol_conn_id='Exasol QA')\r\n sql = 'select 42;' \r\n df = hook.get_pandas_df(sql = sql) \r\n return df\r\n \r\nwith DAG(\r\n dag_id=\"exasol_hook_problem\",\r\n start_date=datetime.datetime(2021, 5, 5),\r\n schedule_interval=\"@once\",\r\n default_args=default_args,\r\n catchup=False,\r\n) as dag:\r\n \r\n set_variable = PythonOperator(\r\n task_id='call_exasol_hook',\r\n python_callable=call_exasol_hook\r\n )\r\n\r\nSorry for the strange code formatting. I do not know how to fix this in the github UI form. \r\nSorry also in case I missed something.\r\n \r\nWhen testing or executing the task via CLI:\r\n` airflow tasks test exasol_hook_problem call_exasol_hook 2021-07-20`\r\n\r\nthe logs show:\r\n`[2021-07-21 12:53:19,775] {python.py:151} INFO - Done. Returned value was: None`\r\n\r\nNone was returned - although get_pandas_df was called. A pandas df should have been returned instead.\r\n\r\n\n", "before_files": [{"content": "#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nfrom contextlib import closing\nfrom typing import Any, Dict, List, Optional, Tuple, Union\n\nimport pyexasol\nfrom pyexasol import ExaConnection\n\nfrom airflow.hooks.dbapi import DbApiHook\n\n\nclass ExasolHook(DbApiHook):\n \"\"\"\n Interact with Exasol.\n You can specify the pyexasol ``compression``, ``encryption``, ``json_lib``\n and ``client_name`` parameters in the extra field of your connection\n as ``{\"compression\": True, \"json_lib\": \"rapidjson\", etc}``.\n See `pyexasol reference\n <https://github.com/badoo/pyexasol/blob/master/docs/REFERENCE.md#connect>`_\n for more details.\n \"\"\"\n\n conn_name_attr = 'exasol_conn_id'\n default_conn_name = 'exasol_default'\n conn_type = 'exasol'\n hook_name = 'Exasol'\n supports_autocommit = True\n\n def __init__(self, *args, **kwargs) -> None:\n super().__init__(*args, **kwargs)\n self.schema = kwargs.pop(\"schema\", None)\n\n def get_conn(self) -> ExaConnection:\n conn_id = getattr(self, self.conn_name_attr)\n conn = self.get_connection(conn_id)\n conn_args = dict(\n dsn=f'{conn.host}:{conn.port}',\n user=conn.login,\n password=conn.password,\n schema=self.schema or conn.schema,\n )\n # check for parameters in conn.extra\n for arg_name, arg_val in conn.extra_dejson.items():\n if arg_name in ['compression', 'encryption', 'json_lib', 'client_name']:\n conn_args[arg_name] = arg_val\n\n conn = pyexasol.connect(**conn_args)\n return conn\n\n def get_pandas_df(self, sql: Union[str, list], parameters: Optional[dict] = None, **kwargs) -> None:\n \"\"\"\n Executes the sql and returns a pandas dataframe\n\n :param sql: the sql statement to be executed (str) or a list of\n sql statements to execute\n :type sql: str or list\n :param parameters: The parameters to render the SQL query with.\n :type parameters: dict or iterable\n :param kwargs: (optional) passed into pyexasol.ExaConnection.export_to_pandas method\n :type kwargs: dict\n \"\"\"\n with closing(self.get_conn()) as conn:\n conn.export_to_pandas(sql, query_params=parameters, **kwargs)\n\n def get_records(\n self, sql: Union[str, list], parameters: Optional[dict] = None\n ) -> List[Union[dict, Tuple[Any, ...]]]:\n \"\"\"\n Executes the sql and returns a set of records.\n\n :param sql: the sql statement to be executed (str) or a list of\n sql statements to execute\n :type sql: str or list\n :param parameters: The parameters to render the SQL query with.\n :type parameters: dict or iterable\n \"\"\"\n with closing(self.get_conn()) as conn:\n with closing(conn.execute(sql, parameters)) as cur:\n return cur.fetchall()\n\n def get_first(self, sql: Union[str, list], parameters: Optional[dict] = None) -> Optional[Any]:\n \"\"\"\n Executes the sql and returns the first resulting row.\n\n :param sql: the sql statement to be executed (str) or a list of\n sql statements to execute\n :type sql: str or list\n :param parameters: The parameters to render the SQL query with.\n :type parameters: dict or iterable\n \"\"\"\n with closing(self.get_conn()) as conn:\n with closing(conn.execute(sql, parameters)) as cur:\n return cur.fetchone()\n\n def export_to_file(\n self,\n filename: str,\n query_or_table: str,\n query_params: Optional[Dict] = None,\n export_params: Optional[Dict] = None,\n ) -> None:\n \"\"\"\n Exports data to a file.\n\n :param filename: Path to the file to which the data has to be exported\n :type filename: str\n :param query_or_table: the sql statement to be executed or table name to 
export\n :type query_or_table: str\n :param query_params: Query parameters passed to underlying ``export_to_file``\n method of :class:`~pyexasol.connection.ExaConnection`.\n :type query_params: dict\n :param export_params: Extra parameters passed to underlying ``export_to_file``\n method of :class:`~pyexasol.connection.ExaConnection`.\n :type export_params: dict\n \"\"\"\n self.log.info(\"Getting data from exasol\")\n with closing(self.get_conn()) as conn:\n conn.export_to_file(\n dst=filename,\n query_or_table=query_or_table,\n query_params=query_params,\n export_params=export_params,\n )\n self.log.info(\"Data saved to %s\", filename)\n\n def run(self, sql: Union[str, list], autocommit: bool = False, parameters: Optional[dict] = None) -> None:\n \"\"\"\n Runs a command or a list of commands. Pass a list of sql\n statements to the sql parameter to get them to execute\n sequentially\n\n :param sql: the sql statement to be executed (str) or a list of\n sql statements to execute\n :type sql: str or list\n :param autocommit: What to set the connection's autocommit setting to\n before executing the query.\n :type autocommit: bool\n :param parameters: The parameters to render the SQL query with.\n :type parameters: dict or iterable\n \"\"\"\n if isinstance(sql, str):\n sql = [sql]\n\n with closing(self.get_conn()) as conn:\n if self.supports_autocommit:\n self.set_autocommit(conn, autocommit)\n\n for query in sql:\n self.log.info(query)\n with closing(conn.execute(query, parameters)) as cur:\n self.log.info(cur.row_count)\n # If autocommit was set to False for db that supports autocommit,\n # or if db does not supports autocommit, we do a manual commit.\n if not self.get_autocommit(conn):\n conn.commit()\n\n def set_autocommit(self, conn, autocommit: bool) -> None:\n \"\"\"\n Sets the autocommit flag on the connection\n\n :param conn: Connection to set autocommit setting to.\n :type conn: connection object\n :param autocommit: The autocommit setting to set.\n :type autocommit: bool\n \"\"\"\n if not self.supports_autocommit and autocommit:\n self.log.warning(\n \"%s connection doesn't support autocommit but autocommit activated.\",\n getattr(self, self.conn_name_attr),\n )\n conn.set_autocommit(autocommit)\n\n def get_autocommit(self, conn) -> bool:\n \"\"\"\n Get autocommit setting for the provided connection.\n Return True if autocommit is set.\n Return False if autocommit is not set or set to False or conn\n does not support autocommit.\n\n :param conn: Connection to get autocommit setting from.\n :type conn: connection object\n :return: connection autocommit setting.\n :rtype: bool\n \"\"\"\n autocommit = conn.attr.get('autocommit')\n if autocommit is None:\n autocommit = super().get_autocommit(conn)\n return autocommit\n\n @staticmethod\n def _serialize_cell(cell, conn=None) -> object:\n \"\"\"\n Exasol will adapt all arguments to the execute() method internally,\n hence we return cell without any conversion.\n\n :param cell: The cell to insert into the table\n :type cell: object\n :param conn: The database connection\n :type conn: connection object\n :return: The cell\n :rtype: object\n \"\"\"\n return cell\n", "path": "airflow/providers/exasol/hooks/exasol.py"}], "after_files": [{"content": "#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. 
The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nfrom contextlib import closing\nfrom typing import Any, Dict, List, Optional, Tuple, Union\n\nimport pandas as pd\nimport pyexasol\nfrom pyexasol import ExaConnection\n\nfrom airflow.hooks.dbapi import DbApiHook\n\n\nclass ExasolHook(DbApiHook):\n \"\"\"\n Interact with Exasol.\n You can specify the pyexasol ``compression``, ``encryption``, ``json_lib``\n and ``client_name`` parameters in the extra field of your connection\n as ``{\"compression\": True, \"json_lib\": \"rapidjson\", etc}``.\n See `pyexasol reference\n <https://github.com/badoo/pyexasol/blob/master/docs/REFERENCE.md#connect>`_\n for more details.\n \"\"\"\n\n conn_name_attr = 'exasol_conn_id'\n default_conn_name = 'exasol_default'\n conn_type = 'exasol'\n hook_name = 'Exasol'\n supports_autocommit = True\n\n def __init__(self, *args, **kwargs) -> None:\n super().__init__(*args, **kwargs)\n self.schema = kwargs.pop(\"schema\", None)\n\n def get_conn(self) -> ExaConnection:\n conn_id = getattr(self, self.conn_name_attr)\n conn = self.get_connection(conn_id)\n conn_args = dict(\n dsn=f'{conn.host}:{conn.port}',\n user=conn.login,\n password=conn.password,\n schema=self.schema or conn.schema,\n )\n # check for parameters in conn.extra\n for arg_name, arg_val in conn.extra_dejson.items():\n if arg_name in ['compression', 'encryption', 'json_lib', 'client_name']:\n conn_args[arg_name] = arg_val\n\n conn = pyexasol.connect(**conn_args)\n return conn\n\n def get_pandas_df(\n self, sql: Union[str, list], parameters: Optional[dict] = None, **kwargs\n ) -> pd.DataFrame:\n \"\"\"\n Executes the sql and returns a pandas dataframe\n\n :param sql: the sql statement to be executed (str) or a list of\n sql statements to execute\n :type sql: str or list\n :param parameters: The parameters to render the SQL query with.\n :type parameters: dict or iterable\n :param kwargs: (optional) passed into pyexasol.ExaConnection.export_to_pandas method\n :type kwargs: dict\n \"\"\"\n with closing(self.get_conn()) as conn:\n df = conn.export_to_pandas(sql, query_params=parameters, **kwargs)\n return df\n\n def get_records(\n self, sql: Union[str, list], parameters: Optional[dict] = None\n ) -> List[Union[dict, Tuple[Any, ...]]]:\n \"\"\"\n Executes the sql and returns a set of records.\n\n :param sql: the sql statement to be executed (str) or a list of\n sql statements to execute\n :type sql: str or list\n :param parameters: The parameters to render the SQL query with.\n :type parameters: dict or iterable\n \"\"\"\n with closing(self.get_conn()) as conn:\n with closing(conn.execute(sql, parameters)) as cur:\n return cur.fetchall()\n\n def get_first(self, sql: Union[str, list], parameters: Optional[dict] = None) -> Optional[Any]:\n \"\"\"\n Executes the sql and returns the first resulting row.\n\n :param sql: the sql statement to be executed (str) or a list of\n sql statements to execute\n :type sql: str or list\n :param parameters: The parameters to render the SQL query with.\n 
:type parameters: dict or iterable\n \"\"\"\n with closing(self.get_conn()) as conn:\n with closing(conn.execute(sql, parameters)) as cur:\n return cur.fetchone()\n\n def export_to_file(\n self,\n filename: str,\n query_or_table: str,\n query_params: Optional[Dict] = None,\n export_params: Optional[Dict] = None,\n ) -> None:\n \"\"\"\n Exports data to a file.\n\n :param filename: Path to the file to which the data has to be exported\n :type filename: str\n :param query_or_table: the sql statement to be executed or table name to export\n :type query_or_table: str\n :param query_params: Query parameters passed to underlying ``export_to_file``\n method of :class:`~pyexasol.connection.ExaConnection`.\n :type query_params: dict\n :param export_params: Extra parameters passed to underlying ``export_to_file``\n method of :class:`~pyexasol.connection.ExaConnection`.\n :type export_params: dict\n \"\"\"\n self.log.info(\"Getting data from exasol\")\n with closing(self.get_conn()) as conn:\n conn.export_to_file(\n dst=filename,\n query_or_table=query_or_table,\n query_params=query_params,\n export_params=export_params,\n )\n self.log.info(\"Data saved to %s\", filename)\n\n def run(self, sql: Union[str, list], autocommit: bool = False, parameters: Optional[dict] = None) -> None:\n \"\"\"\n Runs a command or a list of commands. Pass a list of sql\n statements to the sql parameter to get them to execute\n sequentially\n\n :param sql: the sql statement to be executed (str) or a list of\n sql statements to execute\n :type sql: str or list\n :param autocommit: What to set the connection's autocommit setting to\n before executing the query.\n :type autocommit: bool\n :param parameters: The parameters to render the SQL query with.\n :type parameters: dict or iterable\n \"\"\"\n if isinstance(sql, str):\n sql = [sql]\n\n with closing(self.get_conn()) as conn:\n if self.supports_autocommit:\n self.set_autocommit(conn, autocommit)\n\n for query in sql:\n self.log.info(query)\n with closing(conn.execute(query, parameters)) as cur:\n self.log.info(cur.row_count)\n # If autocommit was set to False for db that supports autocommit,\n # or if db does not supports autocommit, we do a manual commit.\n if not self.get_autocommit(conn):\n conn.commit()\n\n def set_autocommit(self, conn, autocommit: bool) -> None:\n \"\"\"\n Sets the autocommit flag on the connection\n\n :param conn: Connection to set autocommit setting to.\n :type conn: connection object\n :param autocommit: The autocommit setting to set.\n :type autocommit: bool\n \"\"\"\n if not self.supports_autocommit and autocommit:\n self.log.warning(\n \"%s connection doesn't support autocommit but autocommit activated.\",\n getattr(self, self.conn_name_attr),\n )\n conn.set_autocommit(autocommit)\n\n def get_autocommit(self, conn) -> bool:\n \"\"\"\n Get autocommit setting for the provided connection.\n Return True if autocommit is set.\n Return False if autocommit is not set or set to False or conn\n does not support autocommit.\n\n :param conn: Connection to get autocommit setting from.\n :type conn: connection object\n :return: connection autocommit setting.\n :rtype: bool\n \"\"\"\n autocommit = conn.attr.get('autocommit')\n if autocommit is None:\n autocommit = super().get_autocommit(conn)\n return autocommit\n\n @staticmethod\n def _serialize_cell(cell, conn=None) -> object:\n \"\"\"\n Exasol will adapt all arguments to the execute() method internally,\n hence we return cell without any conversion.\n\n :param cell: The cell to insert into the table\n 
:type cell: object\n :param conn: The database connection\n :type conn: connection object\n :return: The cell\n :rtype: object\n \"\"\"\n return cell\n", "path": "airflow/providers/exasol/hooks/exasol.py"}]} | 3,552 | 317 |
gh_patches_debug_11327 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-9504 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for Timestream backup
### Describe the feature
This will use the AWS Backup service to take Timestream backups.
### Extra information or context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `c7n/resources/timestream.py`
Content:
```
1 from c7n.manager import resources
2 from c7n.actions import Action
3 from c7n.filters.kms import KmsRelatedFilter
4 from c7n.query import DescribeSource, QueryResourceManager, TypeInfo
5 from c7n.utils import local_session, type_schema
6 from c7n.tags import (
7 TagDelayedAction,
8 TagActionFilter,
9 Tag as TagAction,
10 RemoveTag as RemoveTagAction
11 )
12
13
14 class DescribeTimestream(DescribeSource):
15 def augment(self, resources):
16 for r in resources:
17 client = local_session(self.manager.session_factory).client('timestream-write')
18 r['Tags'] = client.list_tags_for_resource(ResourceARN=r['Arn'])['Tags']
19 return resources
20
21
22 @resources.register('timestream-database')
23 class TimestreamDatabase(QueryResourceManager):
24 class resource_type(TypeInfo):
25 service = 'timestream-write'
26 arn_type = ''
27 name = 'DatabaseName'
28 id = arn = 'Arn'
29 enum_spec = ('list_databases', 'Databases', {})
30 permission_prefix = 'timestream'
31 permissions = ('timestream:ListDatabases', )
32 permissions_augment = ("timestream:ListTagsForResource",)
33 source_mapping = {
34 'describe': DescribeTimestream,
35 }
36
37
38 @resources.register('timestream-table')
39 class TimestreamTable(QueryResourceManager):
40 class resource_type(TypeInfo):
41 service = 'timestream-write'
42 arn_type = ''
43 name = 'TableName'
44 id = arn = 'Arn'
45 enum_spec = ('list_tables', 'Tables', {})
46 permission_prefix = 'timestream'
47 permissions = ('timestream:ListTables', )
48
49 source_mapping = {
50 'describe': DescribeTimestream,
51 }
52
53
54 @TimestreamDatabase.action_registry.register('tag')
55 @TimestreamTable.action_registry.register('tag')
56 class TimestreamTag(TagAction):
57
58 permissions = ('timestream:TagResource', )
59
60 def process_resource_set(self, client, resource_set, tags):
61 for r in resource_set:
62 client.tag_resource(ResourceARN=r['Arn'], Tags=tags)
63
64
65 @TimestreamDatabase.action_registry.register('remove-tag')
66 @TimestreamTable.action_registry.register('remove-tag')
67 class TimestreamRemoveTag(RemoveTagAction):
68
69 permissions = ('timestream:UntagResource', )
70
71 def process_resource_set(self, client, resource_set, tag_keys):
72 for r in resource_set:
73 client.untag_resource(ResourceARN=r['Arn'], TagKeys=tag_keys)
74
75
76 TimestreamDatabase.action_registry.register('mark-for-op', TagDelayedAction)
77 TimestreamTable.action_registry.register('mark-for-op', TagDelayedAction)
78
79 TimestreamDatabase.filter_registry.register('marked-for-op', TagActionFilter)
80 TimestreamTable.filter_registry.register('marked-for-op', TagActionFilter)
81
82
83 @TimestreamTable.action_registry.register('delete')
84 class TimestreamTableDelete(Action):
85 """
86 Deletes a timestream table
87 """
88
89 schema = type_schema('delete')
90 permissions = ('timestream:DeleteTable', )
91
92 def process(self, resources):
93 client = local_session(self.manager.session_factory).client('timestream-write')
94 for r in resources:
95 try:
96 client.delete_table(
97 DatabaseName=r['DatabaseName'],
98 TableName=r['TableName']
99 )
100 except client.exceptions.ResourceNotFoundException:
101 continue
102
103
104 @TimestreamDatabase.action_registry.register('delete')
105 class TimestreamDatabaseDelete(Action):
106 """
107 Deletes a timestream database
108 """
109
110 schema = type_schema('delete', force={'type': 'boolean', 'default': False})
111 permissions = (
112 'timestream:DeleteDatabase',
113 'timestream:ListTables', 'timestream:DeleteTable', )
114
115 def process(self, resources):
116 client = local_session(self.manager.session_factory).client('timestream-write')
117 for r in resources:
118 try:
119 client.delete_database(
120 DatabaseName=r['DatabaseName'],
121 )
122 except client.exceptions.ResourceNotFoundException:
123 continue
124 except client.exceptions.ValidationException:
125 if not self.data.get('force', False):
126 self.log.error(
127 f'Unable to delete database:{r["DatabaseName"]}, '
128 'tables must be deleted first')
129 continue
130 tables = client.list_tables(DatabaseName=r['DatabaseName'])['Tables']
131 TimestreamTableDelete(
132 data={'type': 'delete'},
133 manager=self.manager,
134 log_dir=self.log_dir
135 ).process(tables)
136 client.delete_database(
137 DatabaseName=r['DatabaseName'],
138 )
139
140
141 @TimestreamDatabase.filter_registry.register('kms-key')
142 class KmsFilter(KmsRelatedFilter):
143 RelatedIdsExpression = 'KmsKeyId'
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/c7n/resources/timestream.py b/c7n/resources/timestream.py
--- a/c7n/resources/timestream.py
+++ b/c7n/resources/timestream.py
@@ -9,6 +9,7 @@
Tag as TagAction,
RemoveTag as RemoveTagAction
)
+from c7n.filters.backup import ConsecutiveAwsBackupsFilter
class DescribeTimestream(DescribeSource):
@@ -138,6 +139,9 @@
)
+TimestreamTable.filter_registry.register('consecutive-aws-backups', ConsecutiveAwsBackupsFilter)
+
+
@TimestreamDatabase.filter_registry.register('kms-key')
class KmsFilter(KmsRelatedFilter):
RelatedIdsExpression = 'KmsKeyId'
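For context, a minimal sketch of a policy that could exercise the newly registered filter, written as the dict form accepted by Custodian's policy loader. The `count`/`period`/`status` parameters are an assumption based on how the shared `consecutive-aws-backups` filter is typically configured for other resource types:

```python
# Hypothetical policy sketch -- parameter names assumed, not confirmed by this diff.
policy = {
    "name": "timestream-table-aws-backups",
    "resource": "timestream-table",
    "filters": [
        {
            "type": "consecutive-aws-backups",  # filter registered by this patch
            "count": 7,            # assumed: number of consecutive periods with backups
            "period": "days",      # assumed
            "status": "COMPLETED", # assumed
        }
    ],
}
```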
| {"golden_diff": "diff --git a/c7n/resources/timestream.py b/c7n/resources/timestream.py\n--- a/c7n/resources/timestream.py\n+++ b/c7n/resources/timestream.py\n@@ -9,6 +9,7 @@\n Tag as TagAction,\n RemoveTag as RemoveTagAction\n )\n+from c7n.filters.backup import ConsecutiveAwsBackupsFilter\n \n \n class DescribeTimestream(DescribeSource):\n@@ -138,6 +139,9 @@\n )\n \n \n+TimestreamTable.filter_registry.register('consecutive-aws-backups', ConsecutiveAwsBackupsFilter)\n+\n+\n @TimestreamDatabase.filter_registry.register('kms-key')\n class KmsFilter(KmsRelatedFilter):\n RelatedIdsExpression = 'KmsKeyId'\n", "issue": "Add support for backup in timestream backup\n### Describe the feature\n\nThis will use AWS backup service to take time-stream backup.\n\n### Extra information or context\n\n_No response_\n", "before_files": [{"content": "from c7n.manager import resources\nfrom c7n.actions import Action\nfrom c7n.filters.kms import KmsRelatedFilter\nfrom c7n.query import DescribeSource, QueryResourceManager, TypeInfo\nfrom c7n.utils import local_session, type_schema\nfrom c7n.tags import (\n TagDelayedAction,\n TagActionFilter,\n Tag as TagAction,\n RemoveTag as RemoveTagAction\n)\n\n\nclass DescribeTimestream(DescribeSource):\n def augment(self, resources):\n for r in resources:\n client = local_session(self.manager.session_factory).client('timestream-write')\n r['Tags'] = client.list_tags_for_resource(ResourceARN=r['Arn'])['Tags']\n return resources\n\n\[email protected]('timestream-database')\nclass TimestreamDatabase(QueryResourceManager):\n class resource_type(TypeInfo):\n service = 'timestream-write'\n arn_type = ''\n name = 'DatabaseName'\n id = arn = 'Arn'\n enum_spec = ('list_databases', 'Databases', {})\n permission_prefix = 'timestream'\n permissions = ('timestream:ListDatabases', )\n permissions_augment = (\"timestream:ListTagsForResource\",)\n source_mapping = {\n 'describe': DescribeTimestream,\n }\n\n\[email protected]('timestream-table')\nclass TimestreamTable(QueryResourceManager):\n class resource_type(TypeInfo):\n service = 'timestream-write'\n arn_type = ''\n name = 'TableName'\n id = arn = 'Arn'\n enum_spec = ('list_tables', 'Tables', {})\n permission_prefix = 'timestream'\n permissions = ('timestream:ListTables', )\n\n source_mapping = {\n 'describe': DescribeTimestream,\n }\n\n\[email protected]_registry.register('tag')\[email protected]_registry.register('tag')\nclass TimestreamTag(TagAction):\n\n permissions = ('timestream:TagResource', )\n\n def process_resource_set(self, client, resource_set, tags):\n for r in resource_set:\n client.tag_resource(ResourceARN=r['Arn'], Tags=tags)\n\n\[email protected]_registry.register('remove-tag')\[email protected]_registry.register('remove-tag')\nclass TimestreamRemoveTag(RemoveTagAction):\n\n permissions = ('timestream:UntagResource', )\n\n def process_resource_set(self, client, resource_set, tag_keys):\n for r in resource_set:\n client.untag_resource(ResourceARN=r['Arn'], TagKeys=tag_keys)\n\n\nTimestreamDatabase.action_registry.register('mark-for-op', TagDelayedAction)\nTimestreamTable.action_registry.register('mark-for-op', TagDelayedAction)\n\nTimestreamDatabase.filter_registry.register('marked-for-op', TagActionFilter)\nTimestreamTable.filter_registry.register('marked-for-op', TagActionFilter)\n\n\[email protected]_registry.register('delete')\nclass TimestreamTableDelete(Action):\n \"\"\"\n Deletes a timestream table\n \"\"\"\n\n schema = type_schema('delete')\n permissions = ('timestream:DeleteTable', )\n\n def process(self, 
resources):\n client = local_session(self.manager.session_factory).client('timestream-write')\n for r in resources:\n try:\n client.delete_table(\n DatabaseName=r['DatabaseName'],\n TableName=r['TableName']\n )\n except client.exceptions.ResourceNotFoundException:\n continue\n\n\[email protected]_registry.register('delete')\nclass TimestreamDatabaseDelete(Action):\n \"\"\"\n Deletes a timestream database\n \"\"\"\n\n schema = type_schema('delete', force={'type': 'boolean', 'default': False})\n permissions = (\n 'timestream:DeleteDatabase',\n 'timestream:ListTables', 'timestream:DeleteTable', )\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('timestream-write')\n for r in resources:\n try:\n client.delete_database(\n DatabaseName=r['DatabaseName'],\n )\n except client.exceptions.ResourceNotFoundException:\n continue\n except client.exceptions.ValidationException:\n if not self.data.get('force', False):\n self.log.error(\n f'Unable to delete database:{r[\"DatabaseName\"]}, '\n 'tables must be deleted first')\n continue\n tables = client.list_tables(DatabaseName=r['DatabaseName'])['Tables']\n TimestreamTableDelete(\n data={'type': 'delete'},\n manager=self.manager,\n log_dir=self.log_dir\n ).process(tables)\n client.delete_database(\n DatabaseName=r['DatabaseName'],\n )\n\n\[email protected]_registry.register('kms-key')\nclass KmsFilter(KmsRelatedFilter):\n RelatedIdsExpression = 'KmsKeyId'\n", "path": "c7n/resources/timestream.py"}], "after_files": [{"content": "from c7n.manager import resources\nfrom c7n.actions import Action\nfrom c7n.filters.kms import KmsRelatedFilter\nfrom c7n.query import DescribeSource, QueryResourceManager, TypeInfo\nfrom c7n.utils import local_session, type_schema\nfrom c7n.tags import (\n TagDelayedAction,\n TagActionFilter,\n Tag as TagAction,\n RemoveTag as RemoveTagAction\n)\nfrom c7n.filters.backup import ConsecutiveAwsBackupsFilter\n\n\nclass DescribeTimestream(DescribeSource):\n def augment(self, resources):\n for r in resources:\n client = local_session(self.manager.session_factory).client('timestream-write')\n r['Tags'] = client.list_tags_for_resource(ResourceARN=r['Arn'])['Tags']\n return resources\n\n\[email protected]('timestream-database')\nclass TimestreamDatabase(QueryResourceManager):\n class resource_type(TypeInfo):\n service = 'timestream-write'\n arn_type = ''\n name = 'DatabaseName'\n id = arn = 'Arn'\n enum_spec = ('list_databases', 'Databases', {})\n permission_prefix = 'timestream'\n permissions = ('timestream:ListDatabases', )\n permissions_augment = (\"timestream:ListTagsForResource\",)\n source_mapping = {\n 'describe': DescribeTimestream,\n }\n\n\[email protected]('timestream-table')\nclass TimestreamTable(QueryResourceManager):\n class resource_type(TypeInfo):\n service = 'timestream-write'\n arn_type = ''\n name = 'TableName'\n id = arn = 'Arn'\n enum_spec = ('list_tables', 'Tables', {})\n permission_prefix = 'timestream'\n permissions = ('timestream:ListTables', )\n\n source_mapping = {\n 'describe': DescribeTimestream,\n }\n\n\[email protected]_registry.register('tag')\[email protected]_registry.register('tag')\nclass TimestreamTag(TagAction):\n\n permissions = ('timestream:TagResource', )\n\n def process_resource_set(self, client, resource_set, tags):\n for r in resource_set:\n client.tag_resource(ResourceARN=r['Arn'], Tags=tags)\n\n\[email protected]_registry.register('remove-tag')\[email protected]_registry.register('remove-tag')\nclass TimestreamRemoveTag(RemoveTagAction):\n\n 
permissions = ('timestream:UntagResource', )\n\n def process_resource_set(self, client, resource_set, tag_keys):\n for r in resource_set:\n client.untag_resource(ResourceARN=r['Arn'], TagKeys=tag_keys)\n\n\nTimestreamDatabase.action_registry.register('mark-for-op', TagDelayedAction)\nTimestreamTable.action_registry.register('mark-for-op', TagDelayedAction)\n\nTimestreamDatabase.filter_registry.register('marked-for-op', TagActionFilter)\nTimestreamTable.filter_registry.register('marked-for-op', TagActionFilter)\n\n\[email protected]_registry.register('delete')\nclass TimestreamTableDelete(Action):\n \"\"\"\n Deletes a timestream table\n \"\"\"\n\n schema = type_schema('delete')\n permissions = ('timestream:DeleteTable', )\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('timestream-write')\n for r in resources:\n try:\n client.delete_table(\n DatabaseName=r['DatabaseName'],\n TableName=r['TableName']\n )\n except client.exceptions.ResourceNotFoundException:\n continue\n\n\[email protected]_registry.register('delete')\nclass TimestreamDatabaseDelete(Action):\n \"\"\"\n Deletes a timestream database\n \"\"\"\n\n schema = type_schema('delete', force={'type': 'boolean', 'default': False})\n permissions = (\n 'timestream:DeleteDatabase',\n 'timestream:ListTables', 'timestream:DeleteTable', )\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('timestream-write')\n for r in resources:\n try:\n client.delete_database(\n DatabaseName=r['DatabaseName'],\n )\n except client.exceptions.ResourceNotFoundException:\n continue\n except client.exceptions.ValidationException:\n if not self.data.get('force', False):\n self.log.error(\n f'Unable to delete database:{r[\"DatabaseName\"]}, '\n 'tables must be deleted first')\n continue\n tables = client.list_tables(DatabaseName=r['DatabaseName'])['Tables']\n TimestreamTableDelete(\n data={'type': 'delete'},\n manager=self.manager,\n log_dir=self.log_dir\n ).process(tables)\n client.delete_database(\n DatabaseName=r['DatabaseName'],\n )\n\n\nTimestreamTable.filter_registry.register('consecutive-aws-backups', ConsecutiveAwsBackupsFilter)\n\n\[email protected]_registry.register('kms-key')\nclass KmsFilter(KmsRelatedFilter):\n RelatedIdsExpression = 'KmsKeyId'\n", "path": "c7n/resources/timestream.py"}]} | 1,656 | 168 |
gh_patches_debug_8455 | rasdani/github-patches | git_diff | gratipay__gratipay.com-1769 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Integrate SASS compilation into development environment
This started in [IRC](https://botbot.me/freenode/gittip/msg/8682409/).
The goal here is to use the same SASS compiler that we use in production, in development. We currently use the standard Ruby version, but that won't quite work for development since we shouldn't require all of Ruby just to run a single tool. So it seems the best course of action is to move to a Python-based SASS compiler.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gittip/wireup.py`
Content:
```
1 """Wireup
2 """
3 from __future__ import absolute_import, division, print_function, unicode_literals
4 import os
5 import sys
6 import threading
7 import time
8
9 import aspen
10 import balanced
11 import gittip
12 import raven
13 import psycopg2
14 import stripe
15 import gittip.utils.mixpanel
16 from gittip.models.community import Community
17 from gittip.models.participant import Participant
18 from postgres import Postgres
19
20
21 def canonical():
22 gittip.canonical_scheme = os.environ['CANONICAL_SCHEME']
23 gittip.canonical_host = os.environ['CANONICAL_HOST']
24
25
26 # wireup.db() should only ever be called once by the application
27 def db():
28 dburl = os.environ['DATABASE_URL']
29 maxconn = int(os.environ['DATABASE_MAXCONN'])
30 db = Postgres(dburl, maxconn=maxconn)
31
32 # register hstore type
33 with db.get_cursor() as cursor:
34 psycopg2.extras.register_hstore(cursor, globally=True, unicode=True)
35
36 db.register_model(Community)
37 db.register_model(Participant)
38
39 return db
40
41
42 def billing():
43 stripe.api_key= os.environ['STRIPE_SECRET_API_KEY']
44 stripe.publishable_api_key= os.environ['STRIPE_PUBLISHABLE_API_KEY']
45 balanced.configure(os.environ['BALANCED_API_SECRET'])
46
47
48 def username_restrictions(website):
49 gittip.RESTRICTED_USERNAMES = os.listdir(website.www_root)
50
51
52 def request_metrics(website):
53 def add_start_timestamp(request):
54 request.x_start = time.time()
55 def log_request_count_and_response_time(response):
56 print("count#requests=1")
57 response_time = time.time() - response.request.x_start
58 print("measure#response_time={}ms".format(response_time * 1000))
59 website.hooks.inbound_early.insert(0, add_start_timestamp)
60 website.hooks.outbound += [log_request_count_and_response_time]
61
62
63 def sentry(website):
64 if not website.sentry_dsn:
65 aspen.log_dammit("Won't log to Sentry (SENTRY_DSN is empty).")
66 return
67
68 sentry = raven.Client(website.sentry_dsn)
69
70 def tell_sentry(request):
71 cls, response = sys.exc_info()[:2]
72
73
74 # Decide if we care.
75 # ==================
76
77 if cls is aspen.Response:
78
79 if response.code < 500:
80
81 # Only log server errors to Sentry. For responses < 500 we use
82 # stream-/line-based access logging. See discussion on:
83
84 # https://github.com/gittip/www.gittip.com/pull/1560.
85
86 return
87
88
89 # Find a user.
90 # ============
91 # | is disallowed in usernames, so we can use it here to indicate
92 # situations in which we can't get a username.
93
94 request_context = getattr(request, 'context', None)
95 user = {}
96 user_id = 'n/a'
97 if request_context is None:
98 username = '| no context'
99 else:
100 user = request.context.get('user', None)
101 if user is None:
102 username = '| no user'
103 else:
104 is_anon = getattr(user, 'ANON', None)
105 if is_anon is None:
106 username = '| no ANON'
107 elif is_anon:
108 username = '| anonymous'
109 else:
110 participant = getattr(user, 'participant', None)
111 if participant is None:
112 username = '| no participant'
113 else:
114 username = getattr(user.participant, 'username', None)
115 if username is None:
116 username = '| no username'
117 else:
118 user_id = user.participant.id
119 username = username.encode('utf8')
120 user = { 'id': user_id
121 , 'is_admin': user.participant.is_admin
122 , 'is_suspicious': user.participant.is_suspicious
123 , 'claimed_time': user.participant.claimed_time.isoformat()
124 , 'url': 'https://www.gittip.com/{}/'.format(username)
125 }
126
127
128 # Fire off a Sentry call.
129 # =======================
130
131 tags = { 'username': username
132 , 'user_id': user_id
133 }
134 extra = { 'filepath': getattr(request, 'fs', None)
135 , 'request': str(request).splitlines()
136 , 'user': user
137 }
138 result = sentry.captureException(tags=tags, extra=extra)
139
140
141 # Emit a reference string to stdout.
142 # ==================================
143
144 ident = sentry.get_ident(result)
145 aspen.log_dammit('Exception reference: ' + ident)
146
147
148 website.hooks.error_early += [tell_sentry]
149 return tell_sentry
150
151
152 def mixpanel(website):
153 website.mixpanel_token = os.environ['MIXPANEL_TOKEN']
154 gittip.utils.mixpanel.MIXPANEL_TOKEN = os.environ['MIXPANEL_TOKEN']
155
156 def nanswers():
157 from gittip.models import participant
158 participant.NANSWERS_THRESHOLD = int(os.environ['NANSWERS_THRESHOLD'])
159
160 def nmembers(website):
161 from gittip.models import community
162 community.NMEMBERS_THRESHOLD = int(os.environ['NMEMBERS_THRESHOLD'])
163 website.NMEMBERS_THRESHOLD = community.NMEMBERS_THRESHOLD
164
165 def envvars(website):
166
167 missing_keys = []
168 malformed_values = []
169
170 def envvar(key, cast=None):
171 if key not in os.environ:
172 missing_keys.append(key)
173 return ""
174 value = os.environ[key].decode('ASCII')
175 if cast is not None:
176 try:
177 value = cast(value)
178 except:
179 err = str(sys.exc_info()[1])
180 malformed_values.append((key, err))
181 return ""
182 return value
183
184 def is_yesish(val):
185 return val.lower() in ('1', 'true', 'yes')
186
187 website.bitbucket_consumer_key = envvar('BITBUCKET_CONSUMER_KEY')
188 website.bitbucket_consumer_secret = envvar('BITBUCKET_CONSUMER_SECRET')
189 website.bitbucket_callback = envvar('BITBUCKET_CALLBACK')
190
191 website.github_client_id = envvar('GITHUB_CLIENT_ID')
192 website.github_client_secret = envvar('GITHUB_CLIENT_SECRET')
193 website.github_callback = envvar('GITHUB_CALLBACK')
194
195 website.twitter_consumer_key = envvar('TWITTER_CONSUMER_KEY')
196 website.twitter_consumer_secret = envvar('TWITTER_CONSUMER_SECRET')
197 website.twitter_access_token = envvar('TWITTER_ACCESS_TOKEN')
198 website.twitter_access_token_secret = envvar('TWITTER_ACCESS_TOKEN_SECRET')
199 website.twitter_callback = envvar('TWITTER_CALLBACK')
200
201 website.bountysource_www_host = envvar('BOUNTYSOURCE_WWW_HOST')
202 website.bountysource_api_host = envvar('BOUNTYSOURCE_API_HOST')
203 website.bountysource_api_secret = envvar('BOUNTYSOURCE_API_SECRET')
204 website.bountysource_callback = envvar('BOUNTYSOURCE_CALLBACK')
205
206 website.css_href = envvar('GITTIP_CSS_HREF') \
207 .replace('%version', website.version)
208 website.js_src = envvar('GITTIP_JS_SRC') \
209 .replace('%version', website.version)
210 website.cache_static = is_yesish(envvar('GITTIP_CACHE_STATIC'))
211
212 website.google_analytics_id = envvar('GOOGLE_ANALYTICS_ID')
213 website.gauges_id = envvar('GAUGES_ID')
214 website.sentry_dsn = envvar('SENTRY_DSN')
215
216 website.min_threads = envvar('MIN_THREADS', int)
217 website.log_busy_threads_every = envvar('LOG_BUSY_THREADS_EVERY', int)
218 website.log_metrics = is_yesish(envvar('LOG_METRICS'))
219
220 if malformed_values:
221 malformed_values.sort()
222 these = len(malformed_values) != 1 and 'these' or 'this'
223 plural = len(malformed_values) != 1 and 's' or ''
224 aspen.log_dammit("=" * 42)
225 aspen.log_dammit( "Oh no! Gittip.com couldn't understand %s " % these
226 , "environment variable%s:" % plural
227 )
228 aspen.log_dammit(" ")
229 for key, err in malformed_values:
230 aspen.log_dammit(" {} ({})".format(key, err))
231 aspen.log_dammit(" ")
232 aspen.log_dammit("See ./default_local.env for hints.")
233
234 aspen.log_dammit("=" * 42)
235 raise SystemExit
236
237 if missing_keys:
238 missing_keys.sort()
239 these = len(missing_keys) != 1 and 'these' or 'this'
240 plural = len(missing_keys) != 1 and 's' or ''
241 aspen.log_dammit("=" * 42)
242 aspen.log_dammit( "Oh no! Gittip.com needs %s missing " % these
243 , "environment variable%s:" % plural
244 )
245 aspen.log_dammit(" ")
246 for key in missing_keys:
247 aspen.log_dammit(" " + key)
248 aspen.log_dammit(" ")
249 aspen.log_dammit( "(Sorry, we must've started looking for "
250 , "%s since you last updated Gittip!)" % these
251 )
252 aspen.log_dammit(" ")
253 aspen.log_dammit("Running Gittip locally? Edit ./local.env.")
254 aspen.log_dammit("Running the test suite? Edit ./tests/env.")
255 aspen.log_dammit(" ")
256 aspen.log_dammit("See ./default_local.env for hints.")
257
258 aspen.log_dammit("=" * 42)
259 raise SystemExit
260
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gittip/wireup.py b/gittip/wireup.py
--- a/gittip/wireup.py
+++ b/gittip/wireup.py
@@ -208,6 +208,7 @@
website.js_src = envvar('GITTIP_JS_SRC') \
.replace('%version', website.version)
website.cache_static = is_yesish(envvar('GITTIP_CACHE_STATIC'))
+ website.compress_assets = is_yesish(envvar('GITTIP_COMPRESS_ASSETS'))
website.google_analytics_id = envvar('GOOGLE_ANALYTICS_ID')
website.gauges_id = envvar('GAUGES_ID')
| {"golden_diff": "diff --git a/gittip/wireup.py b/gittip/wireup.py\n--- a/gittip/wireup.py\n+++ b/gittip/wireup.py\n@@ -208,6 +208,7 @@\n website.js_src = envvar('GITTIP_JS_SRC') \\\n .replace('%version', website.version)\n website.cache_static = is_yesish(envvar('GITTIP_CACHE_STATIC'))\n+ website.compress_assets = is_yesish(envvar('GITTIP_COMPRESS_ASSETS'))\n \n website.google_analytics_id = envvar('GOOGLE_ANALYTICS_ID')\n website.gauges_id = envvar('GAUGES_ID')\n", "issue": "Integrate SASS compilation into development environment\nThis started in [IRC](https://botbot.me/freenode/gittip/msg/8682409/).\n\nThe goal here is to use the same SASS compiler that we use in production, in development. We currently use the standard Ruby version, but that won't quite work for development since we shouldn't require all of Ruby just to run a single tool. So it seems the best course of action is to move to a Python-based SASS compiler.\n\n", "before_files": [{"content": "\"\"\"Wireup\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\nimport os\nimport sys\nimport threading\nimport time\n\nimport aspen\nimport balanced\nimport gittip\nimport raven\nimport psycopg2\nimport stripe\nimport gittip.utils.mixpanel\nfrom gittip.models.community import Community\nfrom gittip.models.participant import Participant\nfrom postgres import Postgres\n\n\ndef canonical():\n gittip.canonical_scheme = os.environ['CANONICAL_SCHEME']\n gittip.canonical_host = os.environ['CANONICAL_HOST']\n\n\n# wireup.db() should only ever be called once by the application\ndef db():\n dburl = os.environ['DATABASE_URL']\n maxconn = int(os.environ['DATABASE_MAXCONN'])\n db = Postgres(dburl, maxconn=maxconn)\n\n # register hstore type\n with db.get_cursor() as cursor:\n psycopg2.extras.register_hstore(cursor, globally=True, unicode=True)\n\n db.register_model(Community)\n db.register_model(Participant)\n\n return db\n\n\ndef billing():\n stripe.api_key= os.environ['STRIPE_SECRET_API_KEY']\n stripe.publishable_api_key= os.environ['STRIPE_PUBLISHABLE_API_KEY']\n balanced.configure(os.environ['BALANCED_API_SECRET'])\n\n\ndef username_restrictions(website):\n gittip.RESTRICTED_USERNAMES = os.listdir(website.www_root)\n\n\ndef request_metrics(website):\n def add_start_timestamp(request):\n request.x_start = time.time()\n def log_request_count_and_response_time(response):\n print(\"count#requests=1\")\n response_time = time.time() - response.request.x_start\n print(\"measure#response_time={}ms\".format(response_time * 1000))\n website.hooks.inbound_early.insert(0, add_start_timestamp)\n website.hooks.outbound += [log_request_count_and_response_time]\n\n\ndef sentry(website):\n if not website.sentry_dsn:\n aspen.log_dammit(\"Won't log to Sentry (SENTRY_DSN is empty).\")\n return\n\n sentry = raven.Client(website.sentry_dsn)\n\n def tell_sentry(request):\n cls, response = sys.exc_info()[:2]\n\n\n # Decide if we care.\n # ==================\n\n if cls is aspen.Response:\n\n if response.code < 500:\n\n # Only log server errors to Sentry. For responses < 500 we use\n # stream-/line-based access logging. 
See discussion on:\n\n # https://github.com/gittip/www.gittip.com/pull/1560.\n\n return\n\n\n # Find a user.\n # ============\n # | is disallowed in usernames, so we can use it here to indicate\n # situations in which we can't get a username.\n\n request_context = getattr(request, 'context', None)\n user = {}\n user_id = 'n/a'\n if request_context is None:\n username = '| no context'\n else:\n user = request.context.get('user', None)\n if user is None:\n username = '| no user'\n else:\n is_anon = getattr(user, 'ANON', None)\n if is_anon is None:\n username = '| no ANON'\n elif is_anon:\n username = '| anonymous'\n else:\n participant = getattr(user, 'participant', None)\n if participant is None:\n username = '| no participant'\n else:\n username = getattr(user.participant, 'username', None)\n if username is None:\n username = '| no username'\n else:\n user_id = user.participant.id\n username = username.encode('utf8')\n user = { 'id': user_id\n , 'is_admin': user.participant.is_admin\n , 'is_suspicious': user.participant.is_suspicious\n , 'claimed_time': user.participant.claimed_time.isoformat()\n , 'url': 'https://www.gittip.com/{}/'.format(username)\n }\n\n\n # Fire off a Sentry call.\n # =======================\n\n tags = { 'username': username\n , 'user_id': user_id\n }\n extra = { 'filepath': getattr(request, 'fs', None)\n , 'request': str(request).splitlines()\n , 'user': user\n }\n result = sentry.captureException(tags=tags, extra=extra)\n\n\n # Emit a reference string to stdout.\n # ==================================\n\n ident = sentry.get_ident(result)\n aspen.log_dammit('Exception reference: ' + ident)\n\n\n website.hooks.error_early += [tell_sentry]\n return tell_sentry\n\n\ndef mixpanel(website):\n website.mixpanel_token = os.environ['MIXPANEL_TOKEN']\n gittip.utils.mixpanel.MIXPANEL_TOKEN = os.environ['MIXPANEL_TOKEN']\n\ndef nanswers():\n from gittip.models import participant\n participant.NANSWERS_THRESHOLD = int(os.environ['NANSWERS_THRESHOLD'])\n\ndef nmembers(website):\n from gittip.models import community\n community.NMEMBERS_THRESHOLD = int(os.environ['NMEMBERS_THRESHOLD'])\n website.NMEMBERS_THRESHOLD = community.NMEMBERS_THRESHOLD\n\ndef envvars(website):\n\n missing_keys = []\n malformed_values = []\n\n def envvar(key, cast=None):\n if key not in os.environ:\n missing_keys.append(key)\n return \"\"\n value = os.environ[key].decode('ASCII')\n if cast is not None:\n try:\n value = cast(value)\n except:\n err = str(sys.exc_info()[1])\n malformed_values.append((key, err))\n return \"\"\n return value\n\n def is_yesish(val):\n return val.lower() in ('1', 'true', 'yes')\n\n website.bitbucket_consumer_key = envvar('BITBUCKET_CONSUMER_KEY')\n website.bitbucket_consumer_secret = envvar('BITBUCKET_CONSUMER_SECRET')\n website.bitbucket_callback = envvar('BITBUCKET_CALLBACK')\n\n website.github_client_id = envvar('GITHUB_CLIENT_ID')\n website.github_client_secret = envvar('GITHUB_CLIENT_SECRET')\n website.github_callback = envvar('GITHUB_CALLBACK')\n\n website.twitter_consumer_key = envvar('TWITTER_CONSUMER_KEY')\n website.twitter_consumer_secret = envvar('TWITTER_CONSUMER_SECRET')\n website.twitter_access_token = envvar('TWITTER_ACCESS_TOKEN')\n website.twitter_access_token_secret = envvar('TWITTER_ACCESS_TOKEN_SECRET')\n website.twitter_callback = envvar('TWITTER_CALLBACK')\n\n website.bountysource_www_host = envvar('BOUNTYSOURCE_WWW_HOST')\n website.bountysource_api_host = envvar('BOUNTYSOURCE_API_HOST')\n website.bountysource_api_secret = envvar('BOUNTYSOURCE_API_SECRET')\n 
website.bountysource_callback = envvar('BOUNTYSOURCE_CALLBACK')\n\n website.css_href = envvar('GITTIP_CSS_HREF') \\\n .replace('%version', website.version)\n website.js_src = envvar('GITTIP_JS_SRC') \\\n .replace('%version', website.version)\n website.cache_static = is_yesish(envvar('GITTIP_CACHE_STATIC'))\n\n website.google_analytics_id = envvar('GOOGLE_ANALYTICS_ID')\n website.gauges_id = envvar('GAUGES_ID')\n website.sentry_dsn = envvar('SENTRY_DSN')\n\n website.min_threads = envvar('MIN_THREADS', int)\n website.log_busy_threads_every = envvar('LOG_BUSY_THREADS_EVERY', int)\n website.log_metrics = is_yesish(envvar('LOG_METRICS'))\n\n if malformed_values:\n malformed_values.sort()\n these = len(malformed_values) != 1 and 'these' or 'this'\n plural = len(malformed_values) != 1 and 's' or ''\n aspen.log_dammit(\"=\" * 42)\n aspen.log_dammit( \"Oh no! Gittip.com couldn't understand %s \" % these\n , \"environment variable%s:\" % plural\n )\n aspen.log_dammit(\" \")\n for key, err in malformed_values:\n aspen.log_dammit(\" {} ({})\".format(key, err))\n aspen.log_dammit(\" \")\n aspen.log_dammit(\"See ./default_local.env for hints.\")\n\n aspen.log_dammit(\"=\" * 42)\n raise SystemExit\n\n if missing_keys:\n missing_keys.sort()\n these = len(missing_keys) != 1 and 'these' or 'this'\n plural = len(missing_keys) != 1 and 's' or ''\n aspen.log_dammit(\"=\" * 42)\n aspen.log_dammit( \"Oh no! Gittip.com needs %s missing \" % these\n , \"environment variable%s:\" % plural\n )\n aspen.log_dammit(\" \")\n for key in missing_keys:\n aspen.log_dammit(\" \" + key)\n aspen.log_dammit(\" \")\n aspen.log_dammit( \"(Sorry, we must've started looking for \"\n , \"%s since you last updated Gittip!)\" % these\n )\n aspen.log_dammit(\" \")\n aspen.log_dammit(\"Running Gittip locally? Edit ./local.env.\")\n aspen.log_dammit(\"Running the test suite? 
Edit ./tests/env.\")\n aspen.log_dammit(\" \")\n aspen.log_dammit(\"See ./default_local.env for hints.\")\n\n aspen.log_dammit(\"=\" * 42)\n raise SystemExit\n", "path": "gittip/wireup.py"}], "after_files": [{"content": "\"\"\"Wireup\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\nimport os\nimport sys\nimport threading\nimport time\n\nimport aspen\nimport balanced\nimport gittip\nimport raven\nimport psycopg2\nimport stripe\nimport gittip.utils.mixpanel\nfrom gittip.models.community import Community\nfrom gittip.models.participant import Participant\nfrom postgres import Postgres\n\n\ndef canonical():\n gittip.canonical_scheme = os.environ['CANONICAL_SCHEME']\n gittip.canonical_host = os.environ['CANONICAL_HOST']\n\n\n# wireup.db() should only ever be called once by the application\ndef db():\n dburl = os.environ['DATABASE_URL']\n maxconn = int(os.environ['DATABASE_MAXCONN'])\n db = Postgres(dburl, maxconn=maxconn)\n\n # register hstore type\n with db.get_cursor() as cursor:\n psycopg2.extras.register_hstore(cursor, globally=True, unicode=True)\n\n db.register_model(Community)\n db.register_model(Participant)\n\n return db\n\n\ndef billing():\n stripe.api_key= os.environ['STRIPE_SECRET_API_KEY']\n stripe.publishable_api_key= os.environ['STRIPE_PUBLISHABLE_API_KEY']\n balanced.configure(os.environ['BALANCED_API_SECRET'])\n\n\ndef username_restrictions(website):\n gittip.RESTRICTED_USERNAMES = os.listdir(website.www_root)\n\n\ndef request_metrics(website):\n def add_start_timestamp(request):\n request.x_start = time.time()\n def log_request_count_and_response_time(response):\n print(\"count#requests=1\")\n response_time = time.time() - response.request.x_start\n print(\"measure#response_time={}ms\".format(response_time * 1000))\n website.hooks.inbound_early.insert(0, add_start_timestamp)\n website.hooks.outbound += [log_request_count_and_response_time]\n\n\ndef sentry(website):\n if not website.sentry_dsn:\n aspen.log_dammit(\"Won't log to Sentry (SENTRY_DSN is empty).\")\n return\n\n sentry = raven.Client(website.sentry_dsn)\n\n def tell_sentry(request):\n cls, response = sys.exc_info()[:2]\n\n\n # Decide if we care.\n # ==================\n\n if cls is aspen.Response:\n\n if response.code < 500:\n\n # Only log server errors to Sentry. For responses < 500 we use\n # stream-/line-based access logging. 
See discussion on:\n\n # https://github.com/gittip/www.gittip.com/pull/1560.\n\n return\n\n\n # Find a user.\n # ============\n # | is disallowed in usernames, so we can use it here to indicate\n # situations in which we can't get a username.\n\n request_context = getattr(request, 'context', None)\n user = {}\n user_id = 'n/a'\n if request_context is None:\n username = '| no context'\n else:\n user = request.context.get('user', None)\n if user is None:\n username = '| no user'\n else:\n is_anon = getattr(user, 'ANON', None)\n if is_anon is None:\n username = '| no ANON'\n elif is_anon:\n username = '| anonymous'\n else:\n participant = getattr(user, 'participant', None)\n if participant is None:\n username = '| no participant'\n else:\n username = getattr(user.participant, 'username', None)\n if username is None:\n username = '| no username'\n else:\n user_id = user.participant.id\n username = username.encode('utf8')\n user = { 'id': user_id\n , 'is_admin': user.participant.is_admin\n , 'is_suspicious': user.participant.is_suspicious\n , 'claimed_time': user.participant.claimed_time.isoformat()\n , 'url': 'https://www.gittip.com/{}/'.format(username)\n }\n\n\n # Fire off a Sentry call.\n # =======================\n\n tags = { 'username': username\n , 'user_id': user_id\n }\n extra = { 'filepath': getattr(request, 'fs', None)\n , 'request': str(request).splitlines()\n , 'user': user\n }\n result = sentry.captureException(tags=tags, extra=extra)\n\n\n # Emit a reference string to stdout.\n # ==================================\n\n ident = sentry.get_ident(result)\n aspen.log_dammit('Exception reference: ' + ident)\n\n\n website.hooks.error_early += [tell_sentry]\n return tell_sentry\n\n\ndef mixpanel(website):\n website.mixpanel_token = os.environ['MIXPANEL_TOKEN']\n gittip.utils.mixpanel.MIXPANEL_TOKEN = os.environ['MIXPANEL_TOKEN']\n\ndef nanswers():\n from gittip.models import participant\n participant.NANSWERS_THRESHOLD = int(os.environ['NANSWERS_THRESHOLD'])\n\ndef nmembers(website):\n from gittip.models import community\n community.NMEMBERS_THRESHOLD = int(os.environ['NMEMBERS_THRESHOLD'])\n website.NMEMBERS_THRESHOLD = community.NMEMBERS_THRESHOLD\n\ndef envvars(website):\n\n missing_keys = []\n malformed_values = []\n\n def envvar(key, cast=None):\n if key not in os.environ:\n missing_keys.append(key)\n return \"\"\n value = os.environ[key].decode('ASCII')\n if cast is not None:\n try:\n value = cast(value)\n except:\n err = str(sys.exc_info()[1])\n malformed_values.append((key, err))\n return \"\"\n return value\n\n def is_yesish(val):\n return val.lower() in ('1', 'true', 'yes')\n\n website.bitbucket_consumer_key = envvar('BITBUCKET_CONSUMER_KEY')\n website.bitbucket_consumer_secret = envvar('BITBUCKET_CONSUMER_SECRET')\n website.bitbucket_callback = envvar('BITBUCKET_CALLBACK')\n\n website.github_client_id = envvar('GITHUB_CLIENT_ID')\n website.github_client_secret = envvar('GITHUB_CLIENT_SECRET')\n website.github_callback = envvar('GITHUB_CALLBACK')\n\n website.twitter_consumer_key = envvar('TWITTER_CONSUMER_KEY')\n website.twitter_consumer_secret = envvar('TWITTER_CONSUMER_SECRET')\n website.twitter_access_token = envvar('TWITTER_ACCESS_TOKEN')\n website.twitter_access_token_secret = envvar('TWITTER_ACCESS_TOKEN_SECRET')\n website.twitter_callback = envvar('TWITTER_CALLBACK')\n\n website.bountysource_www_host = envvar('BOUNTYSOURCE_WWW_HOST')\n website.bountysource_api_host = envvar('BOUNTYSOURCE_API_HOST')\n website.bountysource_api_secret = envvar('BOUNTYSOURCE_API_SECRET')\n 
website.bountysource_callback = envvar('BOUNTYSOURCE_CALLBACK')\n\n website.css_href = envvar('GITTIP_CSS_HREF') \\\n .replace('%version', website.version)\n website.js_src = envvar('GITTIP_JS_SRC') \\\n .replace('%version', website.version)\n website.cache_static = is_yesish(envvar('GITTIP_CACHE_STATIC'))\n website.compress_assets = is_yesish(envvar('GITTIP_COMPRESS_ASSETS'))\n\n website.google_analytics_id = envvar('GOOGLE_ANALYTICS_ID')\n website.gauges_id = envvar('GAUGES_ID')\n website.sentry_dsn = envvar('SENTRY_DSN')\n\n website.min_threads = envvar('MIN_THREADS', int)\n website.log_busy_threads_every = envvar('LOG_BUSY_THREADS_EVERY', int)\n website.log_metrics = is_yesish(envvar('LOG_METRICS'))\n\n if malformed_values:\n malformed_values.sort()\n these = len(malformed_values) != 1 and 'these' or 'this'\n plural = len(malformed_values) != 1 and 's' or ''\n aspen.log_dammit(\"=\" * 42)\n aspen.log_dammit( \"Oh no! Gittip.com couldn't understand %s \" % these\n , \"environment variable%s:\" % plural\n )\n aspen.log_dammit(\" \")\n for key, err in malformed_values:\n aspen.log_dammit(\" {} ({})\".format(key, err))\n aspen.log_dammit(\" \")\n aspen.log_dammit(\"See ./default_local.env for hints.\")\n\n aspen.log_dammit(\"=\" * 42)\n raise SystemExit\n\n if missing_keys:\n missing_keys.sort()\n these = len(missing_keys) != 1 and 'these' or 'this'\n plural = len(missing_keys) != 1 and 's' or ''\n aspen.log_dammit(\"=\" * 42)\n aspen.log_dammit( \"Oh no! Gittip.com needs %s missing \" % these\n , \"environment variable%s:\" % plural\n )\n aspen.log_dammit(\" \")\n for key in missing_keys:\n aspen.log_dammit(\" \" + key)\n aspen.log_dammit(\" \")\n aspen.log_dammit( \"(Sorry, we must've started looking for \"\n , \"%s since you last updated Gittip!)\" % these\n )\n aspen.log_dammit(\" \")\n aspen.log_dammit(\"Running Gittip locally? Edit ./local.env.\")\n aspen.log_dammit(\"Running the test suite? Edit ./tests/env.\")\n aspen.log_dammit(\" \")\n aspen.log_dammit(\"See ./default_local.env for hints.\")\n\n aspen.log_dammit(\"=\" * 42)\n raise SystemExit\n", "path": "gittip/wireup.py"}]} | 3,168 | 148 |
gh_patches_debug_13691 | rasdani/github-patches | git_diff | pre-commit__pre-commit-2187 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
doesn't work in visual studio 2019 on Windows
### describe your issue
When I try to commit from visual studio 2019, I get the following message:
```
/usr/bin/env: ‘bash’: No such file or directory
```
I have confirmed that it works from vscode and command prompt.
I rewrote the `.git/hooks/pre-commit` to below and it worked.
```
#!/usr/bin/env sh
```
As far as I can see, visual studio git hooks only support `sh`.
Are you aware of this problem?
thank you.
### pre-commit --version
pre-commit 2.16.0
### .pre-commit-config.yaml
```yaml
repos:
- repo: https://github.com/pre-commit/mirrors-clang-format
rev: v13.0.0
hooks:
- id: clang-format
```
### ~/.cache/pre-commit/pre-commit.log (if present)
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/commands/install_uninstall.py`
Content:
```
1 import logging
2 import os.path
3 import shlex
4 import shutil
5 import sys
6 from typing import Optional
7 from typing import Sequence
8 from typing import Tuple
9
10 from pre_commit import git
11 from pre_commit import output
12 from pre_commit.clientlib import load_config
13 from pre_commit.repository import all_hooks
14 from pre_commit.repository import install_hook_envs
15 from pre_commit.store import Store
16 from pre_commit.util import make_executable
17 from pre_commit.util import resource_text
18
19
20 logger = logging.getLogger(__name__)
21
22 # This is used to identify the hook file we install
23 PRIOR_HASHES = (
24 b'4d9958c90bc262f47553e2c073f14cfe',
25 b'd8ee923c46731b42cd95cc869add4062',
26 b'49fd668cb42069aa1b6048464be5d395',
27 b'79f09a650522a87b0da915d0d983b2de',
28 b'e358c9dae00eac5d06b38dfdb1e33a8c',
29 )
30 CURRENT_HASH = b'138fd403232d2ddd5efb44317e38bf03'
31 TEMPLATE_START = '# start templated\n'
32 TEMPLATE_END = '# end templated\n'
33
34
35 def _hook_paths(
36 hook_type: str,
37 git_dir: Optional[str] = None,
38 ) -> Tuple[str, str]:
39 git_dir = git_dir if git_dir is not None else git.get_git_dir()
40 pth = os.path.join(git_dir, 'hooks', hook_type)
41 return pth, f'{pth}.legacy'
42
43
44 def is_our_script(filename: str) -> bool:
45 if not os.path.exists(filename): # pragma: win32 no cover (symlink)
46 return False
47 with open(filename, 'rb') as f:
48 contents = f.read()
49 return any(h in contents for h in (CURRENT_HASH,) + PRIOR_HASHES)
50
51
52 def _install_hook_script(
53 config_file: str,
54 hook_type: str,
55 overwrite: bool = False,
56 skip_on_missing_config: bool = False,
57 git_dir: Optional[str] = None,
58 ) -> None:
59 hook_path, legacy_path = _hook_paths(hook_type, git_dir=git_dir)
60
61 os.makedirs(os.path.dirname(hook_path), exist_ok=True)
62
63 # If we have an existing hook, move it to pre-commit.legacy
64 if os.path.lexists(hook_path) and not is_our_script(hook_path):
65 shutil.move(hook_path, legacy_path)
66
67 # If we specify overwrite, we simply delete the legacy file
68 if overwrite and os.path.exists(legacy_path):
69 os.remove(legacy_path)
70 elif os.path.exists(legacy_path):
71 output.write_line(
72 f'Running in migration mode with existing hooks at {legacy_path}\n'
73 f'Use -f to use only pre-commit.',
74 )
75
76 args = ['hook-impl', f'--config={config_file}', f'--hook-type={hook_type}']
77 if skip_on_missing_config:
78 args.append('--skip-on-missing-config')
79
80 with open(hook_path, 'w') as hook_file:
81 contents = resource_text('hook-tmpl')
82 before, rest = contents.split(TEMPLATE_START)
83 _, after = rest.split(TEMPLATE_END)
84
85 hook_file.write(before + TEMPLATE_START)
86 hook_file.write(f'INSTALL_PYTHON={shlex.quote(sys.executable)}\n')
87 # TODO: python3.8+: shlex.join
88 args_s = ' '.join(shlex.quote(part) for part in args)
89 hook_file.write(f'ARGS=({args_s})\n')
90 hook_file.write(TEMPLATE_END + after)
91 make_executable(hook_path)
92
93 output.write_line(f'pre-commit installed at {hook_path}')
94
95
96 def install(
97 config_file: str,
98 store: Store,
99 hook_types: Sequence[str],
100 overwrite: bool = False,
101 hooks: bool = False,
102 skip_on_missing_config: bool = False,
103 git_dir: Optional[str] = None,
104 ) -> int:
105 if git_dir is None and git.has_core_hookpaths_set():
106 logger.error(
107 'Cowardly refusing to install hooks with `core.hooksPath` set.\n'
108 'hint: `git config --unset-all core.hooksPath`',
109 )
110 return 1
111
112 for hook_type in hook_types:
113 _install_hook_script(
114 config_file, hook_type,
115 overwrite=overwrite,
116 skip_on_missing_config=skip_on_missing_config,
117 git_dir=git_dir,
118 )
119
120 if hooks:
121 install_hooks(config_file, store)
122
123 return 0
124
125
126 def install_hooks(config_file: str, store: Store) -> int:
127 install_hook_envs(all_hooks(load_config(config_file), store), store)
128 return 0
129
130
131 def _uninstall_hook_script(hook_type: str) -> None:
132 hook_path, legacy_path = _hook_paths(hook_type)
133
134 # If our file doesn't exist or it isn't ours, gtfo.
135 if not os.path.exists(hook_path) or not is_our_script(hook_path):
136 return
137
138 os.remove(hook_path)
139 output.write_line(f'{hook_type} uninstalled')
140
141 if os.path.exists(legacy_path):
142 os.replace(legacy_path, hook_path)
143 output.write_line(f'Restored previous hooks to {hook_path}')
144
145
146 def uninstall(hook_types: Sequence[str]) -> int:
147 for hook_type in hook_types:
148 _uninstall_hook_script(hook_type)
149 return 0
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/commands/install_uninstall.py b/pre_commit/commands/install_uninstall.py
--- a/pre_commit/commands/install_uninstall.py
+++ b/pre_commit/commands/install_uninstall.py
@@ -82,6 +82,13 @@
before, rest = contents.split(TEMPLATE_START)
_, after = rest.split(TEMPLATE_END)
+ # on windows always use `/bin/sh` since `bash` might not be on PATH
+ # though we use bash-specific features `sh` on windows is actually
+ # bash in "POSIXLY_CORRECT" mode which still supports the features we
+ # use: subshells / arrays
+ if sys.platform == 'win32': # pragma: win32 cover
+ hook_file.write('#!/bin/sh\n')
+
hook_file.write(before + TEMPLATE_START)
hook_file.write(f'INSTALL_PYTHON={shlex.quote(sys.executable)}\n')
# TODO: python3.8+: shlex.join
| {"golden_diff": "diff --git a/pre_commit/commands/install_uninstall.py b/pre_commit/commands/install_uninstall.py\n--- a/pre_commit/commands/install_uninstall.py\n+++ b/pre_commit/commands/install_uninstall.py\n@@ -82,6 +82,13 @@\n before, rest = contents.split(TEMPLATE_START)\n _, after = rest.split(TEMPLATE_END)\n \n+ # on windows always use `/bin/sh` since `bash` might not be on PATH\n+ # though we use bash-specific features `sh` on windows is actually\n+ # bash in \"POSIXLY_CORRECT\" mode which still supports the features we\n+ # use: subshells / arrays\n+ if sys.platform == 'win32': # pragma: win32 cover\n+ hook_file.write('#!/bin/sh\\n')\n+\n hook_file.write(before + TEMPLATE_START)\n hook_file.write(f'INSTALL_PYTHON={shlex.quote(sys.executable)}\\n')\n # TODO: python3.8+: shlex.join\n", "issue": "doesn't work in visual studio 2019 on Windows\n### describe your issue\n\nWhen I try to commit from visual studio 2019, I get the following message:\r\n```\r\n/usr/bin/env: \u2018bash\u2019: No such file or directory\r\n```\r\n\r\nI have confirmed that it works from vscode and command prompt.\r\nI rewrote the `.git/hooks/pre-commit` to below and it worked.\r\n```\r\n#!/usr/bin/env sh\r\n```\r\n\r\nAs far as I can see, visual studio git hooks only support `sh`.\r\nAre you aware of this problem?\r\n\r\nthank you.\r\n\n\n### pre-commit --version\n\npre-commit 2.16.0\n\n### .pre-commit-config.yaml\n\n```yaml\nrepos:\r\n - repo: https://github.com/pre-commit/mirrors-clang-format\r\n rev: v13.0.0\r\n hooks:\r\n - id: clang-format\n```\n\n\n### ~/.cache/pre-commit/pre-commit.log (if present)\n\n_No response_\n", "before_files": [{"content": "import logging\nimport os.path\nimport shlex\nimport shutil\nimport sys\nfrom typing import Optional\nfrom typing import Sequence\nfrom typing import Tuple\n\nfrom pre_commit import git\nfrom pre_commit import output\nfrom pre_commit.clientlib import load_config\nfrom pre_commit.repository import all_hooks\nfrom pre_commit.repository import install_hook_envs\nfrom pre_commit.store import Store\nfrom pre_commit.util import make_executable\nfrom pre_commit.util import resource_text\n\n\nlogger = logging.getLogger(__name__)\n\n# This is used to identify the hook file we install\nPRIOR_HASHES = (\n b'4d9958c90bc262f47553e2c073f14cfe',\n b'd8ee923c46731b42cd95cc869add4062',\n b'49fd668cb42069aa1b6048464be5d395',\n b'79f09a650522a87b0da915d0d983b2de',\n b'e358c9dae00eac5d06b38dfdb1e33a8c',\n)\nCURRENT_HASH = b'138fd403232d2ddd5efb44317e38bf03'\nTEMPLATE_START = '# start templated\\n'\nTEMPLATE_END = '# end templated\\n'\n\n\ndef _hook_paths(\n hook_type: str,\n git_dir: Optional[str] = None,\n) -> Tuple[str, str]:\n git_dir = git_dir if git_dir is not None else git.get_git_dir()\n pth = os.path.join(git_dir, 'hooks', hook_type)\n return pth, f'{pth}.legacy'\n\n\ndef is_our_script(filename: str) -> bool:\n if not os.path.exists(filename): # pragma: win32 no cover (symlink)\n return False\n with open(filename, 'rb') as f:\n contents = f.read()\n return any(h in contents for h in (CURRENT_HASH,) + PRIOR_HASHES)\n\n\ndef _install_hook_script(\n config_file: str,\n hook_type: str,\n overwrite: bool = False,\n skip_on_missing_config: bool = False,\n git_dir: Optional[str] = None,\n) -> None:\n hook_path, legacy_path = _hook_paths(hook_type, git_dir=git_dir)\n\n os.makedirs(os.path.dirname(hook_path), exist_ok=True)\n\n # If we have an existing hook, move it to pre-commit.legacy\n if os.path.lexists(hook_path) and not is_our_script(hook_path):\n shutil.move(hook_path, 
legacy_path)\n\n # If we specify overwrite, we simply delete the legacy file\n if overwrite and os.path.exists(legacy_path):\n os.remove(legacy_path)\n elif os.path.exists(legacy_path):\n output.write_line(\n f'Running in migration mode with existing hooks at {legacy_path}\\n'\n f'Use -f to use only pre-commit.',\n )\n\n args = ['hook-impl', f'--config={config_file}', f'--hook-type={hook_type}']\n if skip_on_missing_config:\n args.append('--skip-on-missing-config')\n\n with open(hook_path, 'w') as hook_file:\n contents = resource_text('hook-tmpl')\n before, rest = contents.split(TEMPLATE_START)\n _, after = rest.split(TEMPLATE_END)\n\n hook_file.write(before + TEMPLATE_START)\n hook_file.write(f'INSTALL_PYTHON={shlex.quote(sys.executable)}\\n')\n # TODO: python3.8+: shlex.join\n args_s = ' '.join(shlex.quote(part) for part in args)\n hook_file.write(f'ARGS=({args_s})\\n')\n hook_file.write(TEMPLATE_END + after)\n make_executable(hook_path)\n\n output.write_line(f'pre-commit installed at {hook_path}')\n\n\ndef install(\n config_file: str,\n store: Store,\n hook_types: Sequence[str],\n overwrite: bool = False,\n hooks: bool = False,\n skip_on_missing_config: bool = False,\n git_dir: Optional[str] = None,\n) -> int:\n if git_dir is None and git.has_core_hookpaths_set():\n logger.error(\n 'Cowardly refusing to install hooks with `core.hooksPath` set.\\n'\n 'hint: `git config --unset-all core.hooksPath`',\n )\n return 1\n\n for hook_type in hook_types:\n _install_hook_script(\n config_file, hook_type,\n overwrite=overwrite,\n skip_on_missing_config=skip_on_missing_config,\n git_dir=git_dir,\n )\n\n if hooks:\n install_hooks(config_file, store)\n\n return 0\n\n\ndef install_hooks(config_file: str, store: Store) -> int:\n install_hook_envs(all_hooks(load_config(config_file), store), store)\n return 0\n\n\ndef _uninstall_hook_script(hook_type: str) -> None:\n hook_path, legacy_path = _hook_paths(hook_type)\n\n # If our file doesn't exist or it isn't ours, gtfo.\n if not os.path.exists(hook_path) or not is_our_script(hook_path):\n return\n\n os.remove(hook_path)\n output.write_line(f'{hook_type} uninstalled')\n\n if os.path.exists(legacy_path):\n os.replace(legacy_path, hook_path)\n output.write_line(f'Restored previous hooks to {hook_path}')\n\n\ndef uninstall(hook_types: Sequence[str]) -> int:\n for hook_type in hook_types:\n _uninstall_hook_script(hook_type)\n return 0\n", "path": "pre_commit/commands/install_uninstall.py"}], "after_files": [{"content": "import logging\nimport os.path\nimport shlex\nimport shutil\nimport sys\nfrom typing import Optional\nfrom typing import Sequence\nfrom typing import Tuple\n\nfrom pre_commit import git\nfrom pre_commit import output\nfrom pre_commit.clientlib import load_config\nfrom pre_commit.repository import all_hooks\nfrom pre_commit.repository import install_hook_envs\nfrom pre_commit.store import Store\nfrom pre_commit.util import make_executable\nfrom pre_commit.util import resource_text\n\n\nlogger = logging.getLogger(__name__)\n\n# This is used to identify the hook file we install\nPRIOR_HASHES = (\n b'4d9958c90bc262f47553e2c073f14cfe',\n b'd8ee923c46731b42cd95cc869add4062',\n b'49fd668cb42069aa1b6048464be5d395',\n b'79f09a650522a87b0da915d0d983b2de',\n b'e358c9dae00eac5d06b38dfdb1e33a8c',\n)\nCURRENT_HASH = b'138fd403232d2ddd5efb44317e38bf03'\nTEMPLATE_START = '# start templated\\n'\nTEMPLATE_END = '# end templated\\n'\n\n\ndef _hook_paths(\n hook_type: str,\n git_dir: Optional[str] = None,\n) -> Tuple[str, str]:\n git_dir = git_dir if git_dir is not 
None else git.get_git_dir()\n pth = os.path.join(git_dir, 'hooks', hook_type)\n return pth, f'{pth}.legacy'\n\n\ndef is_our_script(filename: str) -> bool:\n if not os.path.exists(filename): # pragma: win32 no cover (symlink)\n return False\n with open(filename, 'rb') as f:\n contents = f.read()\n return any(h in contents for h in (CURRENT_HASH,) + PRIOR_HASHES)\n\n\ndef _install_hook_script(\n config_file: str,\n hook_type: str,\n overwrite: bool = False,\n skip_on_missing_config: bool = False,\n git_dir: Optional[str] = None,\n) -> None:\n hook_path, legacy_path = _hook_paths(hook_type, git_dir=git_dir)\n\n os.makedirs(os.path.dirname(hook_path), exist_ok=True)\n\n # If we have an existing hook, move it to pre-commit.legacy\n if os.path.lexists(hook_path) and not is_our_script(hook_path):\n shutil.move(hook_path, legacy_path)\n\n # If we specify overwrite, we simply delete the legacy file\n if overwrite and os.path.exists(legacy_path):\n os.remove(legacy_path)\n elif os.path.exists(legacy_path):\n output.write_line(\n f'Running in migration mode with existing hooks at {legacy_path}\\n'\n f'Use -f to use only pre-commit.',\n )\n\n args = ['hook-impl', f'--config={config_file}', f'--hook-type={hook_type}']\n if skip_on_missing_config:\n args.append('--skip-on-missing-config')\n\n with open(hook_path, 'w') as hook_file:\n contents = resource_text('hook-tmpl')\n before, rest = contents.split(TEMPLATE_START)\n _, after = rest.split(TEMPLATE_END)\n\n # on windows always use `/bin/sh` since `bash` might not be on PATH\n # though we use bash-specific features `sh` on windows is actually\n # bash in \"POSIXLY_CORRECT\" mode which still supports the features we\n # use: subshells / arrays\n if sys.platform == 'win32': # pragma: win32 cover\n hook_file.write('#!/bin/sh\\n')\n\n hook_file.write(before + TEMPLATE_START)\n hook_file.write(f'INSTALL_PYTHON={shlex.quote(sys.executable)}\\n')\n # TODO: python3.8+: shlex.join\n args_s = ' '.join(shlex.quote(part) for part in args)\n hook_file.write(f'ARGS=({args_s})\\n')\n hook_file.write(TEMPLATE_END + after)\n make_executable(hook_path)\n\n output.write_line(f'pre-commit installed at {hook_path}')\n\n\ndef install(\n config_file: str,\n store: Store,\n hook_types: Sequence[str],\n overwrite: bool = False,\n hooks: bool = False,\n skip_on_missing_config: bool = False,\n git_dir: Optional[str] = None,\n) -> int:\n if git_dir is None and git.has_core_hookpaths_set():\n logger.error(\n 'Cowardly refusing to install hooks with `core.hooksPath` set.\\n'\n 'hint: `git config --unset-all core.hooksPath`',\n )\n return 1\n\n for hook_type in hook_types:\n _install_hook_script(\n config_file, hook_type,\n overwrite=overwrite,\n skip_on_missing_config=skip_on_missing_config,\n git_dir=git_dir,\n )\n\n if hooks:\n install_hooks(config_file, store)\n\n return 0\n\n\ndef install_hooks(config_file: str, store: Store) -> int:\n install_hook_envs(all_hooks(load_config(config_file), store), store)\n return 0\n\n\ndef _uninstall_hook_script(hook_type: str) -> None:\n hook_path, legacy_path = _hook_paths(hook_type)\n\n # If our file doesn't exist or it isn't ours, gtfo.\n if not os.path.exists(hook_path) or not is_our_script(hook_path):\n return\n\n os.remove(hook_path)\n output.write_line(f'{hook_type} uninstalled')\n\n if os.path.exists(legacy_path):\n os.replace(legacy_path, hook_path)\n output.write_line(f'Restored previous hooks to {hook_path}')\n\n\ndef uninstall(hook_types: Sequence[str]) -> int:\n for hook_type in hook_types:\n _uninstall_hook_script(hook_type)\n 
return 0\n", "path": "pre_commit/commands/install_uninstall.py"}]} | 2,097 | 222 |
gh_patches_debug_23069 | rasdani/github-patches | git_diff | bokeh__bokeh-6911 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bokeh DateRangeSlider returns tuple of integers instead of dates
In 0.12.7 the DateRangeSlider `.value` returns a tuple of integers. Additionally, in order to convert them back to dates you have to divide by 1000 (i.e. `datetime.fromtimestamp(the_tuple[0]/1000)`).
Expected behavior: Return a tuple of Date objects.
```
from datetime import datetime
from bokeh.models.widgets import DateRangeSlider
from bokeh.io import curdoc
def date_range_update(attrname, old, new):
print('-- range values:', date_slider.value)
# Works
d1 = datetime.fromtimestamp(date_slider.value[0] / 1000)
# Does not Work, gives error
d2 = datetime.fromtimestamp(date_slider.value[0])
date_slider = DateRangeSlider(value=(date_start,date_end), start=date_start, end=date_end)
date_slider.on_change('value', date_range_update)
curdoc().add_root(date_slider)
```
#### Stack traceback and/or browser JavaScript console output
#### Screenshots or screencasts of the bug in action
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bokeh/models/widgets/sliders.py`
Content:
```
1 """ Various kinds of slider widgets.
2
3 """
4 from __future__ import absolute_import
5
6 from ...core.has_props import abstract
7 from ...core.properties import Bool, Int, Float, String, Date, Enum, Tuple, Instance, Color, Override
8 from ...core.enums import SliderCallbackPolicy
9 from ..callbacks import Callback
10 from .widget import Widget
11
12 @abstract
13 class AbstractSlider(Widget):
14 """ """
15
16 title = String(default="", help="""
17 Slider's label.
18 """)
19
20 show_value = Bool(default=True, help="""
21 Whether or not show slider's value.
22 """)
23
24 format = String(help="""
25 """)
26
27 orientation = Enum("horizontal", "vertical", help="""
28 Orient the slider either horizontally (default) or vertically.
29 """)
30
31 direction = Enum("ltr", "rtl", help="""
32 """)
33
34 tooltips = Bool(default=True, help="""
35 """)
36
37 callback = Instance(Callback, help="""
38 A callback to run in the browser whenever the current Slider value changes.
39 """)
40
41 callback_throttle = Float(default=200, help="""
42 Number of millseconds to pause between callback calls as the slider is moved.
43 """)
44
45 callback_policy = Enum(SliderCallbackPolicy, default="throttle", help="""
46 When the callback is initiated. This parameter can take on only one of three options:
47
48 * "continuous": the callback will be executed immediately for each movement of the slider
49 * "throttle": the callback will be executed at most every ``callback_throttle`` milliseconds.
50 * "mouseup": the callback will be executed only once when the slider is released.
51
52 The "mouseup" policy is intended for scenarios in which the callback is expensive in time.
53 """)
54
55 bar_color = Color(default="#e6e6e6", help="""
56 """)
57
58 class Slider(AbstractSlider):
59 """ Slider-based number selection widget. """
60
61 start = Float(help="""
62 The minimum allowable value.
63 """)
64
65 end = Float(help="""
66 The maximum allowable value.
67 """)
68
69 value = Float(help="""
70 Initial or selected value.
71 """)
72
73 step = Float(default=1, help="""
74 The step between consecutive values.
75 """)
76
77 format = Override(default="0[.]00")
78
79 class RangeSlider(AbstractSlider):
80 """ Range-slider based number range selection widget. """
81
82 value = Tuple(Float, Float, help="""
83 Initial or selected range.
84 """)
85
86 start = Float(help="""
87 The minimum allowable value.
88 """)
89
90 end = Float(help="""
91 The maximum allowable value.
92 """)
93
94 step = Float(default=1, help="""
95 The step between consecutive values.
96 """)
97
98 format = Override(default="0[.]00")
99
100 class DateSlider(AbstractSlider):
101 """ Slider-based date selection widget. """
102
103 value = Date(help="""
104 Initial or selected value.
105 """)
106
107 start = Date(help="""
108 The minimum allowable value.
109 """)
110
111 end = Date(help="""
112 The maximum allowable value.
113 """)
114
115 step = Int(default=1, help="""
116 The step between consecutive values.
117 """)
118
119 format = Override(default="%d %b %G")
120
121 class DateRangeSlider(AbstractSlider):
122 """ Slider-based date range selection widget. """
123
124 value = Tuple(Date, Date, help="""
125 Initial or selected range.
126 """)
127
128 start = Date(help="""
129 The minimum allowable value.
130 """)
131
132 end = Date(help="""
133 The maximum allowable value.
134 """)
135
136 step = Int(default=1, help="""
137 The step between consecutive values.
138 """)
139
140 format = Override(default="%d %b %G")
141
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bokeh/models/widgets/sliders.py b/bokeh/models/widgets/sliders.py
--- a/bokeh/models/widgets/sliders.py
+++ b/bokeh/models/widgets/sliders.py
@@ -3,6 +3,9 @@
"""
from __future__ import absolute_import
+from datetime import datetime
+import numbers
+
from ...core.has_props import abstract
from ...core.properties import Bool, Int, Float, String, Date, Enum, Tuple, Instance, Color, Override
from ...core.enums import SliderCallbackPolicy
@@ -121,6 +124,25 @@
class DateRangeSlider(AbstractSlider):
""" Slider-based date range selection widget. """
+ @property
+ def value_as_datetime(self):
+ ''' Convenience property to retrieve the value tuple as a tuple of
+ datetime objects.
+
+ '''
+ if self.value is None:
+ return None
+ v1, v2 = self.value
+ if isinstance(v1, numbers.Number):
+ d1 = datetime.utcfromtimestamp(v1 / 1000)
+ else:
+ d1 = v1
+ if isinstance(v2, numbers.Number):
+ d2 = datetime.utcfromtimestamp(v2 / 1000)
+ else:
+ d2 = v2
+ return d1, d2
+
value = Tuple(Date, Date, help="""
Initial or selected range.
""")
| {"golden_diff": "diff --git a/bokeh/models/widgets/sliders.py b/bokeh/models/widgets/sliders.py\n--- a/bokeh/models/widgets/sliders.py\n+++ b/bokeh/models/widgets/sliders.py\n@@ -3,6 +3,9 @@\n \"\"\"\n from __future__ import absolute_import\n \n+from datetime import datetime\n+import numbers\n+\n from ...core.has_props import abstract\n from ...core.properties import Bool, Int, Float, String, Date, Enum, Tuple, Instance, Color, Override\n from ...core.enums import SliderCallbackPolicy\n@@ -121,6 +124,25 @@\n class DateRangeSlider(AbstractSlider):\n \"\"\" Slider-based date range selection widget. \"\"\"\n \n+ @property\n+ def value_as_datetime(self):\n+ ''' Convenience property to retrieve the value tuple as a tuple of\n+ datetime objects.\n+\n+ '''\n+ if self.value is None:\n+ return None\n+ v1, v2 = self.value\n+ if isinstance(v1, numbers.Number):\n+ d1 = datetime.utcfromtimestamp(v1 / 1000)\n+ else:\n+ d1 = v1\n+ if isinstance(v2, numbers.Number):\n+ d2 = datetime.utcfromtimestamp(v2 / 1000)\n+ else:\n+ d2 = v2\n+ return d1, d2\n+\n value = Tuple(Date, Date, help=\"\"\"\n Initial or selected range.\n \"\"\")\n", "issue": "Bokeh DateRangeSlider returns tuple of integers instead of dates\nIn 0.12.7 the DateRangeSlider `.value` returns a tuple of integers. Additionally, in order to convert them back to dates you have to divide by 1000 (i.e. `datetime.fromtimestamp(the_tuple[0]/1000)`).\r\n\r\nExpected behavior: Return a tuple of Date objects.\r\n\r\n```\r\nfrom datetime import datetime\r\nfrom bokeh.models.widgets import DateRangeSlider\r\nfrom bokeh.io import curdoc\r\n\r\ndef date_range_update(attrname, old, new):\r\n print('-- range values:', date_slider.value)\r\n # Works\r\n d1 = datetime.fromtimestamp(date_slider.value[0] / 1000) \r\n # Does not Work, gives error\r\n d2 = datetime.fromtimestamp(date_slider.value[0])\r\n\r\ndate_slider = DateRangeSlider(value=(date_start,date_end), start=date_start, end=date_end)\r\ndate_slider.on_change('value', date_range_update)\r\n\r\ncurdoc().add_root(date_slider)\r\n\r\n```\r\n\r\n#### Stack traceback and/or browser JavaScript console output\r\n\r\n#### Screenshots or screencasts of the bug in action\r\n\n", "before_files": [{"content": "\"\"\" Various kinds of slider widgets.\n\n\"\"\"\nfrom __future__ import absolute_import\n\nfrom ...core.has_props import abstract\nfrom ...core.properties import Bool, Int, Float, String, Date, Enum, Tuple, Instance, Color, Override\nfrom ...core.enums import SliderCallbackPolicy\nfrom ..callbacks import Callback\nfrom .widget import Widget\n\n@abstract\nclass AbstractSlider(Widget):\n \"\"\" \"\"\"\n\n title = String(default=\"\", help=\"\"\"\n Slider's label.\n \"\"\")\n\n show_value = Bool(default=True, help=\"\"\"\n Whether or not show slider's value.\n \"\"\")\n\n format = String(help=\"\"\"\n \"\"\")\n\n orientation = Enum(\"horizontal\", \"vertical\", help=\"\"\"\n Orient the slider either horizontally (default) or vertically.\n \"\"\")\n\n direction = Enum(\"ltr\", \"rtl\", help=\"\"\"\n \"\"\")\n\n tooltips = Bool(default=True, help=\"\"\"\n \"\"\")\n\n callback = Instance(Callback, help=\"\"\"\n A callback to run in the browser whenever the current Slider value changes.\n \"\"\")\n\n callback_throttle = Float(default=200, help=\"\"\"\n Number of millseconds to pause between callback calls as the slider is moved.\n \"\"\")\n\n callback_policy = Enum(SliderCallbackPolicy, default=\"throttle\", help=\"\"\"\n When the callback is initiated. 
This parameter can take on only one of three options:\n\n * \"continuous\": the callback will be executed immediately for each movement of the slider\n * \"throttle\": the callback will be executed at most every ``callback_throttle`` milliseconds.\n * \"mouseup\": the callback will be executed only once when the slider is released.\n\n The \"mouseup\" policy is intended for scenarios in which the callback is expensive in time.\n \"\"\")\n\n bar_color = Color(default=\"#e6e6e6\", help=\"\"\"\n \"\"\")\n\nclass Slider(AbstractSlider):\n \"\"\" Slider-based number selection widget. \"\"\"\n\n start = Float(help=\"\"\"\n The minimum allowable value.\n \"\"\")\n\n end = Float(help=\"\"\"\n The maximum allowable value.\n \"\"\")\n\n value = Float(help=\"\"\"\n Initial or selected value.\n \"\"\")\n\n step = Float(default=1, help=\"\"\"\n The step between consecutive values.\n \"\"\")\n\n format = Override(default=\"0[.]00\")\n\nclass RangeSlider(AbstractSlider):\n \"\"\" Range-slider based number range selection widget. \"\"\"\n\n value = Tuple(Float, Float, help=\"\"\"\n Initial or selected range.\n \"\"\")\n\n start = Float(help=\"\"\"\n The minimum allowable value.\n \"\"\")\n\n end = Float(help=\"\"\"\n The maximum allowable value.\n \"\"\")\n\n step = Float(default=1, help=\"\"\"\n The step between consecutive values.\n \"\"\")\n\n format = Override(default=\"0[.]00\")\n\nclass DateSlider(AbstractSlider):\n \"\"\" Slider-based date selection widget. \"\"\"\n\n value = Date(help=\"\"\"\n Initial or selected value.\n \"\"\")\n\n start = Date(help=\"\"\"\n The minimum allowable value.\n \"\"\")\n\n end = Date(help=\"\"\"\n The maximum allowable value.\n \"\"\")\n\n step = Int(default=1, help=\"\"\"\n The step between consecutive values.\n \"\"\")\n\n format = Override(default=\"%d %b %G\")\n\nclass DateRangeSlider(AbstractSlider):\n \"\"\" Slider-based date range selection widget. 
\"\"\"\n\n value = Tuple(Date, Date, help=\"\"\"\n Initial or selected range.\n \"\"\")\n\n start = Date(help=\"\"\"\n The minimum allowable value.\n \"\"\")\n\n end = Date(help=\"\"\"\n The maximum allowable value.\n \"\"\")\n\n step = Int(default=1, help=\"\"\"\n The step between consecutive values.\n \"\"\")\n\n format = Override(default=\"%d %b %G\")\n", "path": "bokeh/models/widgets/sliders.py"}], "after_files": [{"content": "\"\"\" Various kinds of slider widgets.\n\n\"\"\"\nfrom __future__ import absolute_import\n\nfrom datetime import datetime\nimport numbers\n\nfrom ...core.has_props import abstract\nfrom ...core.properties import Bool, Int, Float, String, Date, Enum, Tuple, Instance, Color, Override\nfrom ...core.enums import SliderCallbackPolicy\nfrom ..callbacks import Callback\nfrom .widget import Widget\n\n@abstract\nclass AbstractSlider(Widget):\n \"\"\" \"\"\"\n\n title = String(default=\"\", help=\"\"\"\n Slider's label.\n \"\"\")\n\n show_value = Bool(default=True, help=\"\"\"\n Whether or not show slider's value.\n \"\"\")\n\n format = String(help=\"\"\"\n \"\"\")\n\n orientation = Enum(\"horizontal\", \"vertical\", help=\"\"\"\n Orient the slider either horizontally (default) or vertically.\n \"\"\")\n\n direction = Enum(\"ltr\", \"rtl\", help=\"\"\"\n \"\"\")\n\n tooltips = Bool(default=True, help=\"\"\"\n \"\"\")\n\n callback = Instance(Callback, help=\"\"\"\n A callback to run in the browser whenever the current Slider value changes.\n \"\"\")\n\n callback_throttle = Float(default=200, help=\"\"\"\n Number of millseconds to pause between callback calls as the slider is moved.\n \"\"\")\n\n callback_policy = Enum(SliderCallbackPolicy, default=\"throttle\", help=\"\"\"\n When the callback is initiated. This parameter can take on only one of three options:\n\n * \"continuous\": the callback will be executed immediately for each movement of the slider\n * \"throttle\": the callback will be executed at most every ``callback_throttle`` milliseconds.\n * \"mouseup\": the callback will be executed only once when the slider is released.\n\n The \"mouseup\" policy is intended for scenarios in which the callback is expensive in time.\n \"\"\")\n\n bar_color = Color(default=\"#e6e6e6\", help=\"\"\"\n \"\"\")\n\nclass Slider(AbstractSlider):\n \"\"\" Slider-based number selection widget. \"\"\"\n\n start = Float(help=\"\"\"\n The minimum allowable value.\n \"\"\")\n\n end = Float(help=\"\"\"\n The maximum allowable value.\n \"\"\")\n\n value = Float(help=\"\"\"\n Initial or selected value.\n \"\"\")\n\n step = Float(default=1, help=\"\"\"\n The step between consecutive values.\n \"\"\")\n\n format = Override(default=\"0[.]00\")\n\nclass RangeSlider(AbstractSlider):\n \"\"\" Range-slider based number range selection widget. \"\"\"\n\n value = Tuple(Float, Float, help=\"\"\"\n Initial or selected range.\n \"\"\")\n\n start = Float(help=\"\"\"\n The minimum allowable value.\n \"\"\")\n\n end = Float(help=\"\"\"\n The maximum allowable value.\n \"\"\")\n\n step = Float(default=1, help=\"\"\"\n The step between consecutive values.\n \"\"\")\n\n format = Override(default=\"0[.]00\")\n\nclass DateSlider(AbstractSlider):\n \"\"\" Slider-based date selection widget. 
\"\"\"\n\n value = Date(help=\"\"\"\n Initial or selected value.\n \"\"\")\n\n start = Date(help=\"\"\"\n The minimum allowable value.\n \"\"\")\n\n end = Date(help=\"\"\"\n The maximum allowable value.\n \"\"\")\n\n step = Int(default=1, help=\"\"\"\n The step between consecutive values.\n \"\"\")\n\n format = Override(default=\"%d %b %G\")\n\nclass DateRangeSlider(AbstractSlider):\n \"\"\" Slider-based date range selection widget. \"\"\"\n\n @property\n def value_as_datetime(self):\n ''' Convenience property to retrieve the value tuple as a tuple of\n datetime objects.\n\n '''\n if self.value is None:\n return None\n v1, v2 = self.value\n if isinstance(v1, numbers.Number):\n d1 = datetime.utcfromtimestamp(v1 / 1000)\n else:\n d1 = v1\n if isinstance(v2, numbers.Number):\n d2 = datetime.utcfromtimestamp(v2 / 1000)\n else:\n d2 = v2\n return d1, d2\n\n value = Tuple(Date, Date, help=\"\"\"\n Initial or selected range.\n \"\"\")\n\n start = Date(help=\"\"\"\n The minimum allowable value.\n \"\"\")\n\n end = Date(help=\"\"\"\n The maximum allowable value.\n \"\"\")\n\n step = Int(default=1, help=\"\"\"\n The step between consecutive values.\n \"\"\")\n\n format = Override(default=\"%d %b %G\")\n", "path": "bokeh/models/widgets/sliders.py"}]} | 1,639 | 320 |
gh_patches_debug_10102 | rasdani/github-patches | git_diff | fossasia__open-event-server-4882 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Session allowed to be created without an associated Track
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Sessions can be created without being associated with a Track
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
It shouldn't be possible
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/sessions.py`
Content:
```
1 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
2
3 from app.api.bootstrap import api
4 from app.api.events import Event
5 from app.api.helpers.db import safe_query, get_count
6 from app.api.helpers.exceptions import ForbiddenException
7 from app.api.helpers.mail import send_email_new_session, send_email_session_accept_reject
8 from app.api.helpers.notification import send_notif_new_session_organizer, send_notif_session_accept_reject
9 from app.api.helpers.permissions import current_identity
10 from app.api.helpers.query import event_query
11 from app.api.helpers.utilities import require_relationship
12 from app.api.schema.sessions import SessionSchema
13 from app.models import db
14 from app.models.microlocation import Microlocation
15 from app.models.session import Session
16 from app.models.session_type import SessionType
17 from app.models.speaker import Speaker
18 from app.models.track import Track
19 from app.models.user import User
20 from app.settings import get_settings
21
22
23 class SessionListPost(ResourceList):
24 """
25 List Sessions
26 """
27 def before_post(self, args, kwargs, data):
28 """
29 before post method to check for required relationship and proper permission
30 :param args:
31 :param kwargs:
32 :param data:
33 :return:
34 """
35 require_relationship(['event'], data)
36 data['creator_id'] = current_identity.id
37 if get_count(db.session.query(Event).filter_by(id=int(data['event']), is_sessions_speakers_enabled=False)) > 0:
38 raise ForbiddenException({'pointer': ''}, "Sessions are disabled for this Event")
39
40 def after_create_object(self, session, data, view_kwargs):
41 """
42 method to send email for creation of new session
43 mails session link to the concerned user
44 :param session:
45 :param data:
46 :param view_kwargs:
47 :return:
48 """
49 if session.event.get_organizer():
50 event_name = session.event.name
51 organizer = session.event.get_organizer()
52 organizer_email = organizer.email
53 frontend_url = get_settings()['frontend_url']
54 link = "{}/events/{}/sessions/{}"\
55 .format(frontend_url, session.event_id, session.id)
56 send_email_new_session(organizer_email, event_name, link)
57 send_notif_new_session_organizer(organizer, event_name, link)
58
59 decorators = (api.has_permission('create_event'),)
60 schema = SessionSchema
61 data_layer = {'session': db.session,
62 'model': Session,
63 'methods': {'after_create_object': after_create_object
64 }}
65
66
67 class SessionList(ResourceList):
68 """
69 List Sessions
70 """
71
72 def query(self, view_kwargs):
73 """
74 query method for SessionList class
75 :param view_kwargs:
76 :return:
77 """
78 query_ = self.session.query(Session)
79 if view_kwargs.get('track_id') is not None:
80 track = safe_query(self, Track, 'id', view_kwargs['track_id'], 'track_id')
81 query_ = query_.join(Track).filter(Track.id == track.id)
82 if view_kwargs.get('session_type_id') is not None:
83 session_type = safe_query(self, SessionType, 'id', view_kwargs['session_type_id'], 'session_type_id')
84 query_ = query_.join(SessionType).filter(SessionType.id == session_type.id)
85 if view_kwargs.get('microlocation_id') is not None:
86 microlocation = safe_query(self, Microlocation, 'id', view_kwargs['microlocation_id'], 'microlocation_id')
87 query_ = query_.join(Microlocation).filter(Microlocation.id == microlocation.id)
88 if view_kwargs.get('user_id') is not None:
89 user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')
90 query_ = query_.join(User).filter(User.id == user.id)
91 query_ = event_query(self, query_, view_kwargs)
92 if view_kwargs.get('speaker_id'):
93 speaker = safe_query(self, Speaker, 'id', view_kwargs['speaker_id'], 'speaker_id')
94 # session-speaker :: many-to-many relationship
95 query_ = Session.query.filter(Session.speakers.any(id=speaker.id))
96
97 return query_
98
99 view_kwargs = True
100 methods = ['GET']
101 schema = SessionSchema
102 data_layer = {'session': db.session,
103 'model': Session,
104 'methods': {
105 'query': query
106 }}
107
108
109 class SessionDetail(ResourceDetail):
110 """
111 Session detail by id
112 """
113 def before_get_object(self, view_kwargs):
114 """
115 before get method to get the resource id for fetching details
116 :param view_kwargs:
117 :return:
118 """
119 if view_kwargs.get('event_identifier'):
120 event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'identifier')
121 view_kwargs['event_id'] = event.id
122
123 def after_update_object(self, session, data, view_kwargs):
124 """ Send email if session accepted or rejected """
125
126 if 'state' in data and data.get('send_email', None) and (session.state == 'accepted' or
127 session.state == 'rejected'):
128 # Email for speaker
129 speakers = session.speakers
130 for speaker in speakers:
131 frontend_url = get_settings()['frontend_url']
132 link = "{}/events/{}/sessions/{}" \
133 .format(frontend_url, session.event_id, session.id)
134 send_email_session_accept_reject(speaker.email, session, link)
135 send_notif_session_accept_reject(speaker, session.title, session.state, link)
136
137 # Email for organizer
138 if session.event.get_organizer():
139 organizer = session.event.get_organizer()
140 organizer_email = organizer.email
141 frontend_url = get_settings()['frontend_url']
142 link = "{}/events/{}/sessions/{}" \
143 .format(frontend_url, session.event_id, session.id)
144 send_email_session_accept_reject(organizer_email, session,
145 link)
146 send_notif_session_accept_reject(organizer, session.title,
147 session.state, link)
148
149 decorators = (api.has_permission('is_speaker_for_session', methods="PATCH,DELETE"),)
150 schema = SessionSchema
151 data_layer = {'session': db.session,
152 'model': Session,
153 'methods': {'before_get_object': before_get_object,
154 'after_update_object': after_update_object}}
155
156
157 class SessionRelationshipRequired(ResourceRelationship):
158 """
159 Session Relationship
160 """
161 schema = SessionSchema
162 decorators = (api.has_permission('is_speaker_for_session', methods="PATCH,DELETE"),)
163 methods = ['GET', 'PATCH']
164 data_layer = {'session': db.session,
165 'model': Session}
166
167
168 class SessionRelationshipOptional(ResourceRelationship):
169 """
170 Session Relationship
171 """
172 schema = SessionSchema
173 decorators = (api.has_permission('is_speaker_for_session', methods="PATCH,DELETE"),)
174 data_layer = {'session': db.session,
175 'model': Session}
176
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/api/sessions.py b/app/api/sessions.py
--- a/app/api/sessions.py
+++ b/app/api/sessions.py
@@ -32,7 +32,7 @@
:param data:
:return:
"""
- require_relationship(['event'], data)
+ require_relationship(['event', 'track'], data)
data['creator_id'] = current_identity.id
if get_count(db.session.query(Event).filter_by(id=int(data['event']), is_sessions_speakers_enabled=False)) > 0:
raise ForbiddenException({'pointer': ''}, "Sessions are disabled for this Event")
| {"golden_diff": "diff --git a/app/api/sessions.py b/app/api/sessions.py\n--- a/app/api/sessions.py\n+++ b/app/api/sessions.py\n@@ -32,7 +32,7 @@\n :param data:\n :return:\n \"\"\"\n- require_relationship(['event'], data)\n+ require_relationship(['event', 'track'], data)\n data['creator_id'] = current_identity.id\n if get_count(db.session.query(Event).filter_by(id=int(data['event']), is_sessions_speakers_enabled=False)) > 0:\n raise ForbiddenException({'pointer': ''}, \"Sessions are disabled for this Event\")\n", "issue": "Session allowed to be created without an associated Track\n**Describe the bug**\r\n<!-- A clear and concise description of what the bug is. -->\r\nSessions can be created without being associated with a Track \r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nIt shouldn't be possible\n", "before_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\n\nfrom app.api.bootstrap import api\nfrom app.api.events import Event\nfrom app.api.helpers.db import safe_query, get_count\nfrom app.api.helpers.exceptions import ForbiddenException\nfrom app.api.helpers.mail import send_email_new_session, send_email_session_accept_reject\nfrom app.api.helpers.notification import send_notif_new_session_organizer, send_notif_session_accept_reject\nfrom app.api.helpers.permissions import current_identity\nfrom app.api.helpers.query import event_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.sessions import SessionSchema\nfrom app.models import db\nfrom app.models.microlocation import Microlocation\nfrom app.models.session import Session\nfrom app.models.session_type import SessionType\nfrom app.models.speaker import Speaker\nfrom app.models.track import Track\nfrom app.models.user import User\nfrom app.settings import get_settings\n\n\nclass SessionListPost(ResourceList):\n \"\"\"\n List Sessions\n \"\"\"\n def before_post(self, args, kwargs, data):\n \"\"\"\n before post method to check for required relationship and proper permission\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n data['creator_id'] = current_identity.id\n if get_count(db.session.query(Event).filter_by(id=int(data['event']), is_sessions_speakers_enabled=False)) > 0:\n raise ForbiddenException({'pointer': ''}, \"Sessions are disabled for this Event\")\n\n def after_create_object(self, session, data, view_kwargs):\n \"\"\"\n method to send email for creation of new session\n mails session link to the concerned user\n :param session:\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n if session.event.get_organizer():\n event_name = session.event.name\n organizer = session.event.get_organizer()\n organizer_email = organizer.email\n frontend_url = get_settings()['frontend_url']\n link = \"{}/events/{}/sessions/{}\"\\\n .format(frontend_url, session.event_id, session.id)\n send_email_new_session(organizer_email, event_name, link)\n send_notif_new_session_organizer(organizer, event_name, link)\n\n decorators = (api.has_permission('create_event'),)\n schema = SessionSchema\n data_layer = {'session': db.session,\n 'model': Session,\n 'methods': {'after_create_object': after_create_object\n }}\n\n\nclass SessionList(ResourceList):\n \"\"\"\n List Sessions\n \"\"\"\n\n def query(self, view_kwargs):\n \"\"\"\n query method for SessionList class\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = 
self.session.query(Session)\n if view_kwargs.get('track_id') is not None:\n track = safe_query(self, Track, 'id', view_kwargs['track_id'], 'track_id')\n query_ = query_.join(Track).filter(Track.id == track.id)\n if view_kwargs.get('session_type_id') is not None:\n session_type = safe_query(self, SessionType, 'id', view_kwargs['session_type_id'], 'session_type_id')\n query_ = query_.join(SessionType).filter(SessionType.id == session_type.id)\n if view_kwargs.get('microlocation_id') is not None:\n microlocation = safe_query(self, Microlocation, 'id', view_kwargs['microlocation_id'], 'microlocation_id')\n query_ = query_.join(Microlocation).filter(Microlocation.id == microlocation.id)\n if view_kwargs.get('user_id') is not None:\n user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')\n query_ = query_.join(User).filter(User.id == user.id)\n query_ = event_query(self, query_, view_kwargs)\n if view_kwargs.get('speaker_id'):\n speaker = safe_query(self, Speaker, 'id', view_kwargs['speaker_id'], 'speaker_id')\n # session-speaker :: many-to-many relationship\n query_ = Session.query.filter(Session.speakers.any(id=speaker.id))\n\n return query_\n\n view_kwargs = True\n methods = ['GET']\n schema = SessionSchema\n data_layer = {'session': db.session,\n 'model': Session,\n 'methods': {\n 'query': query\n }}\n\n\nclass SessionDetail(ResourceDetail):\n \"\"\"\n Session detail by id\n \"\"\"\n def before_get_object(self, view_kwargs):\n \"\"\"\n before get method to get the resource id for fetching details\n :param view_kwargs:\n :return:\n \"\"\"\n if view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'identifier')\n view_kwargs['event_id'] = event.id\n\n def after_update_object(self, session, data, view_kwargs):\n \"\"\" Send email if session accepted or rejected \"\"\"\n\n if 'state' in data and data.get('send_email', None) and (session.state == 'accepted' or\n session.state == 'rejected'):\n # Email for speaker\n speakers = session.speakers\n for speaker in speakers:\n frontend_url = get_settings()['frontend_url']\n link = \"{}/events/{}/sessions/{}\" \\\n .format(frontend_url, session.event_id, session.id)\n send_email_session_accept_reject(speaker.email, session, link)\n send_notif_session_accept_reject(speaker, session.title, session.state, link)\n\n # Email for organizer\n if session.event.get_organizer():\n organizer = session.event.get_organizer()\n organizer_email = organizer.email\n frontend_url = get_settings()['frontend_url']\n link = \"{}/events/{}/sessions/{}\" \\\n .format(frontend_url, session.event_id, session.id)\n send_email_session_accept_reject(organizer_email, session,\n link)\n send_notif_session_accept_reject(organizer, session.title,\n session.state, link)\n\n decorators = (api.has_permission('is_speaker_for_session', methods=\"PATCH,DELETE\"),)\n schema = SessionSchema\n data_layer = {'session': db.session,\n 'model': Session,\n 'methods': {'before_get_object': before_get_object,\n 'after_update_object': after_update_object}}\n\n\nclass SessionRelationshipRequired(ResourceRelationship):\n \"\"\"\n Session Relationship\n \"\"\"\n schema = SessionSchema\n decorators = (api.has_permission('is_speaker_for_session', methods=\"PATCH,DELETE\"),)\n methods = ['GET', 'PATCH']\n data_layer = {'session': db.session,\n 'model': Session}\n\n\nclass SessionRelationshipOptional(ResourceRelationship):\n \"\"\"\n Session Relationship\n \"\"\"\n schema = SessionSchema\n decorators = 
(api.has_permission('is_speaker_for_session', methods=\"PATCH,DELETE\"),)\n data_layer = {'session': db.session,\n 'model': Session}\n", "path": "app/api/sessions.py"}], "after_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\n\nfrom app.api.bootstrap import api\nfrom app.api.events import Event\nfrom app.api.helpers.db import safe_query, get_count\nfrom app.api.helpers.exceptions import ForbiddenException\nfrom app.api.helpers.mail import send_email_new_session, send_email_session_accept_reject\nfrom app.api.helpers.notification import send_notif_new_session_organizer, send_notif_session_accept_reject\nfrom app.api.helpers.permissions import current_identity\nfrom app.api.helpers.query import event_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.sessions import SessionSchema\nfrom app.models import db\nfrom app.models.microlocation import Microlocation\nfrom app.models.session import Session\nfrom app.models.session_type import SessionType\nfrom app.models.speaker import Speaker\nfrom app.models.track import Track\nfrom app.models.user import User\nfrom app.settings import get_settings\n\n\nclass SessionListPost(ResourceList):\n \"\"\"\n List Sessions\n \"\"\"\n def before_post(self, args, kwargs, data):\n \"\"\"\n before post method to check for required relationship and proper permission\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event', 'track'], data)\n data['creator_id'] = current_identity.id\n if get_count(db.session.query(Event).filter_by(id=int(data['event']), is_sessions_speakers_enabled=False)) > 0:\n raise ForbiddenException({'pointer': ''}, \"Sessions are disabled for this Event\")\n\n def after_create_object(self, session, data, view_kwargs):\n \"\"\"\n method to send email for creation of new session\n mails session link to the concerned user\n :param session:\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n if session.event.get_organizer():\n event_name = session.event.name\n organizer = session.event.get_organizer()\n organizer_email = organizer.email\n frontend_url = get_settings()['frontend_url']\n link = \"{}/events/{}/sessions/{}\"\\\n .format(frontend_url, session.event_id, session.id)\n send_email_new_session(organizer_email, event_name, link)\n send_notif_new_session_organizer(organizer, event_name, link)\n\n decorators = (api.has_permission('create_event'),)\n schema = SessionSchema\n data_layer = {'session': db.session,\n 'model': Session,\n 'methods': {'after_create_object': after_create_object\n }}\n\n\nclass SessionList(ResourceList):\n \"\"\"\n List Sessions\n \"\"\"\n\n def query(self, view_kwargs):\n \"\"\"\n query method for SessionList class\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(Session)\n if view_kwargs.get('track_id') is not None:\n track = safe_query(self, Track, 'id', view_kwargs['track_id'], 'track_id')\n query_ = query_.join(Track).filter(Track.id == track.id)\n if view_kwargs.get('session_type_id') is not None:\n session_type = safe_query(self, SessionType, 'id', view_kwargs['session_type_id'], 'session_type_id')\n query_ = query_.join(SessionType).filter(SessionType.id == session_type.id)\n if view_kwargs.get('microlocation_id') is not None:\n microlocation = safe_query(self, Microlocation, 'id', view_kwargs['microlocation_id'], 'microlocation_id')\n query_ = query_.join(Microlocation).filter(Microlocation.id == microlocation.id)\n if view_kwargs.get('user_id') is 
not None:\n user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')\n query_ = query_.join(User).filter(User.id == user.id)\n query_ = event_query(self, query_, view_kwargs)\n if view_kwargs.get('speaker_id'):\n speaker = safe_query(self, Speaker, 'id', view_kwargs['speaker_id'], 'speaker_id')\n # session-speaker :: many-to-many relationship\n query_ = Session.query.filter(Session.speakers.any(id=speaker.id))\n\n return query_\n\n view_kwargs = True\n methods = ['GET']\n schema = SessionSchema\n data_layer = {'session': db.session,\n 'model': Session,\n 'methods': {\n 'query': query\n }}\n\n\nclass SessionDetail(ResourceDetail):\n \"\"\"\n Session detail by id\n \"\"\"\n def before_get_object(self, view_kwargs):\n \"\"\"\n before get method to get the resource id for fetching details\n :param view_kwargs:\n :return:\n \"\"\"\n if view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'identifier')\n view_kwargs['event_id'] = event.id\n\n def after_update_object(self, session, data, view_kwargs):\n \"\"\" Send email if session accepted or rejected \"\"\"\n\n if 'state' in data and data.get('send_email', None) and (session.state == 'accepted' or\n session.state == 'rejected'):\n # Email for speaker\n speakers = session.speakers\n for speaker in speakers:\n frontend_url = get_settings()['frontend_url']\n link = \"{}/events/{}/sessions/{}\" \\\n .format(frontend_url, session.event_id, session.id)\n send_email_session_accept_reject(speaker.email, session, link)\n send_notif_session_accept_reject(speaker, session.title, session.state, link)\n\n # Email for organizer\n if session.event.get_organizer():\n organizer = session.event.get_organizer()\n organizer_email = organizer.email\n frontend_url = get_settings()['frontend_url']\n link = \"{}/events/{}/sessions/{}\" \\\n .format(frontend_url, session.event_id, session.id)\n send_email_session_accept_reject(organizer_email, session,\n link)\n send_notif_session_accept_reject(organizer, session.title,\n session.state, link)\n\n decorators = (api.has_permission('is_speaker_for_session', methods=\"PATCH,DELETE\"),)\n schema = SessionSchema\n data_layer = {'session': db.session,\n 'model': Session,\n 'methods': {'before_get_object': before_get_object,\n 'after_update_object': after_update_object}}\n\n\nclass SessionRelationshipRequired(ResourceRelationship):\n \"\"\"\n Session Relationship\n \"\"\"\n schema = SessionSchema\n decorators = (api.has_permission('is_speaker_for_session', methods=\"PATCH,DELETE\"),)\n methods = ['GET', 'PATCH']\n data_layer = {'session': db.session,\n 'model': Session}\n\n\nclass SessionRelationshipOptional(ResourceRelationship):\n \"\"\"\n Session Relationship\n \"\"\"\n schema = SessionSchema\n decorators = (api.has_permission('is_speaker_for_session', methods=\"PATCH,DELETE\"),)\n data_layer = {'session': db.session,\n 'model': Session}\n", "path": "app/api/sessions.py"}]} | 2,224 | 133 |
gh_patches_debug_13467 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-155 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
httplib exceptions
Running my bot continuously, I sometimes get rare exceptions. This traceback was with Python 2.7 and python-telegram-bot 3.2:
```
File "/home/rahiel/BismillahBot/bismillah.py", line 99, in send_quran
bot.sendMessage(chat_id=chat_id, text=text)
File "/home/rahiel/BismillahBot/venv/local/lib/python2.7/site-packages/telegram/bot.py", line 127, in decorator
result = func(self, *args, **kwargs)
File "/home/rahiel/BismillahBot/venv/local/lib/python2.7/site-packages/telegram/bot.py", line 159, in decorator
result = request.post(url, data)
File "/home/rahiel/BismillahBot/venv/local/lib/python2.7/site-packages/telegram/utils/request.py", line 67, in decorator
return func(*args, **kwargs)
File "/home/rahiel/BismillahBot/venv/local/lib/python2.7/site-packages/telegram/utils/request.py", line 140, in post
result = urlopen(request, timeout=timeout).read()
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 404, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 422, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1222, in https_open
return self.do_open(httplib.HTTPSConnection, req)
File "/usr/lib/python2.7/urllib2.py", line 1187, in do_open
r = h.getresponse(buffering=True)
File "/usr/lib/python2.7/httplib.py", line 1051, in getresponse
response.begin()
File "/usr/lib/python2.7/httplib.py", line 415, in begin
version, status, reason = self._read_status()
File "/usr/lib/python2.7/httplib.py", line 379, in _read_status
raise BadStatusLine(line)
httplib.BadStatusLine: ''
```
Using version 2.9 of the library I got this traceback in the past:
```
File "/home/rahiel/BismillahBot/bismillah.py", line 122, in upload
v = bot.sendAudio(audio=f, **kwargs)["audio"]["file_id"]
File "/home/rahiel/BismillahBot/venv/local/lib/python2.7/site-packages/telegram/bot.py", line 126, in decorator
result = func(self, *args, **kwargs)
File "/home/rahiel/BismillahBot/venv/local/lib/python2.7/site-packages/telegram/bot.py", line 158, in decorator
result = request.post(url, data)
File "/home/rahiel/BismillahBot/venv/local/lib/python2.7/site-packages/telegram/utils/request.py", line 108, in post
message = _parse(error.read())
File "/usr/lib/python2.7/socket.py", line 351, in read
data = self._sock.recv(rbufsize)
File "/usr/lib/python2.7/httplib.py", line 549, in read
return self._read_chunked(amt)
File "/usr/lib/python2.7/httplib.py", line 603, in _read_chunked
raise IncompleteRead(''.join(value))
httplib.IncompleteRead: IncompleteRead(0 bytes read)
```
Maybe we should catch these exceptions and re-raise them as a TelegramError for HTTP errors? I was puzzled by the second traceback, because I was getting it frequently, but that stopped once I updated the library.
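For illustration only, a minimal sketch of that suggestion (the import fallback and decorator shape are assumptions modelled on the library's `telegram.utils.request` module, not the actual fix): since both `BadStatusLine` and `IncompleteRead` are subclasses of `HTTPException`, a single handler would cover both tracebacks.

```python
try:
    from httplib import HTTPException       # Python 2
except ImportError:
    from http.client import HTTPException   # Python 3

from telegram import TelegramError


def _try_except_req(func):
    """Decorator that re-raises low-level HTTP protocol errors as TelegramError."""
    def decorator(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except HTTPException as error:
            # Covers httplib.BadStatusLine and httplib.IncompleteRead, among others
            raise TelegramError('HTTPException: {0!r}'.format(error))
    return decorator
```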
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `telegram/utils/request.py`
Content:
```
1 #!/usr/bin/env python
2 # pylint: disable=no-name-in-module,unused-import
3 #
4 # A library that provides a Python interface to the Telegram Bot API
5 # Copyright (C) 2015-2016
6 # Leandro Toledo de Souza <[email protected]>
7 #
8 # This program is free software: you can redistribute it and/or modify
9 # it under the terms of the GNU Lesser Public License as published by
10 # the Free Software Foundation, either version 3 of the License, or
11 # (at your option) any later version.
12 #
13 # This program is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU Lesser Public License for more details.
17 #
18 # You should have received a copy of the GNU Lesser Public License
19 # along with this program. If not, see [http://www.gnu.org/licenses/].
20
21 """This module contains methods to make POST and GET requests"""
22
23 import functools
24 import json
25 import socket
26 from ssl import SSLError
27
28 try:
29 from urllib.request import urlopen, urlretrieve, Request
30 from urllib.error import HTTPError
31 except ImportError:
32 from urllib import urlretrieve
33 from urllib2 import urlopen, Request
34 from urllib2 import HTTPError
35
36 from telegram import (InputFile, TelegramError)
37
38
39 def _parse(json_data):
40 """Try and parse the JSON returned from Telegram and return an empty
41 dictionary if there is any error.
42
43 Args:
44 url:
45 urllib.urlopen object
46
47 Returns:
48 A JSON parsed as Python dict with results.
49 """
50 decoded_s = json_data.decode('utf-8')
51 try:
52 data = json.loads(decoded_s)
53 except ValueError:
54 raise TelegramError('Invalid server response')
55
56 if not data.get('ok') and data.get('description'):
57 return data['description']
58
59 return data['result']
60
61
62 def _try_except_req(func):
63 """Decorator for requests to handle known exceptions"""
64 @functools.wraps(func)
65 def decorator(*args, **kwargs):
66 try:
67 return func(*args, **kwargs)
68 except HTTPError as error:
69 if error.getcode() == 403:
70 raise TelegramError('Unauthorized')
71 if error.getcode() == 502:
72 raise TelegramError('Bad Gateway')
73
74 try:
75 message = _parse(error.read())
76 except ValueError:
77 message = 'Unknown HTTPError {0}'.format(error.getcode())
78
79 raise TelegramError(message)
80 except (SSLError, socket.timeout) as error:
81 if "operation timed out" in str(error):
82 raise TelegramError("Timed out")
83
84 raise TelegramError(str(error))
85
86 return decorator
87
88
89 @_try_except_req
90 def get(url):
91 """Request an URL.
92 Args:
93 url:
94 The web location we want to retrieve.
95
96 Returns:
97 A JSON object.
98 """
99 result = urlopen(url).read()
100
101 return _parse(result)
102
103
104 @_try_except_req
105 def post(url,
106 data,
107 network_delay=2.):
108 """Request an URL.
109 Args:
110 url:
111 The web location we want to retrieve.
112 data:
113 A dict of (str, unicode) key/value pairs.
114 network_delay:
115 Additional timeout in seconds to allow the response from Telegram to
116 take some time.
117
118 Returns:
119 A JSON object.
120 """
121
122 # Add time to the timeout of urlopen to allow data to be transferred over
123 # the network.
124 if 'timeout' in data:
125 timeout = data['timeout'] + network_delay
126 else:
127 timeout = None
128
129 if InputFile.is_inputfile(data):
130 data = InputFile(data)
131 request = Request(url,
132 data=data.to_form(),
133 headers=data.headers)
134 else:
135 data = json.dumps(data)
136 request = Request(url,
137 data=data.encode(),
138 headers={'Content-Type': 'application/json'})
139
140 result = urlopen(request, timeout=timeout).read()
141 return _parse(result)
142
143
144 @_try_except_req
145 def download(url,
146 filename):
147 """Download a file by its URL.
148 Args:
149 url:
150 The web location we want to retrieve.
151
152 filename:
153 The filename wihtin the path to download the file.
154 """
155
156 urlretrieve(url, filename)
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/telegram/utils/request.py b/telegram/utils/request.py
--- a/telegram/utils/request.py
+++ b/telegram/utils/request.py
@@ -25,6 +25,13 @@
import socket
from ssl import SSLError
+try:
+ # python2
+ from httplib import HTTPException
+except ImportError:
+ # python3
+ from http.client import HTTPException
+
try:
from urllib.request import urlopen, urlretrieve, Request
from urllib.error import HTTPError
@@ -82,6 +89,8 @@
raise TelegramError("Timed out")
raise TelegramError(str(error))
+ except HTTPException as error:
+ raise TelegramError('HTTPException: {0!r}'.format(error))
return decorator
| {"golden_diff": "diff --git a/telegram/utils/request.py b/telegram/utils/request.py\n--- a/telegram/utils/request.py\n+++ b/telegram/utils/request.py\n@@ -25,6 +25,13 @@\n import socket\n from ssl import SSLError\n \n+try:\n+ # python2\n+ from httplib import HTTPException\n+except ImportError:\n+ # python3\n+ from http.client import HTTPException\n+\n try:\n from urllib.request import urlopen, urlretrieve, Request\n from urllib.error import HTTPError\n@@ -82,6 +89,8 @@\n raise TelegramError(\"Timed out\")\n \n raise TelegramError(str(error))\n+ except HTTPException as error:\n+ raise TelegramError('HTTPException: {0!r}'.format(error))\n \n return decorator\n", "issue": "httplib exceptions\nRunning my bot continuously, I sometimes get rare exceptions. This traceback was with Python 2.7 and python-telegram-bot 3.2:\n\n```\n File \"/home/rahiel/BismillahBot/bismillah.py\", line 99, in send_quran \n bot.sendMessage(chat_id=chat_id, text=text)\n File \"/home/rahiel/BismillahBot/venv/local/lib/python2.7/site-packages/telegram/bot.py\", line 127, in decorator \n result = func(self, *args, **kwargs) \n File \"/home/rahiel/BismillahBot/venv/local/lib/python2.7/site-packages/telegram/bot.py\", line 159, in decorator \n result = request.post(url, data) \n File \"/home/rahiel/BismillahBot/venv/local/lib/python2.7/site-packages/telegram/utils/request.py\", line 67, in decorator \n return func(*args, **kwargs) \n File \"/home/rahiel/BismillahBot/venv/local/lib/python2.7/site-packages/telegram/utils/request.py\", line 140, in post \n result = urlopen(request, timeout=timeout).read() \n File \"/usr/lib/python2.7/urllib2.py\", line 127, in urlopen \n return _opener.open(url, data, timeout) \n File \"/usr/lib/python2.7/urllib2.py\", line 404, in open \n response = self._open(req, data) \n File \"/usr/lib/python2.7/urllib2.py\", line 422, in _open \n '_open', req) \n File \"/usr/lib/python2.7/urllib2.py\", line 382, in _call_chain \n result = func(*args) \n File \"/usr/lib/python2.7/urllib2.py\", line 1222, in https_open \n return self.do_open(httplib.HTTPSConnection, req) \n File \"/usr/lib/python2.7/urllib2.py\", line 1187, in do_open \n r = h.getresponse(buffering=True) \n File \"/usr/lib/python2.7/httplib.py\", line 1051, in getresponse \n response.begin() \n File \"/usr/lib/python2.7/httplib.py\", line 415, in begin \n version, status, reason = self._read_status() \n File \"/usr/lib/python2.7/httplib.py\", line 379, in _read_status \n raise BadStatusLine(line) \nhttplib.BadStatusLine: '' \n```\n\nUsing version 2.9 of the library I got this traceback in the past:\n\n```\n File \"/home/rahiel/BismillahBot/bismillah.py\", line 122, in upload\n v = bot.sendAudio(audio=f, **kwargs)[\"audio\"][\"file_id\"]\n File \"/home/rahiel/BismillahBot/venv/local/lib/python2.7/site-packages/telegram/bot.py\", line 126, in decorator\n result = func(self, *args, **kwargs)\n File \"/home/rahiel/BismillahBot/venv/local/lib/python2.7/site-packages/telegram/bot.py\", line 158, in decorator\n result = request.post(url, data)\n File \"/home/rahiel/BismillahBot/venv/local/lib/python2.7/site-packages/telegram/utils/request.py\", line 108, in post\n message = _parse(error.read())\n File \"/usr/lib/python2.7/socket.py\", line 351, in read\n data = self._sock.recv(rbufsize)\n File \"/usr/lib/python2.7/httplib.py\", line 549, in read\n return self._read_chunked(amt)\n File \"/usr/lib/python2.7/httplib.py\", line 603, in _read_chunked\n raise IncompleteRead(''.join(value))\nhttplib.IncompleteRead: IncompleteRead(0 bytes 
read)\n```\n\nMaybe we should catch these exceptions and reraise them as a TelegramError for http errors? I was puzzled by the second traceback, because I was getting it frequently, but that stopped since I updated the library.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# pylint: disable=no-name-in-module,unused-import\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2016\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\n\"\"\"This module contains methods to make POST and GET requests\"\"\"\n\nimport functools\nimport json\nimport socket\nfrom ssl import SSLError\n\ntry:\n from urllib.request import urlopen, urlretrieve, Request\n from urllib.error import HTTPError\nexcept ImportError:\n from urllib import urlretrieve\n from urllib2 import urlopen, Request\n from urllib2 import HTTPError\n\nfrom telegram import (InputFile, TelegramError)\n\n\ndef _parse(json_data):\n \"\"\"Try and parse the JSON returned from Telegram and return an empty\n dictionary if there is any error.\n\n Args:\n url:\n urllib.urlopen object\n\n Returns:\n A JSON parsed as Python dict with results.\n \"\"\"\n decoded_s = json_data.decode('utf-8')\n try:\n data = json.loads(decoded_s)\n except ValueError:\n raise TelegramError('Invalid server response')\n\n if not data.get('ok') and data.get('description'):\n return data['description']\n\n return data['result']\n\n\ndef _try_except_req(func):\n \"\"\"Decorator for requests to handle known exceptions\"\"\"\n @functools.wraps(func)\n def decorator(*args, **kwargs):\n try:\n return func(*args, **kwargs)\n except HTTPError as error:\n if error.getcode() == 403:\n raise TelegramError('Unauthorized')\n if error.getcode() == 502:\n raise TelegramError('Bad Gateway')\n\n try:\n message = _parse(error.read())\n except ValueError:\n message = 'Unknown HTTPError {0}'.format(error.getcode())\n\n raise TelegramError(message)\n except (SSLError, socket.timeout) as error:\n if \"operation timed out\" in str(error):\n raise TelegramError(\"Timed out\")\n\n raise TelegramError(str(error))\n\n return decorator\n\n\n@_try_except_req\ndef get(url):\n \"\"\"Request an URL.\n Args:\n url:\n The web location we want to retrieve.\n\n Returns:\n A JSON object.\n \"\"\"\n result = urlopen(url).read()\n\n return _parse(result)\n\n\n@_try_except_req\ndef post(url,\n data,\n network_delay=2.):\n \"\"\"Request an URL.\n Args:\n url:\n The web location we want to retrieve.\n data:\n A dict of (str, unicode) key/value pairs.\n network_delay:\n Additional timeout in seconds to allow the response from Telegram to\n take some time.\n\n Returns:\n A JSON object.\n \"\"\"\n\n # Add time to the timeout of urlopen to allow data to be transferred over\n # the network.\n if 'timeout' in data:\n timeout = data['timeout'] + network_delay\n else:\n timeout = None\n\n if InputFile.is_inputfile(data):\n data = 
InputFile(data)\n request = Request(url,\n data=data.to_form(),\n headers=data.headers)\n else:\n data = json.dumps(data)\n request = Request(url,\n data=data.encode(),\n headers={'Content-Type': 'application/json'})\n\n result = urlopen(request, timeout=timeout).read()\n return _parse(result)\n\n\n@_try_except_req\ndef download(url,\n filename):\n \"\"\"Download a file by its URL.\n Args:\n url:\n The web location we want to retrieve.\n\n filename:\n The filename wihtin the path to download the file.\n \"\"\"\n\n urlretrieve(url, filename)\n", "path": "telegram/utils/request.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# pylint: disable=no-name-in-module,unused-import\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2016\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\n\"\"\"This module contains methods to make POST and GET requests\"\"\"\n\nimport functools\nimport json\nimport socket\nfrom ssl import SSLError\n\ntry:\n # python2\n from httplib import HTTPException\nexcept ImportError:\n # python3\n from http.client import HTTPException\n\ntry:\n from urllib.request import urlopen, urlretrieve, Request\n from urllib.error import HTTPError\nexcept ImportError:\n from urllib import urlretrieve\n from urllib2 import urlopen, Request\n from urllib2 import HTTPError\n\nfrom telegram import (InputFile, TelegramError)\n\n\ndef _parse(json_data):\n \"\"\"Try and parse the JSON returned from Telegram and return an empty\n dictionary if there is any error.\n\n Args:\n url:\n urllib.urlopen object\n\n Returns:\n A JSON parsed as Python dict with results.\n \"\"\"\n decoded_s = json_data.decode('utf-8')\n try:\n data = json.loads(decoded_s)\n except ValueError:\n raise TelegramError('Invalid server response')\n\n if not data.get('ok') and data.get('description'):\n return data['description']\n\n return data['result']\n\n\ndef _try_except_req(func):\n \"\"\"Decorator for requests to handle known exceptions\"\"\"\n @functools.wraps(func)\n def decorator(*args, **kwargs):\n try:\n return func(*args, **kwargs)\n except HTTPError as error:\n if error.getcode() == 403:\n raise TelegramError('Unauthorized')\n if error.getcode() == 502:\n raise TelegramError('Bad Gateway')\n\n try:\n message = _parse(error.read())\n except ValueError:\n message = 'Unknown HTTPError {0}'.format(error.getcode())\n\n raise TelegramError(message)\n except (SSLError, socket.timeout) as error:\n if \"operation timed out\" in str(error):\n raise TelegramError(\"Timed out\")\n\n raise TelegramError(str(error))\n except HTTPException as error:\n raise TelegramError('HTTPException: {0!r}'.format(error))\n\n return decorator\n\n\n@_try_except_req\ndef get(url):\n \"\"\"Request an URL.\n Args:\n url:\n The web location we want to retrieve.\n\n Returns:\n A JSON object.\n \"\"\"\n result = urlopen(url).read()\n\n return 
_parse(result)\n\n\n@_try_except_req\ndef post(url,\n data,\n network_delay=2.):\n \"\"\"Request an URL.\n Args:\n url:\n The web location we want to retrieve.\n data:\n A dict of (str, unicode) key/value pairs.\n network_delay:\n Additional timeout in seconds to allow the response from Telegram to\n take some time.\n\n Returns:\n A JSON object.\n \"\"\"\n\n # Add time to the timeout of urlopen to allow data to be transferred over\n # the network.\n if 'timeout' in data:\n timeout = data['timeout'] + network_delay\n else:\n timeout = None\n\n if InputFile.is_inputfile(data):\n data = InputFile(data)\n request = Request(url,\n data=data.to_form(),\n headers=data.headers)\n else:\n data = json.dumps(data)\n request = Request(url,\n data=data.encode(),\n headers={'Content-Type': 'application/json'})\n\n result = urlopen(request, timeout=timeout).read()\n return _parse(result)\n\n\n@_try_except_req\ndef download(url,\n filename):\n \"\"\"Download a file by its URL.\n Args:\n url:\n The web location we want to retrieve.\n\n filename:\n The filename wihtin the path to download the file.\n \"\"\"\n\n urlretrieve(url, filename)\n", "path": "telegram/utils/request.py"}]} | 2,549 | 174 |
gh_patches_debug_30011 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-1889 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Teams Task Module - Deserialization Error on Teams mobile app for iOS
## Version
botbuilder-integration-aiohttp 4.14.0
Python 3.8.6
## Describe the bug
Error when loading the Task Module on iOS 14.8.1 / MS Teams v3.20.0
## To Reproduce
1. Deploy [sample bot 54.teams-task-module](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/python/54.teams-task-module)
2. Say hello and click on _Adaptive Card_ button
3. A deserialization error occurs on iOS 14.8.1 / Microsoft Teams v3.20.0

## Traceback
_(file locations prefix intentionally removed)_
```
File "test_teams_task/env/lib/site-packages/msrest/serialization.py", line 1293, in _deserialize
found_value = key_extractor(attr, attr_desc, data)
File "test_teams_task/env/lib/site-packages/msrest/serialization.py", line 1064, in rest_key_extractor
return working_data.get(key)
AttributeError: 'str' object has no attribute 'get'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test_teams_task/env/lib/site-packages/botbuilder/core/bot_adapter.py", line 129, in run_pipeline
context, callback
File "test_teams_task/env/lib/site-packages/botbuilder/core/middleware_set.py", line 69, in receive_activity_with_status
return await self.receive_activity_internal(context, callback)
File "test_teams_task/env/lib/site-packages/botbuilder/core/middleware_set.py", line 79, in receive_activity_internal
return await callback(context)
File "test_teams_task/env/lib/site-packages/botbuilder/core/activity_handler.py", line 78, in on_turn
invoke_response = await self.on_invoke_activity(turn_context)
File "test_teams_task/env/lib/site-packages/botbuilder/core/teams/teams_activity_handler.py", line 155, in on_invoke_activity
TaskModuleRequest, turn_context.activity.value
File "test_teams_task/env/lib/site-packages/botbuilder/core/serializer_helper.py", line 28, in deserializer_helper
return deserializer(msrest_cls.__name__, dict_to_deserialize)
File "test_teams_task/env/lib/site-packages/msrest/serialization.py", line 1233, in __call__
return self._deserialize(target_obj, data)
File "test_teams_task/env/lib/site-packages/msrest/serialization.py", line 1299, in _deserialize
value = self.deserialize_data(raw_value, attr_desc['type'])
File "test_teams_task/env/lib/site-packages/msrest/serialization.py", line 1468, in deserialize_data
return self._deserialize(obj_type, data)
File "test_teams_task/env/lib/site-packages/msrest/serialization.py", line 1303, in _deserialize
raise_with_traceback(DeserializationError, msg, err)
File "test_teams_task/env/lib/site-packages/msrest/exceptions.py", line 51, in raise_with_traceback
raise error.with_traceback(exc_traceback)
File "test_teams_task/env/lib/site-packages/msrest/serialization.py", line 1293, in _deserialize
found_value = key_extractor(attr, attr_desc, data)
File "test_teams_task/env/lib/site-packages/msrest/serialization.py", line 1064, in rest_key_extractor
return working_data.get(key)
msrest.exceptions.DeserializationError: Unable to deserialize to object: type, AttributeError: 'str' object has no attribute 'get'
```
## Expected behavior
This sample bot raises no error when interacting on the following platforms:
- Windows 10 (Desktop app, Firefox, Chrome)
- macOS (Chrome)
- Android (Mobile app)

It was possible to interact with the Task Module on iOS 14.8.1 / Microsoft Teams v3.20.0 when deploying these samples (Python not included):
https://docs.microsoft.com/en-us/samples/officedev/microsoft-teams-samples/ms-teams-task-sample/
## Additional context
Initially the error was detected on a bot in production currently deployed in Azure. Since the error message is the same when running [bot sample 54.teams-task-module](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/python/54.teams-task-module), for the sake of repro we can take this example.
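One plausible reading of the traceback is that the Teams client on iOS posts an empty string where a nested object is expected, which makes msrest's `rest_key_extractor` call `.get()` on a `str`. Below is a minimal, hypothetical sketch of that failure mode; the `Inner`/`Outer` models are invented for illustration and are not botbuilder types.

```python
from msrest.exceptions import DeserializationError
from msrest.serialization import Model, Deserializer


class Inner(Model):
    _attribute_map = {"theme": {"key": "theme", "type": "str"}}

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.theme = kwargs.get("theme")


class Outer(Model):
    _attribute_map = {"context": {"key": "context", "type": "Inner"}}

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.context = kwargs.get("context")


deserializer = Deserializer({"Inner": Inner, "Outer": Outer})

ok = deserializer("Outer", {"context": {"theme": "dark"}})  # nested dict deserializes fine
print(ok.context.theme)

try:
    deserializer("Outer", {"context": ""})  # empty string instead of an object
except DeserializationError as err:
    print(err)  # ... AttributeError: 'str' object has no attribute 'get'
```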
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libraries/botbuilder-core/botbuilder/core/serializer_helper.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from inspect import getmembers
5 from typing import Type
6 from enum import Enum
7
8 from msrest.serialization import Model, Deserializer, Serializer
9
10 import botbuilder.schema as schema
11 import botbuilder.schema.teams as teams_schema
12
13 DEPENDICIES = [
14 schema_cls
15 for key, schema_cls in getmembers(schema)
16 if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))
17 ]
18 DEPENDICIES += [
19 schema_cls
20 for key, schema_cls in getmembers(teams_schema)
21 if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))
22 ]
23 DEPENDICIES_DICT = {dependency.__name__: dependency for dependency in DEPENDICIES}
24
25
26 def deserializer_helper(msrest_cls: Type[Model], dict_to_deserialize: dict) -> Model:
27 deserializer = Deserializer(DEPENDICIES_DICT)
28 return deserializer(msrest_cls.__name__, dict_to_deserialize)
29
30
31 def serializer_helper(object_to_serialize: Model) -> dict:
32 if object_to_serialize is None:
33 return None
34
35 serializer = Serializer(DEPENDICIES_DICT)
36 # pylint: disable=protected-access
37 return serializer._serialize(object_to_serialize)
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libraries/botbuilder-core/botbuilder/core/serializer_helper.py b/libraries/botbuilder-core/botbuilder/core/serializer_helper.py
--- a/libraries/botbuilder-core/botbuilder/core/serializer_helper.py
+++ b/libraries/botbuilder-core/botbuilder/core/serializer_helper.py
@@ -1,6 +1,6 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
-
+from copy import copy
from inspect import getmembers
from typing import Type
from enum import Enum
@@ -25,6 +25,9 @@
def deserializer_helper(msrest_cls: Type[Model], dict_to_deserialize: dict) -> Model:
deserializer = Deserializer(DEPENDICIES_DICT)
+ _clean_data_for_serialization(
+ deserializer.dependencies[msrest_cls.__name__], dict_to_deserialize
+ )
return deserializer(msrest_cls.__name__, dict_to_deserialize)
@@ -35,3 +38,21 @@
serializer = Serializer(DEPENDICIES_DICT)
# pylint: disable=protected-access
return serializer._serialize(object_to_serialize)
+
+
+def _clean_data_for_serialization(msrest_cls: Type[Model], dict_to_deserialize: dict):
+ # pylint: disable=protected-access
+ # Clean channel response of empty strings for expected objects.
+ if not isinstance(dict_to_deserialize, dict):
+ return
+ serialization_model = copy(msrest_cls._attribute_map)
+ for key, value in msrest_cls._attribute_map.items():
+ if key != value["key"]:
+ serialization_model[value["key"]] = value
+ for prop, prop_value in dict_to_deserialize.items():
+ if (
+ prop in serialization_model
+ and serialization_model[prop]["type"] in DEPENDICIES_DICT
+ and not prop_value
+ ):
+ dict_to_deserialize[prop] = None
| {"golden_diff": "diff --git a/libraries/botbuilder-core/botbuilder/core/serializer_helper.py b/libraries/botbuilder-core/botbuilder/core/serializer_helper.py\n--- a/libraries/botbuilder-core/botbuilder/core/serializer_helper.py\n+++ b/libraries/botbuilder-core/botbuilder/core/serializer_helper.py\n@@ -1,6 +1,6 @@\n # Copyright (c) Microsoft Corporation. All rights reserved.\n # Licensed under the MIT License.\n-\n+from copy import copy\n from inspect import getmembers\n from typing import Type\n from enum import Enum\n@@ -25,6 +25,9 @@\n \n def deserializer_helper(msrest_cls: Type[Model], dict_to_deserialize: dict) -> Model:\n deserializer = Deserializer(DEPENDICIES_DICT)\n+ _clean_data_for_serialization(\n+ deserializer.dependencies[msrest_cls.__name__], dict_to_deserialize\n+ )\n return deserializer(msrest_cls.__name__, dict_to_deserialize)\n \n \n@@ -35,3 +38,21 @@\n serializer = Serializer(DEPENDICIES_DICT)\n # pylint: disable=protected-access\n return serializer._serialize(object_to_serialize)\n+\n+\n+def _clean_data_for_serialization(msrest_cls: Type[Model], dict_to_deserialize: dict):\n+ # pylint: disable=protected-access\n+ # Clean channel response of empty strings for expected objects.\n+ if not isinstance(dict_to_deserialize, dict):\n+ return\n+ serialization_model = copy(msrest_cls._attribute_map)\n+ for key, value in msrest_cls._attribute_map.items():\n+ if key != value[\"key\"]:\n+ serialization_model[value[\"key\"]] = value\n+ for prop, prop_value in dict_to_deserialize.items():\n+ if (\n+ prop in serialization_model\n+ and serialization_model[prop][\"type\"] in DEPENDICIES_DICT\n+ and not prop_value\n+ ):\n+ dict_to_deserialize[prop] = None\n", "issue": "Teams Task Module - Deserialization Error on Teams mobile app for iOS\n## Version\r\nbotbuilder-integration-aiohttp 4.14.0\r\nPython 3.8.6 \r\n\r\n## Describe the bug\r\nError when loading Task Module on iOS iOS 14.8.1 / MS Teams v3.20.0\r\n\r\n## To Reproduce\r\n1. Deploy [sample bot 54.teams-task-module](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/python/54.teams-task-module) \r\n2. Say hello and click on _Adaptive Card_ button\r\n3. 
Deserialization Error when on iOS iOS 14.8.1 / Microsoft Teams v3.20.0\r\n\r\n\r\n## Traceback\r\n_(file locations prefix intentionally removed)_\r\n```\r\n File \"test_teams_task/env/lib/site-packages/msrest/serialization.py\", line 1293, in _deserialize\r\n found_value = key_extractor(attr, attr_desc, data)\r\n File \"test_teams_task/env/lib/site-packages/msrest/serialization.py\", line 1064, in rest_key_extractor\r\n return working_data.get(key)\r\nAttributeError: 'str' object has no attribute 'get'\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\n File \"test_teams_task/env/lib/site-packages/botbuilder/core/bot_adapter.py\", line 129, in run_pipeline\r\n context, callback\r\n File \"test_teams_task/env/lib/site-packages/botbuilder/core/middleware_set.py\", line 69, in receive_activity_with_status\r\n return await self.receive_activity_internal(context, callback)\r\n File \"test_teams_task/env/lib/site-packages/botbuilder/core/middleware_set.py\", line 79, in receive_activity_internal\r\n return await callback(context)\r\n File \"test_teams_task/env/lib/site-packages/botbuilder/core/activity_handler.py\", line 78, in on_turn\r\n invoke_response = await self.on_invoke_activity(turn_context)\r\n File \"test_teams_task/env/lib/site-packages/botbuilder/core/teams/teams_activity_handler.py\", line 155, in on_invoke_activity\r\n TaskModuleRequest, turn_context.activity.value\r\n File \"test_teams_task/env/lib/site-packages/botbuilder/core/serializer_helper.py\", line 28, in deserializer_helper\r\n return deserializer(msrest_cls.__name__, dict_to_deserialize)\r\n File \"test_teams_task/env/lib/site-packages/msrest/serialization.py\", line 1233, in __call__\r\n return self._deserialize(target_obj, data)\r\n File \"test_teams_task/env/lib/site-packages/msrest/serialization.py\", line 1299, in _deserialize\r\n value = self.deserialize_data(raw_value, attr_desc['type'])\r\n File \"test_teams_task/env/lib/site-packages/msrest/serialization.py\", line 1468, in deserialize_data\r\n return self._deserialize(obj_type, data)\r\n File \"test_teams_task/env/lib/site-packages/msrest/serialization.py\", line 1303, in _deserialize\r\n raise_with_traceback(DeserializationError, msg, err)\r\n File \"test_teams_task/env/lib/site-packages/msrest/exceptions.py\", line 51, in raise_with_traceback\r\n raise error.with_traceback(exc_traceback)\r\n File \"test_teams_task/env/lib/site-packages/msrest/serialization.py\", line 1293, in _deserialize\r\n found_value = key_extractor(attr, attr_desc, data)\r\n File \"test_teams_task/env/lib/site-packages/msrest/serialization.py\", line 1064, in rest_key_extractor\r\n return working_data.get(key)\r\nmsrest.exceptions.DeserializationError: Unable to deserialize to object: type, AttributeError: 'str' object has no attribute 'get'\r\n```\r\n\r\n## Expected behavior\r\nThis sample bot raises no error when interacting on the following platforms:\r\n- Windows 10 (Desktop app, Firefox, Chrome)\r\n- macOS (Chrome)\r\n- Android (Mobile app)\r\n\r\n\r\nIt was possible to interact with Task Module on iOS iOS 14.8.1 / Microsoft Teams v3.20.0 when deploying these samples (python not included):\r\nhttps://docs.microsoft.com/en-us/samples/officedev/microsoft-teams-samples/ms-teams-task-sample/\r\n\r\n## Additional context\r\nInitially the error was detected on a bot in production currently deployed in Azure. 
Since the error message is the same when running [bot sample 54.teams-task-module](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/python/54.teams-task-module), for the sake of repro we can take this example. \r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nfrom inspect import getmembers\nfrom typing import Type\nfrom enum import Enum\n\nfrom msrest.serialization import Model, Deserializer, Serializer\n\nimport botbuilder.schema as schema\nimport botbuilder.schema.teams as teams_schema\n\nDEPENDICIES = [\n schema_cls\n for key, schema_cls in getmembers(schema)\n if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))\n]\nDEPENDICIES += [\n schema_cls\n for key, schema_cls in getmembers(teams_schema)\n if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))\n]\nDEPENDICIES_DICT = {dependency.__name__: dependency for dependency in DEPENDICIES}\n\n\ndef deserializer_helper(msrest_cls: Type[Model], dict_to_deserialize: dict) -> Model:\n deserializer = Deserializer(DEPENDICIES_DICT)\n return deserializer(msrest_cls.__name__, dict_to_deserialize)\n\n\ndef serializer_helper(object_to_serialize: Model) -> dict:\n if object_to_serialize is None:\n return None\n\n serializer = Serializer(DEPENDICIES_DICT)\n # pylint: disable=protected-access\n return serializer._serialize(object_to_serialize)\n", "path": "libraries/botbuilder-core/botbuilder/core/serializer_helper.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\nfrom copy import copy\nfrom inspect import getmembers\nfrom typing import Type\nfrom enum import Enum\n\nfrom msrest.serialization import Model, Deserializer, Serializer\n\nimport botbuilder.schema as schema\nimport botbuilder.schema.teams as teams_schema\n\nDEPENDICIES = [\n schema_cls\n for key, schema_cls in getmembers(schema)\n if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))\n]\nDEPENDICIES += [\n schema_cls\n for key, schema_cls in getmembers(teams_schema)\n if isinstance(schema_cls, type) and issubclass(schema_cls, (Model, Enum))\n]\nDEPENDICIES_DICT = {dependency.__name__: dependency for dependency in DEPENDICIES}\n\n\ndef deserializer_helper(msrest_cls: Type[Model], dict_to_deserialize: dict) -> Model:\n deserializer = Deserializer(DEPENDICIES_DICT)\n _clean_data_for_serialization(\n deserializer.dependencies[msrest_cls.__name__], dict_to_deserialize\n )\n return deserializer(msrest_cls.__name__, dict_to_deserialize)\n\n\ndef serializer_helper(object_to_serialize: Model) -> dict:\n if object_to_serialize is None:\n return None\n\n serializer = Serializer(DEPENDICIES_DICT)\n # pylint: disable=protected-access\n return serializer._serialize(object_to_serialize)\n\n\ndef _clean_data_for_serialization(msrest_cls: Type[Model], dict_to_deserialize: dict):\n # pylint: disable=protected-access\n # Clean channel response of empty strings for expected objects.\n if not isinstance(dict_to_deserialize, dict):\n return\n serialization_model = copy(msrest_cls._attribute_map)\n for key, value in msrest_cls._attribute_map.items():\n if key != value[\"key\"]:\n serialization_model[value[\"key\"]] = value\n for prop, prop_value in dict_to_deserialize.items():\n if (\n prop in serialization_model\n and serialization_model[prop][\"type\"] in DEPENDICIES_DICT\n and not prop_value\n ):\n dict_to_deserialize[prop] = None\n", "path": 
"libraries/botbuilder-core/botbuilder/core/serializer_helper.py"}]} | 1,722 | 428 |
gh_patches_debug_2288 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-1577 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"Your Books" doesn't display my most recent title
**Describe the bug**
For the last couple of months, "Your Books" has not been showing anything I've read recently.
**To Reproduce**

**Expected behavior**
Expect to see the most recently read book under "Your Books".
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Instance**
Bookwyrm.social
**Additional context**
Add any other context about the problem here.
---
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
"Your Books" doesn't display my most recent title
**Describe the bug**
For the last couple of months, your books is not showing anything I've read recently
**To Reproduce**

**Expected behavior**
Expect to see the mostly recently read book on your books
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Instance**
Bookwrym.social
**Additional context**
Add any other context about the problem here.
---
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/views/feed.py`
Content:
```
1 """ non-interactive pages """
2 from django.contrib.auth.decorators import login_required
3 from django.core.paginator import Paginator
4 from django.db.models import Q
5 from django.http import HttpResponseNotFound, Http404
6 from django.shortcuts import get_object_or_404
7 from django.template.response import TemplateResponse
8 from django.utils import timezone
9 from django.utils.decorators import method_decorator
10 from django.views import View
11
12 from bookwyrm import activitystreams, forms, models
13 from bookwyrm.activitypub import ActivitypubResponse
14 from bookwyrm.settings import PAGE_LENGTH, STREAMS
15 from bookwyrm.suggested_users import suggested_users
16 from .helpers import get_user_from_username
17 from .helpers import is_api_request, is_bookwyrm_request
18
19
20 # pylint: disable= no-self-use
21 @method_decorator(login_required, name="dispatch")
22 class Feed(View):
23 """activity stream"""
24
25 def get(self, request, tab):
26 """user's homepage with activity feed"""
27 tab = [s for s in STREAMS if s["key"] == tab]
28 tab = tab[0] if tab else STREAMS[0]
29
30 activities = activitystreams.streams[tab["key"]].get_activity_stream(
31 request.user
32 )
33 paginated = Paginator(activities, PAGE_LENGTH)
34
35 suggestions = suggested_users.get_suggestions(request.user)
36
37 data = {
38 **feed_page_data(request.user),
39 **{
40 "user": request.user,
41 "activities": paginated.get_page(request.GET.get("page")),
42 "suggested_users": suggestions,
43 "tab": tab,
44 "streams": STREAMS,
45 "goal_form": forms.GoalForm(),
46 "path": f"/{tab['key']}",
47 },
48 }
49 return TemplateResponse(request, "feed/feed.html", data)
50
51
52 @method_decorator(login_required, name="dispatch")
53 class DirectMessage(View):
54 """dm view"""
55
56 def get(self, request, username=None):
57 """like a feed but for dms only"""
58 # remove fancy subclasses of status, keep just good ol' notes
59 activities = (
60 models.Status.privacy_filter(request.user, privacy_levels=["direct"])
61 .filter(
62 review__isnull=True,
63 comment__isnull=True,
64 quotation__isnull=True,
65 generatednote__isnull=True,
66 )
67 .order_by("-published_date")
68 )
69
70 user = None
71 if username:
72 try:
73 user = get_user_from_username(request.user, username)
74 except Http404:
75 pass
76 if user:
77 activities = activities.filter(Q(user=user) | Q(mention_users=user))
78
79 paginated = Paginator(activities, PAGE_LENGTH)
80 data = {
81 **feed_page_data(request.user),
82 **{
83 "user": request.user,
84 "partner": user,
85 "activities": paginated.get_page(request.GET.get("page")),
86 "path": "/direct-messages",
87 },
88 }
89 return TemplateResponse(request, "feed/direct_messages.html", data)
90
91
92 class Status(View):
93 """get posting"""
94
95 def get(self, request, username, status_id):
96 """display a particular status (and replies, etc)"""
97 user = get_user_from_username(request.user, username)
98 status = get_object_or_404(
99 models.Status.objects.select_subclasses(),
100 user=user,
101 id=status_id,
102 deleted=False,
103 )
104 # make sure the user is authorized to see the status
105 status.raise_visible_to_user(request.user)
106
107 if is_api_request(request):
108 return ActivitypubResponse(
109 status.to_activity(pure=not is_bookwyrm_request(request))
110 )
111
112 visible_thread = (
113 models.Status.privacy_filter(request.user)
114 .filter(thread_id=status.thread_id)
115 .values_list("id", flat=True)
116 )
117 visible_thread = list(visible_thread)
118
119 ancestors = models.Status.objects.select_subclasses().raw(
120 """
121 WITH RECURSIVE get_thread(depth, id, path) AS (
122
123 SELECT 1, st.id, ARRAY[st.id]
124 FROM bookwyrm_status st
125 WHERE id = '%s' AND id = ANY(%s)
126
127 UNION
128
129 SELECT (gt.depth + 1), st.reply_parent_id, path || st.id
130 FROM get_thread gt, bookwyrm_status st
131
132 WHERE st.id = gt.id AND depth < 5 AND st.id = ANY(%s)
133
134 )
135
136 SELECT * FROM get_thread ORDER BY path DESC;
137 """,
138 params=[status.reply_parent_id or 0, visible_thread, visible_thread],
139 )
140 children = models.Status.objects.select_subclasses().raw(
141 """
142 WITH RECURSIVE get_thread(depth, id, path) AS (
143
144 SELECT 1, st.id, ARRAY[st.id]
145 FROM bookwyrm_status st
146 WHERE reply_parent_id = '%s' AND id = ANY(%s)
147
148 UNION
149
150 SELECT (gt.depth + 1), st.id, path || st.id
151 FROM get_thread gt, bookwyrm_status st
152
153 WHERE st.reply_parent_id = gt.id AND depth < 5 AND st.id = ANY(%s)
154
155 )
156
157 SELECT * FROM get_thread ORDER BY path;
158 """,
159 params=[status.id, visible_thread, visible_thread],
160 )
161
162 data = {
163 **feed_page_data(request.user),
164 **{
165 "status": status,
166 "children": children,
167 "ancestors": ancestors,
168 },
169 }
170 return TemplateResponse(request, "feed/status.html", data)
171
172
173 class Replies(View):
174 """replies page (a json view of status)"""
175
176 def get(self, request, username, status_id):
177 """ordered collection of replies to a status"""
178 # the html view is the same as Status
179 if not is_api_request(request):
180 status_view = Status.as_view()
181 return status_view(request, username, status_id)
182
183 # the json view is different than Status
184 status = models.Status.objects.get(id=status_id)
185 if status.user.localname != username:
186 return HttpResponseNotFound()
187 status.raise_visible_to_user(request.user)
188
189 return ActivitypubResponse(status.to_replies(**request.GET))
190
191
192 def feed_page_data(user):
193 """info we need for every feed page"""
194 if not user.is_authenticated:
195 return {}
196
197 goal = models.AnnualGoal.objects.filter(user=user, year=timezone.now().year).first()
198 return {
199 "suggested_books": get_suggested_books(user),
200 "goal": goal,
201 "goal_form": forms.GoalForm(),
202 }
203
204
205 def get_suggested_books(user, max_books=5):
206 """helper to get a user's recent books"""
207 book_count = 0
208 preset_shelves = [("reading", max_books), ("read", 2), ("to-read", max_books)]
209 suggested_books = []
210 for (preset, shelf_max) in preset_shelves:
211 limit = (
212 shelf_max
213 if shelf_max < (max_books - book_count)
214 else max_books - book_count
215 )
216 shelf = user.shelf_set.get(identifier=preset)
217 if not shelf.books.exists():
218 continue
219
220 shelf_preview = {
221 "name": shelf.name,
222 "identifier": shelf.identifier,
223 "books": models.Edition.viewer_aware_objects(user)
224 .filter(
225 shelfbook__shelf=shelf,
226 )
227 .prefetch_related("authors")[:limit],
228 }
229 suggested_books.append(shelf_preview)
230 book_count += len(shelf_preview["books"])
231 return suggested_books
232
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bookwyrm/views/feed.py b/bookwyrm/views/feed.py
--- a/bookwyrm/views/feed.py
+++ b/bookwyrm/views/feed.py
@@ -224,6 +224,7 @@
.filter(
shelfbook__shelf=shelf,
)
+ .order_by("-shelfbook__shelved_date")
.prefetch_related("authors")[:limit],
}
suggested_books.append(shelf_preview)
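The accepted fix is the single `.order_by("-shelfbook__shelved_date")` call: each shelf preview is sorted so the most recently shelved editions come first, which is why the newest read shows up again on the Your Books panel. A small, framework-free sketch of the same newest-first ordering follows (the data class and sample rows are made up for illustration; the real code relies on the Django ORM):

```python
# Plain-Python illustration of ordering a shelf preview by shelved date,
# newest first, mirroring order_by("-shelfbook__shelved_date") in the patch.
from dataclasses import dataclass
from datetime import date
from operator import attrgetter
from typing import List


@dataclass
class ShelvedBook:
    title: str
    shelved_date: date


def shelf_preview(books: List[ShelvedBook], limit: int = 5) -> List[str]:
    """Return up to `limit` titles, most recently shelved first."""
    newest_first = sorted(books, key=attrgetter("shelved_date"), reverse=True)
    return [book.title for book in newest_first[:limit]]


if __name__ == "__main__":
    sample = [
        ShelvedBook("Read months ago", date(2021, 3, 1)),
        ShelvedBook("Finished yesterday", date(2021, 10, 30)),
    ]
    print(shelf_preview(sample))  # ['Finished yesterday', 'Read months ago']
```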
| {"golden_diff": "diff --git a/bookwyrm/views/feed.py b/bookwyrm/views/feed.py\n--- a/bookwyrm/views/feed.py\n+++ b/bookwyrm/views/feed.py\n@@ -224,6 +224,7 @@\n .filter(\n shelfbook__shelf=shelf,\n )\n+ .order_by(\"-shelfbook__shelved_date\")\n .prefetch_related(\"authors\")[:limit],\n }\n suggested_books.append(shelf_preview)\n", "issue": "\"Your Books\" doesn't display my most recent title\n**Describe the bug**\r\nFor the last couple of months, your books is not showing anything I've read recently \r\n\r\n**To Reproduce**\r\n\r\n\r\n**Expected behavior**\r\nExpect to see the mostly recently read book on your books\r\n\r\n**Screenshots**\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n**Instance**\r\nBookwrym.social\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\r\n---\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: [e.g. iOS]\r\n - Browser [e.g. chrome, safari]\r\n - Version [e.g. 22]\r\n\r\n**Smartphone (please complete the following information):**\r\n - Device: [e.g. iPhone6]\r\n - OS: [e.g. iOS8.1]\r\n - Browser [e.g. stock browser, safari]\r\n - Version [e.g. 22]\r\n\n\"Your Books\" doesn't display my most recent title\n**Describe the bug**\r\nFor the last couple of months, your books is not showing anything I've read recently \r\n\r\n**To Reproduce**\r\n\r\n\r\n**Expected behavior**\r\nExpect to see the mostly recently read book on your books\r\n\r\n**Screenshots**\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n**Instance**\r\nBookwrym.social\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\r\n---\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: [e.g. iOS]\r\n - Browser [e.g. chrome, safari]\r\n - Version [e.g. 22]\r\n\r\n**Smartphone (please complete the following information):**\r\n - Device: [e.g. iPhone6]\r\n - OS: [e.g. iOS8.1]\r\n - Browser [e.g. stock browser, safari]\r\n - Version [e.g. 
22]\r\n\n", "before_files": [{"content": "\"\"\" non-interactive pages \"\"\"\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.paginator import Paginator\nfrom django.db.models import Q\nfrom django.http import HttpResponseNotFound, Http404\nfrom django.shortcuts import get_object_or_404\nfrom django.template.response import TemplateResponse\nfrom django.utils import timezone\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\n\nfrom bookwyrm import activitystreams, forms, models\nfrom bookwyrm.activitypub import ActivitypubResponse\nfrom bookwyrm.settings import PAGE_LENGTH, STREAMS\nfrom bookwyrm.suggested_users import suggested_users\nfrom .helpers import get_user_from_username\nfrom .helpers import is_api_request, is_bookwyrm_request\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass Feed(View):\n \"\"\"activity stream\"\"\"\n\n def get(self, request, tab):\n \"\"\"user's homepage with activity feed\"\"\"\n tab = [s for s in STREAMS if s[\"key\"] == tab]\n tab = tab[0] if tab else STREAMS[0]\n\n activities = activitystreams.streams[tab[\"key\"]].get_activity_stream(\n request.user\n )\n paginated = Paginator(activities, PAGE_LENGTH)\n\n suggestions = suggested_users.get_suggestions(request.user)\n\n data = {\n **feed_page_data(request.user),\n **{\n \"user\": request.user,\n \"activities\": paginated.get_page(request.GET.get(\"page\")),\n \"suggested_users\": suggestions,\n \"tab\": tab,\n \"streams\": STREAMS,\n \"goal_form\": forms.GoalForm(),\n \"path\": f\"/{tab['key']}\",\n },\n }\n return TemplateResponse(request, \"feed/feed.html\", data)\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass DirectMessage(View):\n \"\"\"dm view\"\"\"\n\n def get(self, request, username=None):\n \"\"\"like a feed but for dms only\"\"\"\n # remove fancy subclasses of status, keep just good ol' notes\n activities = (\n models.Status.privacy_filter(request.user, privacy_levels=[\"direct\"])\n .filter(\n review__isnull=True,\n comment__isnull=True,\n quotation__isnull=True,\n generatednote__isnull=True,\n )\n .order_by(\"-published_date\")\n )\n\n user = None\n if username:\n try:\n user = get_user_from_username(request.user, username)\n except Http404:\n pass\n if user:\n activities = activities.filter(Q(user=user) | Q(mention_users=user))\n\n paginated = Paginator(activities, PAGE_LENGTH)\n data = {\n **feed_page_data(request.user),\n **{\n \"user\": request.user,\n \"partner\": user,\n \"activities\": paginated.get_page(request.GET.get(\"page\")),\n \"path\": \"/direct-messages\",\n },\n }\n return TemplateResponse(request, \"feed/direct_messages.html\", data)\n\n\nclass Status(View):\n \"\"\"get posting\"\"\"\n\n def get(self, request, username, status_id):\n \"\"\"display a particular status (and replies, etc)\"\"\"\n user = get_user_from_username(request.user, username)\n status = get_object_or_404(\n models.Status.objects.select_subclasses(),\n user=user,\n id=status_id,\n deleted=False,\n )\n # make sure the user is authorized to see the status\n status.raise_visible_to_user(request.user)\n\n if is_api_request(request):\n return ActivitypubResponse(\n status.to_activity(pure=not is_bookwyrm_request(request))\n )\n\n visible_thread = (\n models.Status.privacy_filter(request.user)\n .filter(thread_id=status.thread_id)\n .values_list(\"id\", flat=True)\n )\n visible_thread = list(visible_thread)\n\n ancestors = models.Status.objects.select_subclasses().raw(\n \"\"\"\n WITH 
RECURSIVE get_thread(depth, id, path) AS (\n\n SELECT 1, st.id, ARRAY[st.id]\n FROM bookwyrm_status st\n WHERE id = '%s' AND id = ANY(%s)\n\n UNION\n\n SELECT (gt.depth + 1), st.reply_parent_id, path || st.id\n FROM get_thread gt, bookwyrm_status st\n\n WHERE st.id = gt.id AND depth < 5 AND st.id = ANY(%s)\n\n )\n\n SELECT * FROM get_thread ORDER BY path DESC;\n \"\"\",\n params=[status.reply_parent_id or 0, visible_thread, visible_thread],\n )\n children = models.Status.objects.select_subclasses().raw(\n \"\"\"\n WITH RECURSIVE get_thread(depth, id, path) AS (\n\n SELECT 1, st.id, ARRAY[st.id]\n FROM bookwyrm_status st\n WHERE reply_parent_id = '%s' AND id = ANY(%s)\n\n UNION\n\n SELECT (gt.depth + 1), st.id, path || st.id\n FROM get_thread gt, bookwyrm_status st\n\n WHERE st.reply_parent_id = gt.id AND depth < 5 AND st.id = ANY(%s)\n\n )\n\n SELECT * FROM get_thread ORDER BY path;\n \"\"\",\n params=[status.id, visible_thread, visible_thread],\n )\n\n data = {\n **feed_page_data(request.user),\n **{\n \"status\": status,\n \"children\": children,\n \"ancestors\": ancestors,\n },\n }\n return TemplateResponse(request, \"feed/status.html\", data)\n\n\nclass Replies(View):\n \"\"\"replies page (a json view of status)\"\"\"\n\n def get(self, request, username, status_id):\n \"\"\"ordered collection of replies to a status\"\"\"\n # the html view is the same as Status\n if not is_api_request(request):\n status_view = Status.as_view()\n return status_view(request, username, status_id)\n\n # the json view is different than Status\n status = models.Status.objects.get(id=status_id)\n if status.user.localname != username:\n return HttpResponseNotFound()\n status.raise_visible_to_user(request.user)\n\n return ActivitypubResponse(status.to_replies(**request.GET))\n\n\ndef feed_page_data(user):\n \"\"\"info we need for every feed page\"\"\"\n if not user.is_authenticated:\n return {}\n\n goal = models.AnnualGoal.objects.filter(user=user, year=timezone.now().year).first()\n return {\n \"suggested_books\": get_suggested_books(user),\n \"goal\": goal,\n \"goal_form\": forms.GoalForm(),\n }\n\n\ndef get_suggested_books(user, max_books=5):\n \"\"\"helper to get a user's recent books\"\"\"\n book_count = 0\n preset_shelves = [(\"reading\", max_books), (\"read\", 2), (\"to-read\", max_books)]\n suggested_books = []\n for (preset, shelf_max) in preset_shelves:\n limit = (\n shelf_max\n if shelf_max < (max_books - book_count)\n else max_books - book_count\n )\n shelf = user.shelf_set.get(identifier=preset)\n if not shelf.books.exists():\n continue\n\n shelf_preview = {\n \"name\": shelf.name,\n \"identifier\": shelf.identifier,\n \"books\": models.Edition.viewer_aware_objects(user)\n .filter(\n shelfbook__shelf=shelf,\n )\n .prefetch_related(\"authors\")[:limit],\n }\n suggested_books.append(shelf_preview)\n book_count += len(shelf_preview[\"books\"])\n return suggested_books\n", "path": "bookwyrm/views/feed.py"}], "after_files": [{"content": "\"\"\" non-interactive pages \"\"\"\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.paginator import Paginator\nfrom django.db.models import Q\nfrom django.http import HttpResponseNotFound, Http404\nfrom django.shortcuts import get_object_or_404\nfrom django.template.response import TemplateResponse\nfrom django.utils import timezone\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\n\nfrom bookwyrm import activitystreams, forms, models\nfrom bookwyrm.activitypub import ActivitypubResponse\nfrom 
bookwyrm.settings import PAGE_LENGTH, STREAMS\nfrom bookwyrm.suggested_users import suggested_users\nfrom .helpers import get_user_from_username\nfrom .helpers import is_api_request, is_bookwyrm_request\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass Feed(View):\n \"\"\"activity stream\"\"\"\n\n def get(self, request, tab):\n \"\"\"user's homepage with activity feed\"\"\"\n tab = [s for s in STREAMS if s[\"key\"] == tab]\n tab = tab[0] if tab else STREAMS[0]\n\n activities = activitystreams.streams[tab[\"key\"]].get_activity_stream(\n request.user\n )\n paginated = Paginator(activities, PAGE_LENGTH)\n\n suggestions = suggested_users.get_suggestions(request.user)\n\n data = {\n **feed_page_data(request.user),\n **{\n \"user\": request.user,\n \"activities\": paginated.get_page(request.GET.get(\"page\")),\n \"suggested_users\": suggestions,\n \"tab\": tab,\n \"streams\": STREAMS,\n \"goal_form\": forms.GoalForm(),\n \"path\": f\"/{tab['key']}\",\n },\n }\n return TemplateResponse(request, \"feed/feed.html\", data)\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass DirectMessage(View):\n \"\"\"dm view\"\"\"\n\n def get(self, request, username=None):\n \"\"\"like a feed but for dms only\"\"\"\n # remove fancy subclasses of status, keep just good ol' notes\n activities = (\n models.Status.privacy_filter(request.user, privacy_levels=[\"direct\"])\n .filter(\n review__isnull=True,\n comment__isnull=True,\n quotation__isnull=True,\n generatednote__isnull=True,\n )\n .order_by(\"-published_date\")\n )\n\n user = None\n if username:\n try:\n user = get_user_from_username(request.user, username)\n except Http404:\n pass\n if user:\n activities = activities.filter(Q(user=user) | Q(mention_users=user))\n\n paginated = Paginator(activities, PAGE_LENGTH)\n data = {\n **feed_page_data(request.user),\n **{\n \"user\": request.user,\n \"partner\": user,\n \"activities\": paginated.get_page(request.GET.get(\"page\")),\n \"path\": \"/direct-messages\",\n },\n }\n return TemplateResponse(request, \"feed/direct_messages.html\", data)\n\n\nclass Status(View):\n \"\"\"get posting\"\"\"\n\n def get(self, request, username, status_id):\n \"\"\"display a particular status (and replies, etc)\"\"\"\n user = get_user_from_username(request.user, username)\n status = get_object_or_404(\n models.Status.objects.select_subclasses(),\n user=user,\n id=status_id,\n deleted=False,\n )\n # make sure the user is authorized to see the status\n status.raise_visible_to_user(request.user)\n\n if is_api_request(request):\n return ActivitypubResponse(\n status.to_activity(pure=not is_bookwyrm_request(request))\n )\n\n visible_thread = (\n models.Status.privacy_filter(request.user)\n .filter(thread_id=status.thread_id)\n .values_list(\"id\", flat=True)\n )\n visible_thread = list(visible_thread)\n\n ancestors = models.Status.objects.select_subclasses().raw(\n \"\"\"\n WITH RECURSIVE get_thread(depth, id, path) AS (\n\n SELECT 1, st.id, ARRAY[st.id]\n FROM bookwyrm_status st\n WHERE id = '%s' AND id = ANY(%s)\n\n UNION\n\n SELECT (gt.depth + 1), st.reply_parent_id, path || st.id\n FROM get_thread gt, bookwyrm_status st\n\n WHERE st.id = gt.id AND depth < 5 AND st.id = ANY(%s)\n\n )\n\n SELECT * FROM get_thread ORDER BY path DESC;\n \"\"\",\n params=[status.reply_parent_id or 0, visible_thread, visible_thread],\n )\n children = models.Status.objects.select_subclasses().raw(\n \"\"\"\n WITH RECURSIVE get_thread(depth, id, path) AS (\n\n SELECT 1, st.id, ARRAY[st.id]\n FROM 
bookwyrm_status st\n WHERE reply_parent_id = '%s' AND id = ANY(%s)\n\n UNION\n\n SELECT (gt.depth + 1), st.id, path || st.id\n FROM get_thread gt, bookwyrm_status st\n\n WHERE st.reply_parent_id = gt.id AND depth < 5 AND st.id = ANY(%s)\n\n )\n\n SELECT * FROM get_thread ORDER BY path;\n \"\"\",\n params=[status.id, visible_thread, visible_thread],\n )\n\n data = {\n **feed_page_data(request.user),\n **{\n \"status\": status,\n \"children\": children,\n \"ancestors\": ancestors,\n },\n }\n return TemplateResponse(request, \"feed/status.html\", data)\n\n\nclass Replies(View):\n \"\"\"replies page (a json view of status)\"\"\"\n\n def get(self, request, username, status_id):\n \"\"\"ordered collection of replies to a status\"\"\"\n # the html view is the same as Status\n if not is_api_request(request):\n status_view = Status.as_view()\n return status_view(request, username, status_id)\n\n # the json view is different than Status\n status = models.Status.objects.get(id=status_id)\n if status.user.localname != username:\n return HttpResponseNotFound()\n status.raise_visible_to_user(request.user)\n\n return ActivitypubResponse(status.to_replies(**request.GET))\n\n\ndef feed_page_data(user):\n \"\"\"info we need for every feed page\"\"\"\n if not user.is_authenticated:\n return {}\n\n goal = models.AnnualGoal.objects.filter(user=user, year=timezone.now().year).first()\n return {\n \"suggested_books\": get_suggested_books(user),\n \"goal\": goal,\n \"goal_form\": forms.GoalForm(),\n }\n\n\ndef get_suggested_books(user, max_books=5):\n \"\"\"helper to get a user's recent books\"\"\"\n book_count = 0\n preset_shelves = [(\"reading\", max_books), (\"read\", 2), (\"to-read\", max_books)]\n suggested_books = []\n for (preset, shelf_max) in preset_shelves:\n limit = (\n shelf_max\n if shelf_max < (max_books - book_count)\n else max_books - book_count\n )\n shelf = user.shelf_set.get(identifier=preset)\n if not shelf.books.exists():\n continue\n\n shelf_preview = {\n \"name\": shelf.name,\n \"identifier\": shelf.identifier,\n \"books\": models.Edition.viewer_aware_objects(user)\n .filter(\n shelfbook__shelf=shelf,\n )\n .order_by(\"-shelfbook__shelved_date\")\n .prefetch_related(\"authors\")[:limit],\n }\n suggested_books.append(shelf_preview)\n book_count += len(shelf_preview[\"books\"])\n return suggested_books\n", "path": "bookwyrm/views/feed.py"}]} | 2,983 | 99 |
gh_patches_debug_25990 | rasdani/github-patches | git_diff | pwndbg__pwndbg-1826 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
heap_config should give an example format for GLIBC version
The UX for the glibc version setting is not great.
1. The `heap_config` should give user an example format of GLIBC version we do expect. Currently it only shows this:
```
pwndbg> heap_config
Name Value (Def) Documentation
-----------------------------------------------------------
glibc '' GLIBC version for heuristics
...
```
2. When one sets the `glibc` parameter, we should have a setter that will validate it against the expected version format. Currently it does not:
```
pwndbg> set glibc as
Set GLIBC version for heuristics to 'as'.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/glibc.py`
Content:
```
1 """
2 Get information about the GLibc
3 """
4
5 import functools
6 import os
7 import re
8 from typing import Optional
9 from typing import Tuple
10
11 import gdb
12 from elftools.elf.relocation import Relocation
13
14 import pwndbg.gdblib.config
15 import pwndbg.gdblib.elf
16 import pwndbg.gdblib.file
17 import pwndbg.gdblib.info
18 import pwndbg.gdblib.memory
19 import pwndbg.gdblib.proc
20 import pwndbg.gdblib.symbol
21 import pwndbg.heap
22 import pwndbg.lib.cache
23 import pwndbg.search
24
25 safe_lnk = pwndbg.gdblib.config.add_param(
26 "safe-linking",
27 None,
28 "whether glibc use safe-linking (on/off/auto)",
29 param_class=gdb.PARAM_AUTO_BOOLEAN,
30 )
31
32 glibc_version = pwndbg.gdblib.config.add_param(
33 "glibc", "", "GLIBC version for heuristics", scope="heap"
34 )
35
36
37 @pwndbg.gdblib.proc.OnlyWhenRunning
38 def get_version() -> Optional[Tuple[int, ...]]:
39 if glibc_version.value:
40 ret = re.search(r"(\d+)\.(\d+)", glibc_version.value)
41 if ret:
42 return tuple(int(_) for _ in ret.groups())
43 else:
44 raise ValueError(
45 f"Invalid GLIBC version: `{glibc_version.value}`, you should provide something like: 2.31 or 2.34"
46 )
47 return _get_version()
48
49
50 @pwndbg.gdblib.proc.OnlyWhenRunning
51 @pwndbg.lib.cache.cache_until("start", "objfile")
52 def _get_version() -> Optional[Tuple[int, ...]]:
53 if pwndbg.heap.current.libc_has_debug_syms():
54 addr = pwndbg.gdblib.symbol.address("__libc_version")
55 if addr is not None:
56 ver = pwndbg.gdblib.memory.string(addr)
57 return tuple(int(_) for _ in ver.split(b"."))
58 libc_filename = get_libc_filename_from_info_sharedlibrary()
59 if not libc_filename:
60 return None
61 result = pwndbg.gdblib.elf.dump_section_by_name(libc_filename, ".rodata", try_local_path=True)
62 if not result:
63 return None
64 _, _, data = result
65 banner_start = data.find(b"GNU C Library")
66 if banner_start == -1:
67 return None
68 banner = data[banner_start : data.find(b"\x00", banner_start)]
69 ret = re.search(rb"release version (\d+)\.(\d+)", banner)
70 return tuple(int(_) for _ in ret.groups()) if ret else None
71
72
73 @pwndbg.gdblib.proc.OnlyWhenRunning
74 @pwndbg.lib.cache.cache_until("start", "objfile")
75 def get_libc_filename_from_info_sharedlibrary() -> Optional[str]:
76 """
77 Get the filename of the libc by parsing the output of `info sharedlibrary`.
78 """
79 possible_libc_path = []
80 for path in pwndbg.gdblib.info.sharedlibrary_paths():
81 basename = os.path.basename(
82 path[7:] if path.startswith("target:") else path
83 ) # "target:" prefix is for remote debugging
84 if basename == "libc.so.6":
85 # The default filename of libc should be libc.so.6, so if we found it, we just return it directly.
86 return path
87 elif re.search(r"^libc6?[-_\.]", basename):
88 # Maybe user loaded the libc with LD_PRELOAD.
89 # Some common libc names: libc-2.36.so, libc6_2.36-0ubuntu4_amd64.so, libc.so
90 possible_libc_path.append(
91 path
92 ) # We don't return it, maybe there is a libc.so.6 and this match is just a false positive.
93 # TODO: This might fail if user use LD_PRELOAD to load libc with a weird name or there are multiple shared libraries match the pattern.
94 # (But do we really need to support this case? Maybe we can wait until users really need it :P.)
95 if possible_libc_path:
96 return possible_libc_path[0] # just return the first match for now :)
97 return None
98
99
100 @pwndbg.gdblib.proc.OnlyWhenRunning
101 @pwndbg.lib.cache.cache_until("start", "objfile")
102 def dump_elf_data_section() -> Optional[Tuple[int, int, bytes]]:
103 """
104 Dump .data section of libc ELF file
105 """
106 libc_filename = get_libc_filename_from_info_sharedlibrary()
107 if not libc_filename:
108 # libc not loaded yet, or it's static linked
109 return None
110 return pwndbg.gdblib.elf.dump_section_by_name(libc_filename, ".data", try_local_path=True)
111
112
113 @pwndbg.gdblib.proc.OnlyWhenRunning
114 @pwndbg.lib.cache.cache_until("start", "objfile")
115 def dump_relocations_by_section_name(section_name: str) -> Optional[Tuple[Relocation, ...]]:
116 """
117 Dump relocations of a section by section name of libc ELF file
118 """
119 libc_filename = get_libc_filename_from_info_sharedlibrary()
120 if not libc_filename:
121 # libc not loaded yet, or it's static linked
122 return None
123 return pwndbg.gdblib.elf.dump_relocations_by_section_name(
124 libc_filename, section_name, try_local_path=True
125 )
126
127
128 @pwndbg.gdblib.proc.OnlyWhenRunning
129 @pwndbg.lib.cache.cache_until("start", "objfile")
130 def get_data_section_address() -> int:
131 """
132 Find .data section address of libc
133 """
134 libc_filename = get_libc_filename_from_info_sharedlibrary()
135 if not libc_filename:
136 # libc not loaded yet, or it's static linked
137 return 0
138 # TODO: If we are debugging a remote process, this might not work if GDB cannot load the so file
139 out = pwndbg.gdblib.info.files()
140 for line in out.splitlines():
141 if line.endswith(" is .data in " + libc_filename):
142 return int(line.split()[0], 16)
143 return 0
144
145
146 @pwndbg.gdblib.proc.OnlyWhenRunning
147 @pwndbg.lib.cache.cache_until("start", "objfile")
148 def get_got_section_address() -> int:
149 """
150 Find .got section address of libc
151 """
152 libc_filename = get_libc_filename_from_info_sharedlibrary()
153 if not libc_filename:
154 # libc not loaded yet, or it's static linked
155 return 0
156 # TODO: If we are debugging a remote process, this might not work if GDB cannot load the so file
157 out = pwndbg.gdblib.info.files()
158 for line in out.splitlines():
159 if line.endswith(" is .got in " + libc_filename):
160 return int(line.split()[0], 16)
161 return 0
162
163
164 def OnlyWhenGlibcLoaded(function):
165 @functools.wraps(function)
166 def _OnlyWhenGlibcLoaded(*a, **kw):
167 if get_version() is not None:
168 return function(*a, **kw)
169 else:
170 print(f"{function.__name__}: GLibc not loaded yet.")
171
172 return _OnlyWhenGlibcLoaded
173
174
175 @OnlyWhenGlibcLoaded
176 def check_safe_linking():
177 """
178 Safe-linking is a glibc 2.32 mitigation; see:
179 - https://lanph3re.blogspot.com/2020/08/blog-post.html
180 - https://research.checkpoint.com/2020/safe-linking-eliminating-a-20-year-old-malloc-exploit-primitive/
181 """
182 return (get_version() >= (2, 32) or safe_lnk) and safe_lnk is not False
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pwndbg/glibc.py b/pwndbg/glibc.py
--- a/pwndbg/glibc.py
+++ b/pwndbg/glibc.py
@@ -21,6 +21,7 @@
import pwndbg.heap
import pwndbg.lib.cache
import pwndbg.search
+from pwndbg.color import message
safe_lnk = pwndbg.gdblib.config.add_param(
"safe-linking",
@@ -30,21 +31,28 @@
)
glibc_version = pwndbg.gdblib.config.add_param(
- "glibc", "", "GLIBC version for heuristics", scope="heap"
+ "glibc", "", "GLIBC version for heap heuristics resolution (e.g. 2.31)", scope="heap"
)
[email protected](glibc_version)
+def set_glibc_version() -> None:
+ ret = re.search(r"(\d+)\.(\d+)", glibc_version.value)
+ if ret:
+ glibc_version.value = tuple(map(int, ret.groups()))
+ return
+
+ print(
+ message.warn(
+ f"Invalid GLIBC version: `{glibc_version.value}`, you should provide something like: 2.31 or 2.34"
+ )
+ )
+ glibc_version.revert_default()
+
+
@pwndbg.gdblib.proc.OnlyWhenRunning
def get_version() -> Optional[Tuple[int, ...]]:
- if glibc_version.value:
- ret = re.search(r"(\d+)\.(\d+)", glibc_version.value)
- if ret:
- return tuple(int(_) for _ in ret.groups())
- else:
- raise ValueError(
- f"Invalid GLIBC version: `{glibc_version.value}`, you should provide something like: 2.31 or 2.34"
- )
- return _get_version()
+ return glibc_version or _get_version()
@pwndbg.gdblib.proc.OnlyWhenRunning
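The patch moves validation from read time to set time: a trigger on the `glibc` parameter parses the value as soon as the user changes it, stores it as an integer tuple when it matches a `major.minor` pattern, and otherwise prints a warning and reverts to the default. The class below is a small GDB-free sketch of that validate-on-set behaviour; its name and API are invented for illustration and are not part of pwndbg.

```python
# Minimal sketch of a parameter that validates "major.minor" input on set,
# echoing the trigger-based check the patch adds (names here are hypothetical).
import re
from typing import Optional, Tuple


class GlibcVersionParam:
    """Holds a GLIBC version as (major, minor); bad input reverts to the default."""

    def __init__(self) -> None:
        self.value: Optional[Tuple[int, ...]] = None  # default: unset

    def set(self, text: str) -> None:
        match = re.search(r"(\d+)\.(\d+)", text)
        if match:
            self.value = tuple(int(part) for part in match.groups())
            return
        print(f"Invalid GLIBC version: `{text}`, expected something like 2.31 or 2.34")
        self.value = None  # revert instead of keeping an unusable string


if __name__ == "__main__":
    param = GlibcVersionParam()
    param.set("2.31")
    print(param.value)  # (2, 31)
    param.set("as")     # prints the warning and resets the value
    print(param.value)  # None
```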
| {"golden_diff": "diff --git a/pwndbg/glibc.py b/pwndbg/glibc.py\n--- a/pwndbg/glibc.py\n+++ b/pwndbg/glibc.py\n@@ -21,6 +21,7 @@\n import pwndbg.heap\n import pwndbg.lib.cache\n import pwndbg.search\n+from pwndbg.color import message\n \n safe_lnk = pwndbg.gdblib.config.add_param(\n \"safe-linking\",\n@@ -30,21 +31,28 @@\n )\n \n glibc_version = pwndbg.gdblib.config.add_param(\n- \"glibc\", \"\", \"GLIBC version for heuristics\", scope=\"heap\"\n+ \"glibc\", \"\", \"GLIBC version for heap heuristics resolution (e.g. 2.31)\", scope=\"heap\"\n )\n \n \[email protected](glibc_version)\n+def set_glibc_version() -> None:\n+ ret = re.search(r\"(\\d+)\\.(\\d+)\", glibc_version.value)\n+ if ret:\n+ glibc_version.value = tuple(map(int, ret.groups()))\n+ return\n+\n+ print(\n+ message.warn(\n+ f\"Invalid GLIBC version: `{glibc_version.value}`, you should provide something like: 2.31 or 2.34\"\n+ )\n+ )\n+ glibc_version.revert_default()\n+\n+\n @pwndbg.gdblib.proc.OnlyWhenRunning\n def get_version() -> Optional[Tuple[int, ...]]:\n- if glibc_version.value:\n- ret = re.search(r\"(\\d+)\\.(\\d+)\", glibc_version.value)\n- if ret:\n- return tuple(int(_) for _ in ret.groups())\n- else:\n- raise ValueError(\n- f\"Invalid GLIBC version: `{glibc_version.value}`, you should provide something like: 2.31 or 2.34\"\n- )\n- return _get_version()\n+ return glibc_version or _get_version()\n \n \n @pwndbg.gdblib.proc.OnlyWhenRunning\n", "issue": "heap_config should give an example format for GLIBC version\nThe UX for the glibc version setting is not great.\r\n\r\n1. The `heap_config` should give user an example format of GLIBC version we do expect. Currently it only shows this:\r\n```\r\npwndbg> heap_config\r\nName Value (Def) Documentation\r\n-----------------------------------------------------------\r\nglibc '' GLIBC version for heuristics\r\n...\r\n```\r\n\r\n2. When one sets the `glibc` parameter, we should have a setter that will validate it against the expected version format. 
Currently it does not:\r\n```\r\npwndbg> set glibc as\r\nSet GLIBC version for heuristics to 'as'.\r\n```\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nGet information about the GLibc\n\"\"\"\n\nimport functools\nimport os\nimport re\nfrom typing import Optional\nfrom typing import Tuple\n\nimport gdb\nfrom elftools.elf.relocation import Relocation\n\nimport pwndbg.gdblib.config\nimport pwndbg.gdblib.elf\nimport pwndbg.gdblib.file\nimport pwndbg.gdblib.info\nimport pwndbg.gdblib.memory\nimport pwndbg.gdblib.proc\nimport pwndbg.gdblib.symbol\nimport pwndbg.heap\nimport pwndbg.lib.cache\nimport pwndbg.search\n\nsafe_lnk = pwndbg.gdblib.config.add_param(\n \"safe-linking\",\n None,\n \"whether glibc use safe-linking (on/off/auto)\",\n param_class=gdb.PARAM_AUTO_BOOLEAN,\n)\n\nglibc_version = pwndbg.gdblib.config.add_param(\n \"glibc\", \"\", \"GLIBC version for heuristics\", scope=\"heap\"\n)\n\n\[email protected]\ndef get_version() -> Optional[Tuple[int, ...]]:\n if glibc_version.value:\n ret = re.search(r\"(\\d+)\\.(\\d+)\", glibc_version.value)\n if ret:\n return tuple(int(_) for _ in ret.groups())\n else:\n raise ValueError(\n f\"Invalid GLIBC version: `{glibc_version.value}`, you should provide something like: 2.31 or 2.34\"\n )\n return _get_version()\n\n\[email protected]\[email protected]_until(\"start\", \"objfile\")\ndef _get_version() -> Optional[Tuple[int, ...]]:\n if pwndbg.heap.current.libc_has_debug_syms():\n addr = pwndbg.gdblib.symbol.address(\"__libc_version\")\n if addr is not None:\n ver = pwndbg.gdblib.memory.string(addr)\n return tuple(int(_) for _ in ver.split(b\".\"))\n libc_filename = get_libc_filename_from_info_sharedlibrary()\n if not libc_filename:\n return None\n result = pwndbg.gdblib.elf.dump_section_by_name(libc_filename, \".rodata\", try_local_path=True)\n if not result:\n return None\n _, _, data = result\n banner_start = data.find(b\"GNU C Library\")\n if banner_start == -1:\n return None\n banner = data[banner_start : data.find(b\"\\x00\", banner_start)]\n ret = re.search(rb\"release version (\\d+)\\.(\\d+)\", banner)\n return tuple(int(_) for _ in ret.groups()) if ret else None\n\n\[email protected]\[email protected]_until(\"start\", \"objfile\")\ndef get_libc_filename_from_info_sharedlibrary() -> Optional[str]:\n \"\"\"\n Get the filename of the libc by parsing the output of `info sharedlibrary`.\n \"\"\"\n possible_libc_path = []\n for path in pwndbg.gdblib.info.sharedlibrary_paths():\n basename = os.path.basename(\n path[7:] if path.startswith(\"target:\") else path\n ) # \"target:\" prefix is for remote debugging\n if basename == \"libc.so.6\":\n # The default filename of libc should be libc.so.6, so if we found it, we just return it directly.\n return path\n elif re.search(r\"^libc6?[-_\\.]\", basename):\n # Maybe user loaded the libc with LD_PRELOAD.\n # Some common libc names: libc-2.36.so, libc6_2.36-0ubuntu4_amd64.so, libc.so\n possible_libc_path.append(\n path\n ) # We don't return it, maybe there is a libc.so.6 and this match is just a false positive.\n # TODO: This might fail if user use LD_PRELOAD to load libc with a weird name or there are multiple shared libraries match the pattern.\n # (But do we really need to support this case? 
Maybe we can wait until users really need it :P.)\n if possible_libc_path:\n return possible_libc_path[0] # just return the first match for now :)\n return None\n\n\[email protected]\[email protected]_until(\"start\", \"objfile\")\ndef dump_elf_data_section() -> Optional[Tuple[int, int, bytes]]:\n \"\"\"\n Dump .data section of libc ELF file\n \"\"\"\n libc_filename = get_libc_filename_from_info_sharedlibrary()\n if not libc_filename:\n # libc not loaded yet, or it's static linked\n return None\n return pwndbg.gdblib.elf.dump_section_by_name(libc_filename, \".data\", try_local_path=True)\n\n\[email protected]\[email protected]_until(\"start\", \"objfile\")\ndef dump_relocations_by_section_name(section_name: str) -> Optional[Tuple[Relocation, ...]]:\n \"\"\"\n Dump relocations of a section by section name of libc ELF file\n \"\"\"\n libc_filename = get_libc_filename_from_info_sharedlibrary()\n if not libc_filename:\n # libc not loaded yet, or it's static linked\n return None\n return pwndbg.gdblib.elf.dump_relocations_by_section_name(\n libc_filename, section_name, try_local_path=True\n )\n\n\[email protected]\[email protected]_until(\"start\", \"objfile\")\ndef get_data_section_address() -> int:\n \"\"\"\n Find .data section address of libc\n \"\"\"\n libc_filename = get_libc_filename_from_info_sharedlibrary()\n if not libc_filename:\n # libc not loaded yet, or it's static linked\n return 0\n # TODO: If we are debugging a remote process, this might not work if GDB cannot load the so file\n out = pwndbg.gdblib.info.files()\n for line in out.splitlines():\n if line.endswith(\" is .data in \" + libc_filename):\n return int(line.split()[0], 16)\n return 0\n\n\[email protected]\[email protected]_until(\"start\", \"objfile\")\ndef get_got_section_address() -> int:\n \"\"\"\n Find .got section address of libc\n \"\"\"\n libc_filename = get_libc_filename_from_info_sharedlibrary()\n if not libc_filename:\n # libc not loaded yet, or it's static linked\n return 0\n # TODO: If we are debugging a remote process, this might not work if GDB cannot load the so file\n out = pwndbg.gdblib.info.files()\n for line in out.splitlines():\n if line.endswith(\" is .got in \" + libc_filename):\n return int(line.split()[0], 16)\n return 0\n\n\ndef OnlyWhenGlibcLoaded(function):\n @functools.wraps(function)\n def _OnlyWhenGlibcLoaded(*a, **kw):\n if get_version() is not None:\n return function(*a, **kw)\n else:\n print(f\"{function.__name__}: GLibc not loaded yet.\")\n\n return _OnlyWhenGlibcLoaded\n\n\n@OnlyWhenGlibcLoaded\ndef check_safe_linking():\n \"\"\"\n Safe-linking is a glibc 2.32 mitigation; see:\n - https://lanph3re.blogspot.com/2020/08/blog-post.html\n - https://research.checkpoint.com/2020/safe-linking-eliminating-a-20-year-old-malloc-exploit-primitive/\n \"\"\"\n return (get_version() >= (2, 32) or safe_lnk) and safe_lnk is not False\n", "path": "pwndbg/glibc.py"}], "after_files": [{"content": "\"\"\"\nGet information about the GLibc\n\"\"\"\n\nimport functools\nimport os\nimport re\nfrom typing import Optional\nfrom typing import Tuple\n\nimport gdb\nfrom elftools.elf.relocation import Relocation\n\nimport pwndbg.gdblib.config\nimport pwndbg.gdblib.elf\nimport pwndbg.gdblib.file\nimport pwndbg.gdblib.info\nimport pwndbg.gdblib.memory\nimport pwndbg.gdblib.proc\nimport pwndbg.gdblib.symbol\nimport pwndbg.heap\nimport pwndbg.lib.cache\nimport pwndbg.search\nfrom pwndbg.color import message\n\nsafe_lnk = pwndbg.gdblib.config.add_param(\n \"safe-linking\",\n None,\n \"whether glibc use safe-linking 
(on/off/auto)\",\n param_class=gdb.PARAM_AUTO_BOOLEAN,\n)\n\nglibc_version = pwndbg.gdblib.config.add_param(\n \"glibc\", \"\", \"GLIBC version for heap heuristics resolution (e.g. 2.31)\", scope=\"heap\"\n)\n\n\[email protected](glibc_version)\ndef set_glibc_version() -> None:\n ret = re.search(r\"(\\d+)\\.(\\d+)\", glibc_version.value)\n if ret:\n glibc_version.value = tuple(map(int, ret.groups()))\n return\n\n print(\n message.warn(\n f\"Invalid GLIBC version: `{glibc_version.value}`, you should provide something like: 2.31 or 2.34\"\n )\n )\n glibc_version.revert_default()\n\n\[email protected]\ndef get_version() -> Optional[Tuple[int, ...]]:\n return glibc_version or _get_version()\n\n\[email protected]\[email protected]_until(\"start\", \"objfile\")\ndef _get_version() -> Optional[Tuple[int, ...]]:\n if pwndbg.heap.current.libc_has_debug_syms():\n addr = pwndbg.gdblib.symbol.address(\"__libc_version\")\n if addr is not None:\n ver = pwndbg.gdblib.memory.string(addr)\n return tuple(int(_) for _ in ver.split(b\".\"))\n libc_filename = get_libc_filename_from_info_sharedlibrary()\n if not libc_filename:\n return None\n result = pwndbg.gdblib.elf.dump_section_by_name(libc_filename, \".rodata\", try_local_path=True)\n if not result:\n return None\n _, _, data = result\n banner_start = data.find(b\"GNU C Library\")\n if banner_start == -1:\n return None\n banner = data[banner_start : data.find(b\"\\x00\", banner_start)]\n ret = re.search(rb\"release version (\\d+)\\.(\\d+)\", banner)\n return tuple(int(_) for _ in ret.groups()) if ret else None\n\n\[email protected]\[email protected]_until(\"start\", \"objfile\")\ndef get_libc_filename_from_info_sharedlibrary() -> Optional[str]:\n \"\"\"\n Get the filename of the libc by parsing the output of `info sharedlibrary`.\n \"\"\"\n possible_libc_path = []\n for path in pwndbg.gdblib.info.sharedlibrary_paths():\n basename = os.path.basename(\n path[7:] if path.startswith(\"target:\") else path\n ) # \"target:\" prefix is for remote debugging\n if basename == \"libc.so.6\":\n # The default filename of libc should be libc.so.6, so if we found it, we just return it directly.\n return path\n elif re.search(r\"^libc6?[-_\\.]\", basename):\n # Maybe user loaded the libc with LD_PRELOAD.\n # Some common libc names: libc-2.36.so, libc6_2.36-0ubuntu4_amd64.so, libc.so\n possible_libc_path.append(\n path\n ) # We don't return it, maybe there is a libc.so.6 and this match is just a false positive.\n # TODO: This might fail if user use LD_PRELOAD to load libc with a weird name or there are multiple shared libraries match the pattern.\n # (But do we really need to support this case? 
Maybe we can wait until users really need it :P.)\n if possible_libc_path:\n return possible_libc_path[0] # just return the first match for now :)\n return None\n\n\[email protected]\[email protected]_until(\"start\", \"objfile\")\ndef dump_elf_data_section() -> Optional[Tuple[int, int, bytes]]:\n \"\"\"\n Dump .data section of libc ELF file\n \"\"\"\n libc_filename = get_libc_filename_from_info_sharedlibrary()\n if not libc_filename:\n # libc not loaded yet, or it's static linked\n return None\n return pwndbg.gdblib.elf.dump_section_by_name(libc_filename, \".data\", try_local_path=True)\n\n\[email protected]\[email protected]_until(\"start\", \"objfile\")\ndef dump_relocations_by_section_name(section_name: str) -> Optional[Tuple[Relocation, ...]]:\n \"\"\"\n Dump relocations of a section by section name of libc ELF file\n \"\"\"\n libc_filename = get_libc_filename_from_info_sharedlibrary()\n if not libc_filename:\n # libc not loaded yet, or it's static linked\n return None\n return pwndbg.gdblib.elf.dump_relocations_by_section_name(\n libc_filename, section_name, try_local_path=True\n )\n\n\[email protected]\[email protected]_until(\"start\", \"objfile\")\ndef get_data_section_address() -> int:\n \"\"\"\n Find .data section address of libc\n \"\"\"\n libc_filename = get_libc_filename_from_info_sharedlibrary()\n if not libc_filename:\n # libc not loaded yet, or it's static linked\n return 0\n # TODO: If we are debugging a remote process, this might not work if GDB cannot load the so file\n out = pwndbg.gdblib.info.files()\n for line in out.splitlines():\n if line.endswith(\" is .data in \" + libc_filename):\n return int(line.split()[0], 16)\n return 0\n\n\[email protected]\[email protected]_until(\"start\", \"objfile\")\ndef get_got_section_address() -> int:\n \"\"\"\n Find .got section address of libc\n \"\"\"\n libc_filename = get_libc_filename_from_info_sharedlibrary()\n if not libc_filename:\n # libc not loaded yet, or it's static linked\n return 0\n # TODO: If we are debugging a remote process, this might not work if GDB cannot load the so file\n out = pwndbg.gdblib.info.files()\n for line in out.splitlines():\n if line.endswith(\" is .got in \" + libc_filename):\n return int(line.split()[0], 16)\n return 0\n\n\ndef OnlyWhenGlibcLoaded(function):\n @functools.wraps(function)\n def _OnlyWhenGlibcLoaded(*a, **kw):\n if get_version() is not None:\n return function(*a, **kw)\n else:\n print(f\"{function.__name__}: GLibc not loaded yet.\")\n\n return _OnlyWhenGlibcLoaded\n\n\n@OnlyWhenGlibcLoaded\ndef check_safe_linking():\n \"\"\"\n Safe-linking is a glibc 2.32 mitigation; see:\n - https://lanph3re.blogspot.com/2020/08/blog-post.html\n - https://research.checkpoint.com/2020/safe-linking-eliminating-a-20-year-old-malloc-exploit-primitive/\n \"\"\"\n return (get_version() >= (2, 32) or safe_lnk) and safe_lnk is not False\n", "path": "pwndbg/glibc.py"}]} | 2,561 | 455 |
gh_patches_debug_35119 | rasdani/github-patches | git_diff | Azure__azure-cli-extensions-74 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
az webapp new throws an exception when it can't detect a project type.
### Extension name (the extension in question)
webapp
### Description of issue (in as much detail as possible)
This should fail gracefully:
```
bash-4.3# cd /
bash-4.3# ls
appp bin etc lib media proc run srv tmp var
azure-cli dev home linuxrc mnt root sbin sys usr
bash-4.3# az webapp new -n foo
list index out of range
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/knack/cli.py", line 194, in invoke
cmd_result = self.invocation.execute(args)
File "/usr/local/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 331, in execute
six.reraise(*sys.exc_info())
File "/usr/local/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File "/usr/local/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 304, in execute
result = cmd(params)
File "/usr/local/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 168, in __call__
return super(AzCliCommand, self).__call__(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/knack/commands.py", line 109, in __call__
return self.handler(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/azure/cli/core/__init__.py", line 348, in default_command_handler
result = op(**command_args)
File "/root/.azure/cliextensions/webapp/azext_webapp/custom.py", line 50, in create_deploy_webapp
lang_details = get_lang_from_content(src_dir)
File "/root/.azure/cliextensions/webapp/azext_webapp/create_util.py", line 113, in get_lang_from_content
package_netcore_file = os.path.join(src_path, glob.glob("*.csproj")[0])
IndexError: list index out of range
bash-4.3#
```
Steps to reproduce: run `az webapp new -n foo` in a *non-empty* directory that does not contain a `package.json` or a `*.csproj` file.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/webapp/azext_webapp/create_util.py`
Content:
```
1 # --------------------------------------------------------------------------------------------
2 # Copyright (c) Microsoft Corporation. All rights reserved.
3 # Licensed under the MIT License. See License.txt in the project root for license information.
4 # --------------------------------------------------------------------------------------------
5
6 import os
7 import zipfile
8 from azure.cli.core.commands.client_factory import get_mgmt_service_client
9 from azure.mgmt.resource.resources.models import ResourceGroup
10 from ._constants import (
11 NETCORE_VERSION_DEFAULT,
12 NETCORE_VERSIONS,
13 NODE_VERSION_DEFAULT,
14 NODE_VERSIONS,
15 NETCORE_RUNTIME_NAME,
16 NODE_RUNTIME_NAME)
17
18
19 def _resource_client_factory(cli_ctx, **_):
20 from azure.cli.core.profiles import ResourceType
21 return get_mgmt_service_client(cli_ctx, ResourceType.MGMT_RESOURCE_RESOURCES)
22
23
24 def web_client_factory(cli_ctx, **_):
25 from azure.mgmt.web import WebSiteManagementClient
26 return get_mgmt_service_client(cli_ctx, WebSiteManagementClient)
27
28
29 def zip_contents_from_dir(dirPath, lang):
30 relroot = os.path.abspath(os.path.join(dirPath, os.pardir))
31 path_and_file = os.path.splitdrive(dirPath)[1]
32 file_val = os.path.split(path_and_file)[1]
33 zip_file_path = relroot + "\\" + file_val + ".zip"
34 abs_src = os.path.abspath(dirPath)
35 with zipfile.ZipFile("{}".format(zip_file_path), "w", zipfile.ZIP_DEFLATED) as zf:
36 for dirname, subdirs, files in os.walk(dirPath):
37 # skip node_modules folder for Node apps,
38 # since zip_deployment will perfom the build operation
39 if lang.lower() == NODE_RUNTIME_NAME and 'node_modules' in subdirs:
40 subdirs.remove('node_modules')
41 elif lang.lower() == NETCORE_RUNTIME_NAME:
42 if 'bin' in subdirs:
43 subdirs.remove('bin')
44 elif 'obj' in subdirs:
45 subdirs.remove('obj')
46 for filename in files:
47 absname = os.path.abspath(os.path.join(dirname, filename))
48 arcname = absname[len(abs_src) + 1:]
49 zf.write(absname, arcname)
50 return zip_file_path
51
52
53 def get_runtime_version_details(file_path, lang_name):
54 version_detected = None
55 version_to_create = None
56 if lang_name.lower() == NETCORE_RUNTIME_NAME:
57 # method returns list in DESC, pick the first
58 version_detected = parse_netcore_version(file_path)[0]
59 version_to_create = detect_netcore_version_tocreate(version_detected)
60 elif lang_name.lower() == NODE_RUNTIME_NAME:
61 version_detected = parse_node_version(file_path)[0]
62 version_to_create = detect_node_version_tocreate(version_detected)
63 return {'detected': version_detected, 'to_create': version_to_create}
64
65
66 def create_resource_group(cmd, rg_name, location):
67 rcf = _resource_client_factory(cmd.cli_ctx)
68 rg_params = ResourceGroup(location=location)
69 return rcf.resource_groups.create_or_update(rg_name, rg_params)
70
71
72 def check_resource_group_exists(cmd, rg_name):
73 rcf = _resource_client_factory(cmd.cli_ctx)
74 return rcf.resource_groups.check_existence(rg_name)
75
76
77 def check_resource_group_supports_os(cmd, rg_name, location, is_linux):
78 # get all appservice plans from RG
79 client = web_client_factory(cmd.cli_ctx)
80 plans = list(client.app_service_plans.list_by_resource_group(rg_name))
81 for item in plans:
82 # for Linux if an app with reserved==False exists, ASP doesn't support Linux
83 if is_linux and item.location == location and not item.reserved:
84 return False
85 elif not is_linux and item.location == location and item.reserved:
86 return False
87 return True
88
89
90 def check_if_asp_exists(cmd, rg_name, asp_name):
91 # get all appservice plans from RG
92 client = web_client_factory(cmd.cli_ctx)
93 for item in list(client.app_service_plans.list_by_resource_group(rg_name)):
94 if item.name == asp_name:
95 return True
96 return False
97
98
99 def check_app_exists(cmd, rg_name, app_name):
100 client = web_client_factory(cmd.cli_ctx)
101 for item in list(client.web_apps.list_by_resource_group(rg_name)):
102 if item.name == app_name:
103 return True
104 return False
105
106
107 def get_lang_from_content(src_path):
108 import glob
109 # NODE: package.json should exisit in the application root dir
110 # NETCORE: NETCORE.csproj should exist in the root dir
111 runtime_details_dict = dict.fromkeys(['language', 'file_loc', 'default_sku'])
112 package_json_file = os.path.join(src_path, 'package.json')
113 package_netcore_file = os.path.join(src_path, glob.glob("*.csproj")[0])
114 if os.path.isfile(package_json_file):
115 runtime_details_dict['language'] = NODE_RUNTIME_NAME
116 runtime_details_dict['file_loc'] = package_json_file
117 runtime_details_dict['default_sku'] = 'S1'
118 elif os.path.isfile(package_netcore_file):
119 runtime_details_dict['language'] = NETCORE_RUNTIME_NAME
120 runtime_details_dict['file_loc'] = package_netcore_file
121 runtime_details_dict['default_sku'] = 'F1'
122 return runtime_details_dict
123
124
125 def parse_netcore_version(file_path):
126 import xml.etree.ElementTree as ET
127 import re
128 version_detected = ['0.0']
129 parsed_file = ET.parse(file_path)
130 root = parsed_file.getroot()
131 for target_ver in root.iter('TargetFramework'):
132 version_detected = re.findall(r"\d+\.\d+", target_ver.text)
133 # incase of multiple versions detected, return list in descending order
134 version_detected = sorted(version_detected, key=float, reverse=True)
135 return version_detected
136
137
138 def parse_node_version(file_path):
139 import json
140 import re
141 version_detected = ['0.0']
142 with open(file_path) as data_file:
143 data = []
144 for d in find_key_in_json(json.load(data_file), 'node'):
145 non_decimal = re.compile(r'[^\d.]+')
146 # remove the string ~ or > that sometimes exists in version value
147 c = non_decimal.sub('', d)
148 # reduce the version to '6.0' from '6.0.0'
149 data.append(c[:3])
150 version_detected = sorted(data, key=float, reverse=True)
151 return version_detected
152
153
154 def detect_netcore_version_tocreate(detected_ver):
155 if detected_ver in NETCORE_VERSIONS:
156 return detected_ver
157 return NETCORE_VERSION_DEFAULT
158
159
160 def detect_node_version_tocreate(detected_ver):
161 if detected_ver in NODE_VERSIONS:
162 return detected_ver
163 # get major version & get the closest version from supported list
164 major_ver = float(detected_ver.split('.')[0])
165 if major_ver < 4:
166 return NODE_VERSION_DEFAULT
167 elif major_ver >= 4 and major_ver < 6:
168 return '4.5'
169 elif major_ver >= 6 and major_ver < 8:
170 return '6.9'
171 return NODE_VERSION_DEFAULT
172
173
174 def find_key_in_json(json_data, key):
175 for k, v in json_data.items():
176 if key in k:
177 yield v
178 elif isinstance(v, dict):
179 for id_val in find_key_in_json(v, key):
180 yield id_val
181
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/webapp/azext_webapp/create_util.py b/src/webapp/azext_webapp/create_util.py
--- a/src/webapp/azext_webapp/create_util.py
+++ b/src/webapp/azext_webapp/create_util.py
@@ -106,16 +106,17 @@
def get_lang_from_content(src_path):
import glob
- # NODE: package.json should exisit in the application root dir
- # NETCORE: NETCORE.csproj should exist in the root dir
+ # NODE: package.json should exist in the application root dir
+ # NETCORE: *.csproj should exist in the application root dir
runtime_details_dict = dict.fromkeys(['language', 'file_loc', 'default_sku'])
package_json_file = os.path.join(src_path, 'package.json')
- package_netcore_file = os.path.join(src_path, glob.glob("*.csproj")[0])
+ package_netcore_glob = glob.glob("*.csproj")
if os.path.isfile(package_json_file):
runtime_details_dict['language'] = NODE_RUNTIME_NAME
runtime_details_dict['file_loc'] = package_json_file
runtime_details_dict['default_sku'] = 'S1'
- elif os.path.isfile(package_netcore_file):
+ elif package_netcore_glob:
+ package_netcore_file = os.path.join(src_path, package_netcore_glob[0])
runtime_details_dict['language'] = NETCORE_RUNTIME_NAME
runtime_details_dict['file_loc'] = package_netcore_file
runtime_details_dict['default_sku'] = 'F1'
@@ -138,7 +139,6 @@
def parse_node_version(file_path):
import json
import re
- version_detected = ['0.0']
with open(file_path) as data_file:
data = []
for d in find_key_in_json(json.load(data_file), 'node'):
@@ -148,7 +148,7 @@
# reduce the version to '6.0' from '6.0.0'
data.append(c[:3])
version_detected = sorted(data, key=float, reverse=True)
- return version_detected
+ return version_detected or ['0.0']
def detect_netcore_version_tocreate(detected_ver):
| {"golden_diff": "diff --git a/src/webapp/azext_webapp/create_util.py b/src/webapp/azext_webapp/create_util.py\n--- a/src/webapp/azext_webapp/create_util.py\n+++ b/src/webapp/azext_webapp/create_util.py\n@@ -106,16 +106,17 @@\n \n def get_lang_from_content(src_path):\n import glob\n- # NODE: package.json should exisit in the application root dir\n- # NETCORE: NETCORE.csproj should exist in the root dir\n+ # NODE: package.json should exist in the application root dir\n+ # NETCORE: *.csproj should exist in the application root dir\n runtime_details_dict = dict.fromkeys(['language', 'file_loc', 'default_sku'])\n package_json_file = os.path.join(src_path, 'package.json')\n- package_netcore_file = os.path.join(src_path, glob.glob(\"*.csproj\")[0])\n+ package_netcore_glob = glob.glob(\"*.csproj\")\n if os.path.isfile(package_json_file):\n runtime_details_dict['language'] = NODE_RUNTIME_NAME\n runtime_details_dict['file_loc'] = package_json_file\n runtime_details_dict['default_sku'] = 'S1'\n- elif os.path.isfile(package_netcore_file):\n+ elif package_netcore_glob:\n+ package_netcore_file = os.path.join(src_path, package_netcore_glob[0])\n runtime_details_dict['language'] = NETCORE_RUNTIME_NAME\n runtime_details_dict['file_loc'] = package_netcore_file\n runtime_details_dict['default_sku'] = 'F1'\n@@ -138,7 +139,6 @@\n def parse_node_version(file_path):\n import json\n import re\n- version_detected = ['0.0']\n with open(file_path) as data_file:\n data = []\n for d in find_key_in_json(json.load(data_file), 'node'):\n@@ -148,7 +148,7 @@\n # reduce the version to '6.0' from '6.0.0'\n data.append(c[:3])\n version_detected = sorted(data, key=float, reverse=True)\n- return version_detected\n+ return version_detected or ['0.0']\n \n \n def detect_netcore_version_tocreate(detected_ver):\n", "issue": "az webapp new throws an exception when it can't detect a project type.\n### Extension name (the extension in question)\r\nwebapp\r\n\r\n### Description of issue (in as much detail as possible)\r\nThis should fail gracefully:\r\n\r\n```\r\nbash-4.3# cd /\r\nbash-4.3# ls\r\nappp bin etc lib media proc run srv tmp var\r\nazure-cli dev home linuxrc mnt root sbin sys usr\r\nbash-4.3# az webapp new -n foo\r\nlist index out of range\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/site-packages/knack/cli.py\", line 194, in invoke\r\n cmd_result = self.invocation.execute(args)\r\n File \"/usr/local/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py\", line 331, in execute\r\n six.reraise(*sys.exc_info())\r\n File \"/usr/local/lib/python3.6/site-packages/six.py\", line 693, in reraise\r\n raise value\r\n File \"/usr/local/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py\", line 304, in execute\r\n result = cmd(params)\r\n File \"/usr/local/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py\", line 168, in __call__\r\n return super(AzCliCommand, self).__call__(*args, **kwargs)\r\n File \"/usr/local/lib/python3.6/site-packages/knack/commands.py\", line 109, in __call__\r\n return self.handler(*args, **kwargs)\r\n File \"/usr/local/lib/python3.6/site-packages/azure/cli/core/__init__.py\", line 348, in default_command_handler\r\n result = op(**command_args)\r\n File \"/root/.azure/cliextensions/webapp/azext_webapp/custom.py\", line 50, in create_deploy_webapp\r\n lang_details = get_lang_from_content(src_dir)\r\n File \"/root/.azure/cliextensions/webapp/azext_webapp/create_util.py\", line 113, in get_lang_from_content\r\n package_netcore_file = 
os.path.join(src_path, glob.glob(\"*.csproj\")[0])\r\nIndexError: list index out of range\r\nbash-4.3# \r\n```\r\n\r\nSteps to reproduce: run `az webapp new -n foo` in a *non-empty* directory that does not contain a `package.json` or a `*.csproj` file.\r\n\n", "before_files": [{"content": "# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nimport os\nimport zipfile\nfrom azure.cli.core.commands.client_factory import get_mgmt_service_client\nfrom azure.mgmt.resource.resources.models import ResourceGroup\nfrom ._constants import (\n NETCORE_VERSION_DEFAULT,\n NETCORE_VERSIONS,\n NODE_VERSION_DEFAULT,\n NODE_VERSIONS,\n NETCORE_RUNTIME_NAME,\n NODE_RUNTIME_NAME)\n\n\ndef _resource_client_factory(cli_ctx, **_):\n from azure.cli.core.profiles import ResourceType\n return get_mgmt_service_client(cli_ctx, ResourceType.MGMT_RESOURCE_RESOURCES)\n\n\ndef web_client_factory(cli_ctx, **_):\n from azure.mgmt.web import WebSiteManagementClient\n return get_mgmt_service_client(cli_ctx, WebSiteManagementClient)\n\n\ndef zip_contents_from_dir(dirPath, lang):\n relroot = os.path.abspath(os.path.join(dirPath, os.pardir))\n path_and_file = os.path.splitdrive(dirPath)[1]\n file_val = os.path.split(path_and_file)[1]\n zip_file_path = relroot + \"\\\\\" + file_val + \".zip\"\n abs_src = os.path.abspath(dirPath)\n with zipfile.ZipFile(\"{}\".format(zip_file_path), \"w\", zipfile.ZIP_DEFLATED) as zf:\n for dirname, subdirs, files in os.walk(dirPath):\n # skip node_modules folder for Node apps,\n # since zip_deployment will perfom the build operation\n if lang.lower() == NODE_RUNTIME_NAME and 'node_modules' in subdirs:\n subdirs.remove('node_modules')\n elif lang.lower() == NETCORE_RUNTIME_NAME:\n if 'bin' in subdirs:\n subdirs.remove('bin')\n elif 'obj' in subdirs:\n subdirs.remove('obj')\n for filename in files:\n absname = os.path.abspath(os.path.join(dirname, filename))\n arcname = absname[len(abs_src) + 1:]\n zf.write(absname, arcname)\n return zip_file_path\n\n\ndef get_runtime_version_details(file_path, lang_name):\n version_detected = None\n version_to_create = None\n if lang_name.lower() == NETCORE_RUNTIME_NAME:\n # method returns list in DESC, pick the first\n version_detected = parse_netcore_version(file_path)[0]\n version_to_create = detect_netcore_version_tocreate(version_detected)\n elif lang_name.lower() == NODE_RUNTIME_NAME:\n version_detected = parse_node_version(file_path)[0]\n version_to_create = detect_node_version_tocreate(version_detected)\n return {'detected': version_detected, 'to_create': version_to_create}\n\n\ndef create_resource_group(cmd, rg_name, location):\n rcf = _resource_client_factory(cmd.cli_ctx)\n rg_params = ResourceGroup(location=location)\n return rcf.resource_groups.create_or_update(rg_name, rg_params)\n\n\ndef check_resource_group_exists(cmd, rg_name):\n rcf = _resource_client_factory(cmd.cli_ctx)\n return rcf.resource_groups.check_existence(rg_name)\n\n\ndef check_resource_group_supports_os(cmd, rg_name, location, is_linux):\n # get all appservice plans from RG\n client = web_client_factory(cmd.cli_ctx)\n plans = list(client.app_service_plans.list_by_resource_group(rg_name))\n for item in plans:\n # for Linux if an app with reserved==False exists, ASP doesn't support Linux\n if 
is_linux and item.location == location and not item.reserved:\n return False\n elif not is_linux and item.location == location and item.reserved:\n return False\n return True\n\n\ndef check_if_asp_exists(cmd, rg_name, asp_name):\n # get all appservice plans from RG\n client = web_client_factory(cmd.cli_ctx)\n for item in list(client.app_service_plans.list_by_resource_group(rg_name)):\n if item.name == asp_name:\n return True\n return False\n\n\ndef check_app_exists(cmd, rg_name, app_name):\n client = web_client_factory(cmd.cli_ctx)\n for item in list(client.web_apps.list_by_resource_group(rg_name)):\n if item.name == app_name:\n return True\n return False\n\n\ndef get_lang_from_content(src_path):\n import glob\n # NODE: package.json should exisit in the application root dir\n # NETCORE: NETCORE.csproj should exist in the root dir\n runtime_details_dict = dict.fromkeys(['language', 'file_loc', 'default_sku'])\n package_json_file = os.path.join(src_path, 'package.json')\n package_netcore_file = os.path.join(src_path, glob.glob(\"*.csproj\")[0])\n if os.path.isfile(package_json_file):\n runtime_details_dict['language'] = NODE_RUNTIME_NAME\n runtime_details_dict['file_loc'] = package_json_file\n runtime_details_dict['default_sku'] = 'S1'\n elif os.path.isfile(package_netcore_file):\n runtime_details_dict['language'] = NETCORE_RUNTIME_NAME\n runtime_details_dict['file_loc'] = package_netcore_file\n runtime_details_dict['default_sku'] = 'F1'\n return runtime_details_dict\n\n\ndef parse_netcore_version(file_path):\n import xml.etree.ElementTree as ET\n import re\n version_detected = ['0.0']\n parsed_file = ET.parse(file_path)\n root = parsed_file.getroot()\n for target_ver in root.iter('TargetFramework'):\n version_detected = re.findall(r\"\\d+\\.\\d+\", target_ver.text)\n # incase of multiple versions detected, return list in descending order\n version_detected = sorted(version_detected, key=float, reverse=True)\n return version_detected\n\n\ndef parse_node_version(file_path):\n import json\n import re\n version_detected = ['0.0']\n with open(file_path) as data_file:\n data = []\n for d in find_key_in_json(json.load(data_file), 'node'):\n non_decimal = re.compile(r'[^\\d.]+')\n # remove the string ~ or > that sometimes exists in version value\n c = non_decimal.sub('', d)\n # reduce the version to '6.0' from '6.0.0'\n data.append(c[:3])\n version_detected = sorted(data, key=float, reverse=True)\n return version_detected\n\n\ndef detect_netcore_version_tocreate(detected_ver):\n if detected_ver in NETCORE_VERSIONS:\n return detected_ver\n return NETCORE_VERSION_DEFAULT\n\n\ndef detect_node_version_tocreate(detected_ver):\n if detected_ver in NODE_VERSIONS:\n return detected_ver\n # get major version & get the closest version from supported list\n major_ver = float(detected_ver.split('.')[0])\n if major_ver < 4:\n return NODE_VERSION_DEFAULT\n elif major_ver >= 4 and major_ver < 6:\n return '4.5'\n elif major_ver >= 6 and major_ver < 8:\n return '6.9'\n return NODE_VERSION_DEFAULT\n\n\ndef find_key_in_json(json_data, key):\n for k, v in json_data.items():\n if key in k:\n yield v\n elif isinstance(v, dict):\n for id_val in find_key_in_json(v, key):\n yield id_val\n", "path": "src/webapp/azext_webapp/create_util.py"}], "after_files": [{"content": "# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nimport os\nimport zipfile\nfrom azure.cli.core.commands.client_factory import get_mgmt_service_client\nfrom azure.mgmt.resource.resources.models import ResourceGroup\nfrom ._constants import (\n NETCORE_VERSION_DEFAULT,\n NETCORE_VERSIONS,\n NODE_VERSION_DEFAULT,\n NODE_VERSIONS,\n NETCORE_RUNTIME_NAME,\n NODE_RUNTIME_NAME)\n\n\ndef _resource_client_factory(cli_ctx, **_):\n from azure.cli.core.profiles import ResourceType\n return get_mgmt_service_client(cli_ctx, ResourceType.MGMT_RESOURCE_RESOURCES)\n\n\ndef web_client_factory(cli_ctx, **_):\n from azure.mgmt.web import WebSiteManagementClient\n return get_mgmt_service_client(cli_ctx, WebSiteManagementClient)\n\n\ndef zip_contents_from_dir(dirPath, lang):\n relroot = os.path.abspath(os.path.join(dirPath, os.pardir))\n path_and_file = os.path.splitdrive(dirPath)[1]\n file_val = os.path.split(path_and_file)[1]\n zip_file_path = relroot + \"\\\\\" + file_val + \".zip\"\n abs_src = os.path.abspath(dirPath)\n with zipfile.ZipFile(\"{}\".format(zip_file_path), \"w\", zipfile.ZIP_DEFLATED) as zf:\n for dirname, subdirs, files in os.walk(dirPath):\n # skip node_modules folder for Node apps,\n # since zip_deployment will perfom the build operation\n if lang.lower() == NODE_RUNTIME_NAME and 'node_modules' in subdirs:\n subdirs.remove('node_modules')\n elif lang.lower() == NETCORE_RUNTIME_NAME:\n if 'bin' in subdirs:\n subdirs.remove('bin')\n elif 'obj' in subdirs:\n subdirs.remove('obj')\n for filename in files:\n absname = os.path.abspath(os.path.join(dirname, filename))\n arcname = absname[len(abs_src) + 1:]\n zf.write(absname, arcname)\n return zip_file_path\n\n\ndef get_runtime_version_details(file_path, lang_name):\n version_detected = None\n version_to_create = None\n if lang_name.lower() == NETCORE_RUNTIME_NAME:\n # method returns list in DESC, pick the first\n version_detected = parse_netcore_version(file_path)[0]\n version_to_create = detect_netcore_version_tocreate(version_detected)\n elif lang_name.lower() == NODE_RUNTIME_NAME:\n version_detected = parse_node_version(file_path)[0]\n version_to_create = detect_node_version_tocreate(version_detected)\n return {'detected': version_detected, 'to_create': version_to_create}\n\n\ndef create_resource_group(cmd, rg_name, location):\n rcf = _resource_client_factory(cmd.cli_ctx)\n rg_params = ResourceGroup(location=location)\n return rcf.resource_groups.create_or_update(rg_name, rg_params)\n\n\ndef check_resource_group_exists(cmd, rg_name):\n rcf = _resource_client_factory(cmd.cli_ctx)\n return rcf.resource_groups.check_existence(rg_name)\n\n\ndef check_resource_group_supports_os(cmd, rg_name, location, is_linux):\n # get all appservice plans from RG\n client = web_client_factory(cmd.cli_ctx)\n plans = list(client.app_service_plans.list_by_resource_group(rg_name))\n for item in plans:\n # for Linux if an app with reserved==False exists, ASP doesn't support Linux\n if is_linux and item.location == location and not item.reserved:\n return False\n elif not is_linux and item.location == location and item.reserved:\n return False\n return True\n\n\ndef check_if_asp_exists(cmd, rg_name, asp_name):\n # get all appservice plans from RG\n client = web_client_factory(cmd.cli_ctx)\n for item in list(client.app_service_plans.list_by_resource_group(rg_name)):\n if item.name == asp_name:\n return True\n return False\n\n\ndef check_app_exists(cmd, 
rg_name, app_name):\n client = web_client_factory(cmd.cli_ctx)\n for item in list(client.web_apps.list_by_resource_group(rg_name)):\n if item.name == app_name:\n return True\n return False\n\n\ndef get_lang_from_content(src_path):\n import glob\n # NODE: package.json should exist in the application root dir\n # NETCORE: *.csproj should exist in the application root dir\n runtime_details_dict = dict.fromkeys(['language', 'file_loc', 'default_sku'])\n package_json_file = os.path.join(src_path, 'package.json')\n package_netcore_glob = glob.glob(\"*.csproj\")\n if os.path.isfile(package_json_file):\n runtime_details_dict['language'] = NODE_RUNTIME_NAME\n runtime_details_dict['file_loc'] = package_json_file\n runtime_details_dict['default_sku'] = 'S1'\n elif package_netcore_glob:\n package_netcore_file = os.path.join(src_path, package_netcore_glob[0])\n runtime_details_dict['language'] = NETCORE_RUNTIME_NAME\n runtime_details_dict['file_loc'] = package_netcore_file\n runtime_details_dict['default_sku'] = 'F1'\n return runtime_details_dict\n\n\ndef parse_netcore_version(file_path):\n import xml.etree.ElementTree as ET\n import re\n version_detected = ['0.0']\n parsed_file = ET.parse(file_path)\n root = parsed_file.getroot()\n for target_ver in root.iter('TargetFramework'):\n version_detected = re.findall(r\"\\d+\\.\\d+\", target_ver.text)\n # incase of multiple versions detected, return list in descending order\n version_detected = sorted(version_detected, key=float, reverse=True)\n return version_detected\n\n\ndef parse_node_version(file_path):\n import json\n import re\n with open(file_path) as data_file:\n data = []\n for d in find_key_in_json(json.load(data_file), 'node'):\n non_decimal = re.compile(r'[^\\d.]+')\n # remove the string ~ or > that sometimes exists in version value\n c = non_decimal.sub('', d)\n # reduce the version to '6.0' from '6.0.0'\n data.append(c[:3])\n version_detected = sorted(data, key=float, reverse=True)\n return version_detected or ['0.0']\n\n\ndef detect_netcore_version_tocreate(detected_ver):\n if detected_ver in NETCORE_VERSIONS:\n return detected_ver\n return NETCORE_VERSION_DEFAULT\n\n\ndef detect_node_version_tocreate(detected_ver):\n if detected_ver in NODE_VERSIONS:\n return detected_ver\n # get major version & get the closest version from supported list\n major_ver = float(detected_ver.split('.')[0])\n if major_ver < 4:\n return NODE_VERSION_DEFAULT\n elif major_ver >= 4 and major_ver < 6:\n return '4.5'\n elif major_ver >= 6 and major_ver < 8:\n return '6.9'\n return NODE_VERSION_DEFAULT\n\n\ndef find_key_in_json(json_data, key):\n for k, v in json_data.items():\n if key in k:\n yield v\n elif isinstance(v, dict):\n for id_val in find_key_in_json(v, key):\n yield id_val\n", "path": "src/webapp/azext_webapp/create_util.py"}]} | 2,842 | 499 |
gh_patches_debug_41802 | rasdani/github-patches | git_diff | ESMCI__cime-1034 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
check_input_data not supporting input files outside of input data repo
@erichlf has asked for support for the following use case:
* Create a partition file in his home directory
* Point namelist files to use this partition file
* check_input_data should find this file, but it does not, because it insists on looking only inside the input data repository
--- END ISSUE ---
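To make the reported failure concrete: `check_input_data` derives a repo-relative path with `full_path.replace(input_data_root, "")`, so a file that lives outside `DIN_LOC_ROOT` comes back unchanged and is then handed to the SVN download logic as if it were repo-relative. The sketch below is an editorial illustration with invented paths, not code from the repository.

```python
# Invented paths, used only to illustrate the use case from the issue.
input_data_root = "/projects/ccsm/inputdata"             # DIN_LOC_ROOT
external_file = "/home/erichlf/grids/partition_0096.nc"  # user's partition file

# Mirrors the rel_path computation inside check_input_data():
rel_path = external_file.replace(input_data_root, "")

# replace() stripped nothing, so rel_path is still the absolute external path;
# the existing code cannot turn it into a repo-relative path to download.
print(rel_path == external_file)  # True
```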
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `utils/python/CIME/check_input_data.py`
Content:
```
1 """
2 API for checking input for testcase
3 """
4
5 from CIME.XML.standard_module_setup import *
6 from CIME.utils import get_model, SharedArea
7
8 import fnmatch, glob, shutil
9
10 logger = logging.getLogger(__name__)
11
12 # Should probably be in XML somewhere
13 SVN_LOCS = {
14 "acme" : "https://acme-svn2.ornl.gov/acme-repo/acme/inputdata",
15 "cesm" : "https://svn-ccsm-inputdata.cgd.ucar.edu/trunk/inputdata"
16 }
17
18 def find_files(rootdir, pattern):
19 """
20 recursively find all files matching a pattern
21 """
22 result = []
23 for root, _, files in os.walk(rootdir):
24 for filename in files:
25 if (fnmatch.fnmatch(filename, pattern)):
26 result.append(os.path.join(root, filename))
27
28 return result
29
30 def download_if_in_repo(svn_loc, input_data_root, rel_path):
31 """
32 Return True if successfully downloaded
33 """
34 rel_path = rel_path.strip('/')
35 full_url = os.path.join(svn_loc, rel_path)
36
37 full_path = os.path.join(input_data_root, rel_path)
38 logging.info("Trying to download file: '%s' to path '%s'" % (full_url, full_path))
39 # Make sure local path exists, create if it does not
40 if(not os.path.exists(os.path.dirname(full_path))):
41 os.makedirs(os.path.dirname(full_path))
42
43 stat, out, err = run_cmd("svn --non-interactive --trust-server-cert ls %s" % full_url)
44 if (stat != 0):
45 logging.warning("FAIL: SVN repo '%s' does not have file '%s'\nReason:%s\n%s\n" % (svn_loc, full_url, out, err))
46 return False
47 else:
48 # Use umask to make sure files are group read/writable. As long as parent directories
49 # have +s, then everything should work.
50 with SharedArea():
51 stat, output, errput = \
52 run_cmd("svn --non-interactive --trust-server-cert export %s %s" % (full_url, full_path))
53 if (stat != 0):
54 logging.warning("svn export failed with output: %s and errput %s\n" % (output, errput))
55 return False
56 else:
57 logging.info("SUCCESS\n")
58 return True
59
60 ###############################################################################
61 def check_all_input_data(case):
62 ###############################################################################
63
64 success = check_input_data(case=case, download=True)
65 expect(success, "Failed to download input data")
66
67 get_refcase = case.get_value("GET_REFCASE")
68 run_type = case.get_value("RUN_TYPE")
69 continue_run = case.get_value("CONTINUE_RUN")
70
71 # We do not fully populate the inputdata directory on every
72 # machine and do not expect every user to download the 3TB+ of
73 # data in our inputdata repository. This code checks for the
74 # existence of inputdata in the local inputdata directory and
75 # attempts to download data from the server if it's needed and
76 # missing.
77 if get_refcase and run_type != "startup" and not continue_run:
78 din_loc_root = case.get_value("DIN_LOC_ROOT")
79 run_refdate = case.get_value("RUN_REFDATE")
80 run_refcase = case.get_value("RUN_REFCASE")
81 run_refdir = case.get_value("RUN_REFDIR")
82 rundir = case.get_value("RUNDIR")
83
84 refdir = os.path.join(din_loc_root, run_refdir, run_refcase, run_refdate)
85 expect(os.path.isdir(refdir),
86 """
87 *****************************************************************
88 prestage ERROR: $refdir is not on local disk
89 obtain this data from the svn input data repository
90 > mkdir -p %s
91 > cd %s
92 > cd ..
93 > svn export --force https://svn-ccsm-inputdata.cgd.ucar.edu/trunk/inputdata/%s
94 or set GET_REFCASE to FALSE in env_run.xml
95 and prestage the restart data to $RUNDIR manually
96 *****************************************************************""" % (refdir, refdir, refdir))
97
98 logger.info(" - Prestaging REFCASE (%s) to %s" % (refdir, rundir))
99
100 # prestage the reference case's files.
101
102 if (not os.path.exists(rundir)):
103 logger.debug("Creating run directory: %s"%rundir)
104 os.makedirs(rundir)
105
106 for rcfile in glob.iglob(os.path.join(refdir,"*%s*"%run_refcase)):
107 logger.debug("Staging file %s"%rcfile)
108 rcbaseline = os.path.basename(rcfile)
109 if not os.path.exists("%s/%s" % (rundir, rcbaseline)):
110 os.symlink(rcfile, "%s/%s" % ((rundir, rcbaseline)))
111
112 # copy the refcases' rpointer files to the run directory
113 for rpointerfile in glob.iglob(os.path.join("%s","*rpointer*") % (refdir)):
114 logger.debug("Copy rpointer %s"%rpointerfile)
115 shutil.copy(rpointerfile, rundir)
116
117
118 for cam2file in glob.iglob(os.path.join("%s","*.cam2.*") % rundir):
119 camfile = cam2file.replace("cam2", "cam")
120 os.symlink(cam2file, camfile)
121
122 def check_input_data(case, svn_loc=None, input_data_root=None, data_list_dir="Buildconf", download=False):
123 """
124 Return True if no files missing
125 """
126 # Fill in defaults as needed
127 svn_loc = SVN_LOCS[get_model()] if svn_loc is None else svn_loc
128 input_data_root = case.get_value("DIN_LOC_ROOT") if input_data_root is None else input_data_root
129
130 expect(os.path.isdir(input_data_root), "Invalid input_data_root directory: '%s'" % input_data_root)
131 expect(os.path.isdir(data_list_dir), "Invalid data_list_dir directory: '%s'" % data_list_dir)
132
133 data_list_files = find_files(data_list_dir, "*.input_data_list")
134 expect(data_list_files, "No .input_data_list files found in dir '%s'" % data_list_dir)
135
136 no_files_missing = True
137 for data_list_file in data_list_files:
138 logging.info("Loading input file: '%s'" % data_list_file)
139 with open(data_list_file, "r") as fd:
140 lines = fd.readlines()
141
142 for line in lines:
143 line = line.strip()
144 if (line and not line.startswith("#")):
145 tokens = line.split('=')
146 description, full_path = tokens[0].strip(), tokens[1].strip()
147 if(full_path):
148 # expand xml variables
149 full_path = case.get_resolved_value(full_path)
150 rel_path = full_path.replace(input_data_root, "")
151
152 # There are some special values of rel_path that
153 # we need to ignore - some of the component models
154 # set things like 'NULL' or 'same_as_TS' -
155 # basically if rel_path does not contain '/' (a
156 # directory tree) you can assume it's a special
157 # value and ignore it (perhaps with a warning)
158 if ("/" in rel_path and not os.path.exists(full_path)):
159 model = os.path.basename(data_list_file).split('.')[0]
160 logging.warning("Model %s missing file %s = '%s'" % (model,description,full_path))
161
162 if (download):
163 success = download_if_in_repo(svn_loc, input_data_root, rel_path)
164 if (not success):
165 # If ACME, try CESM repo as backup
166 if (get_model() == "acme" and svn_loc != SVN_LOCS["cesm"]):
167 success = download_if_in_repo(SVN_LOCS["cesm"], input_data_root, rel_path)
168 if (not success):
169 no_files_missing = False
170 else:
171 no_files_missing = False
172 # if not download
173 else:
174 no_files_missing = False
175 else:
176 logging.debug("Already had input file: '%s'" % full_path)
177
178 else:
179 model = os.path.basename(data_list_file).split('.')[0]
180 logging.warning("Model %s no file specified for %s"%(model,description))
181
182 return no_files_missing
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/utils/python/CIME/check_input_data.py b/utils/python/CIME/check_input_data.py
--- a/utils/python/CIME/check_input_data.py
+++ b/utils/python/CIME/check_input_data.py
@@ -148,32 +148,45 @@
# expand xml variables
full_path = case.get_resolved_value(full_path)
rel_path = full_path.replace(input_data_root, "")
+ model = os.path.basename(data_list_file).split('.')[0]
- # There are some special values of rel_path that
- # we need to ignore - some of the component models
- # set things like 'NULL' or 'same_as_TS' -
- # basically if rel_path does not contain '/' (a
- # directory tree) you can assume it's a special
- # value and ignore it (perhaps with a warning)
- if ("/" in rel_path and not os.path.exists(full_path)):
- model = os.path.basename(data_list_file).split('.')[0]
- logging.warning("Model %s missing file %s = '%s'" % (model,description,full_path))
-
- if (download):
- success = download_if_in_repo(svn_loc, input_data_root, rel_path)
- if (not success):
- # If ACME, try CESM repo as backup
- if (get_model() == "acme" and svn_loc != SVN_LOCS["cesm"]):
- success = download_if_in_repo(SVN_LOCS["cesm"], input_data_root, rel_path)
- if (not success):
- no_files_missing = False
- else:
- no_files_missing = False
- # if not download
- else:
+ if ("/" in rel_path and rel_path == full_path):
+ # User pointing to a file outside of input_data_root, we cannot determine
+ # rel_path, and so cannot download the file. If it already exists, we can
+ # proceed
+ if not os.path.exists(full_path):
+ logging.warning(" Model %s missing file %s = '%s'" % (model, description, full_path))
+ if download:
+ logging.warning(" Cannot download file since it lives outside of the input_data_root '%s'" % input_data_root)
no_files_missing = False
+ else:
+ logging.info(" Found input file: '%s'" % full_path)
+
else:
- logging.debug("Already had input file: '%s'" % full_path)
+ # There are some special values of rel_path that
+ # we need to ignore - some of the component models
+ # set things like 'NULL' or 'same_as_TS' -
+ # basically if rel_path does not contain '/' (a
+ # directory tree) you can assume it's a special
+ # value and ignore it (perhaps with a warning)
+ if ("/" in rel_path and not os.path.exists(full_path)):
+ logging.warning(" Model %s missing file %s = '%s'" % (model,description,full_path))
+
+ if (download):
+ success = download_if_in_repo(svn_loc, input_data_root, rel_path)
+ if (not success):
+ # If ACME, try CESM repo as backup
+ if (get_model() == "acme" and svn_loc != SVN_LOCS["cesm"]):
+ success = download_if_in_repo(SVN_LOCS["cesm"], input_data_root, rel_path)
+ if (not success):
+ no_files_missing = False
+ else:
+ no_files_missing = False
+ # if not download
+ else:
+ no_files_missing = False
+ else:
+ logging.info(" Already had input file: '%s'" % full_path)
else:
model = os.path.basename(data_list_file).split('.')[0]
| {"golden_diff": "diff --git a/utils/python/CIME/check_input_data.py b/utils/python/CIME/check_input_data.py\n--- a/utils/python/CIME/check_input_data.py\n+++ b/utils/python/CIME/check_input_data.py\n@@ -148,32 +148,45 @@\n # expand xml variables\n full_path = case.get_resolved_value(full_path)\n rel_path = full_path.replace(input_data_root, \"\")\n+ model = os.path.basename(data_list_file).split('.')[0]\n \n- # There are some special values of rel_path that\n- # we need to ignore - some of the component models\n- # set things like 'NULL' or 'same_as_TS' -\n- # basically if rel_path does not contain '/' (a\n- # directory tree) you can assume it's a special\n- # value and ignore it (perhaps with a warning)\n- if (\"/\" in rel_path and not os.path.exists(full_path)):\n- model = os.path.basename(data_list_file).split('.')[0]\n- logging.warning(\"Model %s missing file %s = '%s'\" % (model,description,full_path))\n-\n- if (download):\n- success = download_if_in_repo(svn_loc, input_data_root, rel_path)\n- if (not success):\n- # If ACME, try CESM repo as backup\n- if (get_model() == \"acme\" and svn_loc != SVN_LOCS[\"cesm\"]):\n- success = download_if_in_repo(SVN_LOCS[\"cesm\"], input_data_root, rel_path)\n- if (not success):\n- no_files_missing = False\n- else:\n- no_files_missing = False\n- # if not download\n- else:\n+ if (\"/\" in rel_path and rel_path == full_path):\n+ # User pointing to a file outside of input_data_root, we cannot determine\n+ # rel_path, and so cannot download the file. If it already exists, we can\n+ # proceed\n+ if not os.path.exists(full_path):\n+ logging.warning(\" Model %s missing file %s = '%s'\" % (model, description, full_path))\n+ if download:\n+ logging.warning(\" Cannot download file since it lives outside of the input_data_root '%s'\" % input_data_root)\n no_files_missing = False\n+ else:\n+ logging.info(\" Found input file: '%s'\" % full_path)\n+\n else:\n- logging.debug(\"Already had input file: '%s'\" % full_path)\n+ # There are some special values of rel_path that\n+ # we need to ignore - some of the component models\n+ # set things like 'NULL' or 'same_as_TS' -\n+ # basically if rel_path does not contain '/' (a\n+ # directory tree) you can assume it's a special\n+ # value and ignore it (perhaps with a warning)\n+ if (\"/\" in rel_path and not os.path.exists(full_path)):\n+ logging.warning(\" Model %s missing file %s = '%s'\" % (model,description,full_path))\n+\n+ if (download):\n+ success = download_if_in_repo(svn_loc, input_data_root, rel_path)\n+ if (not success):\n+ # If ACME, try CESM repo as backup\n+ if (get_model() == \"acme\" and svn_loc != SVN_LOCS[\"cesm\"]):\n+ success = download_if_in_repo(SVN_LOCS[\"cesm\"], input_data_root, rel_path)\n+ if (not success):\n+ no_files_missing = False\n+ else:\n+ no_files_missing = False\n+ # if not download\n+ else:\n+ no_files_missing = False\n+ else:\n+ logging.info(\" Already had input file: '%s'\" % full_path)\n \n else:\n model = os.path.basename(data_list_file).split('.')[0]\n", "issue": "check_input_data not supporting input files outside of input data repo\n@erichlf has asked for support for the following use case:\r\n* Create a partition file in his home directory\r\n* Point namelist files to use this partition file\r\n* check_input_data should find this file but it doesn't because it insists on looking in the input data repo\n", "before_files": [{"content": "\"\"\"\nAPI for checking input for testcase\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.utils import get_model, 
SharedArea\n\nimport fnmatch, glob, shutil\n\nlogger = logging.getLogger(__name__)\n\n# Should probably be in XML somewhere\nSVN_LOCS = {\n \"acme\" : \"https://acme-svn2.ornl.gov/acme-repo/acme/inputdata\",\n \"cesm\" : \"https://svn-ccsm-inputdata.cgd.ucar.edu/trunk/inputdata\"\n}\n\ndef find_files(rootdir, pattern):\n \"\"\"\n recursively find all files matching a pattern\n \"\"\"\n result = []\n for root, _, files in os.walk(rootdir):\n for filename in files:\n if (fnmatch.fnmatch(filename, pattern)):\n result.append(os.path.join(root, filename))\n\n return result\n\ndef download_if_in_repo(svn_loc, input_data_root, rel_path):\n \"\"\"\n Return True if successfully downloaded\n \"\"\"\n rel_path = rel_path.strip('/')\n full_url = os.path.join(svn_loc, rel_path)\n\n full_path = os.path.join(input_data_root, rel_path)\n logging.info(\"Trying to download file: '%s' to path '%s'\" % (full_url, full_path))\n # Make sure local path exists, create if it does not\n if(not os.path.exists(os.path.dirname(full_path))):\n os.makedirs(os.path.dirname(full_path))\n\n stat, out, err = run_cmd(\"svn --non-interactive --trust-server-cert ls %s\" % full_url)\n if (stat != 0):\n logging.warning(\"FAIL: SVN repo '%s' does not have file '%s'\\nReason:%s\\n%s\\n\" % (svn_loc, full_url, out, err))\n return False\n else:\n # Use umask to make sure files are group read/writable. As long as parent directories\n # have +s, then everything should work.\n with SharedArea():\n stat, output, errput = \\\n run_cmd(\"svn --non-interactive --trust-server-cert export %s %s\" % (full_url, full_path))\n if (stat != 0):\n logging.warning(\"svn export failed with output: %s and errput %s\\n\" % (output, errput))\n return False\n else:\n logging.info(\"SUCCESS\\n\")\n return True\n\n###############################################################################\ndef check_all_input_data(case):\n###############################################################################\n\n success = check_input_data(case=case, download=True)\n expect(success, \"Failed to download input data\")\n\n get_refcase = case.get_value(\"GET_REFCASE\")\n run_type = case.get_value(\"RUN_TYPE\")\n continue_run = case.get_value(\"CONTINUE_RUN\")\n\n # We do not fully populate the inputdata directory on every\n # machine and do not expect every user to download the 3TB+ of\n # data in our inputdata repository. 
This code checks for the\n # existence of inputdata in the local inputdata directory and\n # attempts to download data from the server if it's needed and\n # missing.\n if get_refcase and run_type != \"startup\" and not continue_run:\n din_loc_root = case.get_value(\"DIN_LOC_ROOT\")\n run_refdate = case.get_value(\"RUN_REFDATE\")\n run_refcase = case.get_value(\"RUN_REFCASE\")\n run_refdir = case.get_value(\"RUN_REFDIR\")\n rundir = case.get_value(\"RUNDIR\")\n\n refdir = os.path.join(din_loc_root, run_refdir, run_refcase, run_refdate)\n expect(os.path.isdir(refdir),\n\"\"\"\n*****************************************************************\nprestage ERROR: $refdir is not on local disk\nobtain this data from the svn input data repository\n> mkdir -p %s\n> cd %s\n> cd ..\n> svn export --force https://svn-ccsm-inputdata.cgd.ucar.edu/trunk/inputdata/%s\nor set GET_REFCASE to FALSE in env_run.xml\nand prestage the restart data to $RUNDIR manually\n*****************************************************************\"\"\" % (refdir, refdir, refdir))\n\n logger.info(\" - Prestaging REFCASE (%s) to %s\" % (refdir, rundir))\n\n # prestage the reference case's files.\n\n if (not os.path.exists(rundir)):\n logger.debug(\"Creating run directory: %s\"%rundir)\n os.makedirs(rundir)\n\n for rcfile in glob.iglob(os.path.join(refdir,\"*%s*\"%run_refcase)):\n logger.debug(\"Staging file %s\"%rcfile)\n rcbaseline = os.path.basename(rcfile)\n if not os.path.exists(\"%s/%s\" % (rundir, rcbaseline)):\n os.symlink(rcfile, \"%s/%s\" % ((rundir, rcbaseline)))\n\n # copy the refcases' rpointer files to the run directory\n for rpointerfile in glob.iglob(os.path.join(\"%s\",\"*rpointer*\") % (refdir)):\n logger.debug(\"Copy rpointer %s\"%rpointerfile)\n shutil.copy(rpointerfile, rundir)\n\n\n for cam2file in glob.iglob(os.path.join(\"%s\",\"*.cam2.*\") % rundir):\n camfile = cam2file.replace(\"cam2\", \"cam\")\n os.symlink(cam2file, camfile)\n\ndef check_input_data(case, svn_loc=None, input_data_root=None, data_list_dir=\"Buildconf\", download=False):\n \"\"\"\n Return True if no files missing\n \"\"\"\n # Fill in defaults as needed\n svn_loc = SVN_LOCS[get_model()] if svn_loc is None else svn_loc\n input_data_root = case.get_value(\"DIN_LOC_ROOT\") if input_data_root is None else input_data_root\n\n expect(os.path.isdir(input_data_root), \"Invalid input_data_root directory: '%s'\" % input_data_root)\n expect(os.path.isdir(data_list_dir), \"Invalid data_list_dir directory: '%s'\" % data_list_dir)\n\n data_list_files = find_files(data_list_dir, \"*.input_data_list\")\n expect(data_list_files, \"No .input_data_list files found in dir '%s'\" % data_list_dir)\n\n no_files_missing = True\n for data_list_file in data_list_files:\n logging.info(\"Loading input file: '%s'\" % data_list_file)\n with open(data_list_file, \"r\") as fd:\n lines = fd.readlines()\n\n for line in lines:\n line = line.strip()\n if (line and not line.startswith(\"#\")):\n tokens = line.split('=')\n description, full_path = tokens[0].strip(), tokens[1].strip()\n if(full_path):\n # expand xml variables\n full_path = case.get_resolved_value(full_path)\n rel_path = full_path.replace(input_data_root, \"\")\n\n # There are some special values of rel_path that\n # we need to ignore - some of the component models\n # set things like 'NULL' or 'same_as_TS' -\n # basically if rel_path does not contain '/' (a\n # directory tree) you can assume it's a special\n # value and ignore it (perhaps with a warning)\n if (\"/\" in rel_path and not 
os.path.exists(full_path)):\n model = os.path.basename(data_list_file).split('.')[0]\n logging.warning(\"Model %s missing file %s = '%s'\" % (model,description,full_path))\n\n if (download):\n success = download_if_in_repo(svn_loc, input_data_root, rel_path)\n if (not success):\n # If ACME, try CESM repo as backup\n if (get_model() == \"acme\" and svn_loc != SVN_LOCS[\"cesm\"]):\n success = download_if_in_repo(SVN_LOCS[\"cesm\"], input_data_root, rel_path)\n if (not success):\n no_files_missing = False\n else:\n no_files_missing = False\n # if not download\n else:\n no_files_missing = False\n else:\n logging.debug(\"Already had input file: '%s'\" % full_path)\n\n else:\n model = os.path.basename(data_list_file).split('.')[0]\n logging.warning(\"Model %s no file specified for %s\"%(model,description))\n\n return no_files_missing\n", "path": "utils/python/CIME/check_input_data.py"}], "after_files": [{"content": "\"\"\"\nAPI for checking input for testcase\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.utils import get_model, SharedArea\n\nimport fnmatch, glob, shutil\n\nlogger = logging.getLogger(__name__)\n\n# Should probably be in XML somewhere\nSVN_LOCS = {\n \"acme\" : \"https://acme-svn2.ornl.gov/acme-repo/acme/inputdata\",\n \"cesm\" : \"https://svn-ccsm-inputdata.cgd.ucar.edu/trunk/inputdata\"\n}\n\ndef find_files(rootdir, pattern):\n \"\"\"\n recursively find all files matching a pattern\n \"\"\"\n result = []\n for root, _, files in os.walk(rootdir):\n for filename in files:\n if (fnmatch.fnmatch(filename, pattern)):\n result.append(os.path.join(root, filename))\n\n return result\n\ndef download_if_in_repo(svn_loc, input_data_root, rel_path):\n \"\"\"\n Return True if successfully downloaded\n \"\"\"\n rel_path = rel_path.strip('/')\n full_url = os.path.join(svn_loc, rel_path)\n\n full_path = os.path.join(input_data_root, rel_path)\n logging.info(\"Trying to download file: '%s' to path '%s'\" % (full_url, full_path))\n # Make sure local path exists, create if it does not\n if(not os.path.exists(os.path.dirname(full_path))):\n os.makedirs(os.path.dirname(full_path))\n\n stat, out, err = run_cmd(\"svn --non-interactive --trust-server-cert ls %s\" % full_url)\n if (stat != 0):\n logging.warning(\"FAIL: SVN repo '%s' does not have file '%s'\\nReason:%s\\n%s\\n\" % (svn_loc, full_url, out, err))\n return False\n else:\n # Use umask to make sure files are group read/writable. As long as parent directories\n # have +s, then everything should work.\n with SharedArea():\n stat, output, errput = \\\n run_cmd(\"svn --non-interactive --trust-server-cert export %s %s\" % (full_url, full_path))\n if (stat != 0):\n logging.warning(\"svn export failed with output: %s and errput %s\\n\" % (output, errput))\n return False\n else:\n logging.info(\"SUCCESS\\n\")\n return True\n\n###############################################################################\ndef check_all_input_data(case):\n###############################################################################\n\n success = check_input_data(case=case, download=True)\n expect(success, \"Failed to download input data\")\n\n get_refcase = case.get_value(\"GET_REFCASE\")\n run_type = case.get_value(\"RUN_TYPE\")\n continue_run = case.get_value(\"CONTINUE_RUN\")\n\n # We do not fully populate the inputdata directory on every\n # machine and do not expect every user to download the 3TB+ of\n # data in our inputdata repository. 
This code checks for the\n # existence of inputdata in the local inputdata directory and\n # attempts to download data from the server if it's needed and\n # missing.\n if get_refcase and run_type != \"startup\" and not continue_run:\n din_loc_root = case.get_value(\"DIN_LOC_ROOT\")\n run_refdate = case.get_value(\"RUN_REFDATE\")\n run_refcase = case.get_value(\"RUN_REFCASE\")\n run_refdir = case.get_value(\"RUN_REFDIR\")\n rundir = case.get_value(\"RUNDIR\")\n\n refdir = os.path.join(din_loc_root, run_refdir, run_refcase, run_refdate)\n expect(os.path.isdir(refdir),\n\"\"\"\n*****************************************************************\nprestage ERROR: $refdir is not on local disk\nobtain this data from the svn input data repository\n> mkdir -p %s\n> cd %s\n> cd ..\n> svn export --force https://svn-ccsm-inputdata.cgd.ucar.edu/trunk/inputdata/%s\nor set GET_REFCASE to FALSE in env_run.xml\nand prestage the restart data to $RUNDIR manually\n*****************************************************************\"\"\" % (refdir, refdir, refdir))\n\n logger.info(\" - Prestaging REFCASE (%s) to %s\" % (refdir, rundir))\n\n # prestage the reference case's files.\n\n if (not os.path.exists(rundir)):\n logger.debug(\"Creating run directory: %s\"%rundir)\n os.makedirs(rundir)\n\n for rcfile in glob.iglob(os.path.join(refdir,\"*%s*\"%run_refcase)):\n logger.debug(\"Staging file %s\"%rcfile)\n rcbaseline = os.path.basename(rcfile)\n if not os.path.exists(\"%s/%s\" % (rundir, rcbaseline)):\n os.symlink(rcfile, \"%s/%s\" % ((rundir, rcbaseline)))\n\n # copy the refcases' rpointer files to the run directory\n for rpointerfile in glob.iglob(os.path.join(\"%s\",\"*rpointer*\") % (refdir)):\n logger.debug(\"Copy rpointer %s\"%rpointerfile)\n shutil.copy(rpointerfile, rundir)\n\n\n for cam2file in glob.iglob(os.path.join(\"%s\",\"*.cam2.*\") % rundir):\n camfile = cam2file.replace(\"cam2\", \"cam\")\n os.symlink(cam2file, camfile)\n\ndef check_input_data(case, svn_loc=None, input_data_root=None, data_list_dir=\"Buildconf\", download=False):\n \"\"\"\n Return True if no files missing\n \"\"\"\n # Fill in defaults as needed\n svn_loc = SVN_LOCS[get_model()] if svn_loc is None else svn_loc\n input_data_root = case.get_value(\"DIN_LOC_ROOT\") if input_data_root is None else input_data_root\n\n expect(os.path.isdir(input_data_root), \"Invalid input_data_root directory: '%s'\" % input_data_root)\n expect(os.path.isdir(data_list_dir), \"Invalid data_list_dir directory: '%s'\" % data_list_dir)\n\n data_list_files = find_files(data_list_dir, \"*.input_data_list\")\n expect(data_list_files, \"No .input_data_list files found in dir '%s'\" % data_list_dir)\n\n no_files_missing = True\n for data_list_file in data_list_files:\n logging.info(\"Loading input file: '%s'\" % data_list_file)\n with open(data_list_file, \"r\") as fd:\n lines = fd.readlines()\n\n for line in lines:\n line = line.strip()\n if (line and not line.startswith(\"#\")):\n tokens = line.split('=')\n description, full_path = tokens[0].strip(), tokens[1].strip()\n if(full_path):\n # expand xml variables\n full_path = case.get_resolved_value(full_path)\n rel_path = full_path.replace(input_data_root, \"\")\n model = os.path.basename(data_list_file).split('.')[0]\n\n if (\"/\" in rel_path and rel_path == full_path):\n # User pointing to a file outside of input_data_root, we cannot determine\n # rel_path, and so cannot download the file. 
If it already exists, we can\n # proceed\n if not os.path.exists(full_path):\n logging.warning(\" Model %s missing file %s = '%s'\" % (model, description, full_path))\n if download:\n logging.warning(\" Cannot download file since it lives outside of the input_data_root '%s'\" % input_data_root)\n no_files_missing = False\n else:\n logging.info(\" Found input file: '%s'\" % full_path)\n\n else:\n # There are some special values of rel_path that\n # we need to ignore - some of the component models\n # set things like 'NULL' or 'same_as_TS' -\n # basically if rel_path does not contain '/' (a\n # directory tree) you can assume it's a special\n # value and ignore it (perhaps with a warning)\n if (\"/\" in rel_path and not os.path.exists(full_path)):\n logging.warning(\" Model %s missing file %s = '%s'\" % (model,description,full_path))\n\n if (download):\n success = download_if_in_repo(svn_loc, input_data_root, rel_path)\n if (not success):\n # If ACME, try CESM repo as backup\n if (get_model() == \"acme\" and svn_loc != SVN_LOCS[\"cesm\"]):\n success = download_if_in_repo(SVN_LOCS[\"cesm\"], input_data_root, rel_path)\n if (not success):\n no_files_missing = False\n else:\n no_files_missing = False\n # if not download\n else:\n no_files_missing = False\n else:\n logging.info(\" Already had input file: '%s'\" % full_path)\n\n else:\n model = os.path.basename(data_list_file).split('.')[0]\n logging.warning(\"Model %s no file specified for %s\"%(model,description))\n\n return no_files_missing\n", "path": "utils/python/CIME/check_input_data.py"}]} | 2,593 | 860 |
gh_patches_debug_17031 | rasdani/github-patches | git_diff | paperless-ngx__paperless-ngx-680 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Mail rule filter attachment filename is case sensitive
### Description
The "Mail rules" interface says the Filter attachment filename should be case insensitive:
> Only consume documents which entirely match this filename if specified. Wildcards such as *.pdf or \*invoice\* are allowed. Case insensitive.
The latest revision of mail.py uses fnmatch, which follows the operating system's rules for case sensitivity:
https://github.com/paperless-ngx/paperless-ngx/blob/a728502988fe26da9aa2844916dcfcc2455d0be2/src/paperless_mail/mail.py#L286-L288
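For illustration (this snippet is an editorial addition with invented filenames), `fnmatch` compares whatever case `os.path.normcase` produces, which on Linux means a literal, case-sensitive match:

```python
from fnmatch import fnmatch

# Case-sensitive on Linux/macOS, where os.path.normcase is a no-op;
# both calls return True on Windows.
print(fnmatch("Invoice.PDF", "*.pdf"))                    # False on Linux
print(fnmatch("Invoice.PDF".lower(), "*.pdf".lower()))    # True everywhere
```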
Issue ported from https://github.com/jonaswinkler/paperless-ng/issues/1394
### Expected behavior
Mail attachment filenames should be matched case-insensitively against the "Filter attachment filename" pattern.
### Steps to reproduce
1. Set up paperless to consume from a mailbox
2. Set up a mail rule with a filter attachment filename of *.pdf
3. Send an email with an attachment whose filename ends in .PDF
### Webserver logs
_No response_
### Screenshots
_No response_
### Paperless-ngx version
1.6.0
### Host OS
Oracle Linux 8
### Installation method
Docker
### Browser
_No response_
### Configuration changes
_No response_
### Other
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/paperless_mail/mail.py`
Content:
```
1 import os
2 import tempfile
3 from datetime import date
4 from datetime import timedelta
5 from fnmatch import fnmatch
6
7 import magic
8 import pathvalidate
9 from django.conf import settings
10 from django.db import DatabaseError
11 from django_q.tasks import async_task
12 from documents.loggers import LoggingMixin
13 from documents.models import Correspondent
14 from documents.parsers import is_mime_type_supported
15 from imap_tools import AND
16 from imap_tools import MailBox
17 from imap_tools import MailboxFolderSelectError
18 from imap_tools import MailBoxUnencrypted
19 from imap_tools import MailMessage
20 from imap_tools import MailMessageFlags
21 from paperless_mail.models import MailAccount
22 from paperless_mail.models import MailRule
23
24
25 class MailError(Exception):
26 pass
27
28
29 class BaseMailAction:
30 def get_criteria(self):
31 return {}
32
33 def post_consume(self, M, message_uids, parameter):
34 pass # pragma: nocover
35
36
37 class DeleteMailAction(BaseMailAction):
38 def post_consume(self, M, message_uids, parameter):
39 M.delete(message_uids)
40
41
42 class MarkReadMailAction(BaseMailAction):
43 def get_criteria(self):
44 return {"seen": False}
45
46 def post_consume(self, M, message_uids, parameter):
47 M.flag(message_uids, [MailMessageFlags.SEEN], True)
48
49
50 class MoveMailAction(BaseMailAction):
51 def post_consume(self, M, message_uids, parameter):
52 M.move(message_uids, parameter)
53
54
55 class FlagMailAction(BaseMailAction):
56 def get_criteria(self):
57 return {"flagged": False}
58
59 def post_consume(self, M, message_uids, parameter):
60 M.flag(message_uids, [MailMessageFlags.FLAGGED], True)
61
62
63 def get_rule_action(rule):
64 if rule.action == MailRule.ACTION_FLAG:
65 return FlagMailAction()
66 elif rule.action == MailRule.ACTION_DELETE:
67 return DeleteMailAction()
68 elif rule.action == MailRule.ACTION_MOVE:
69 return MoveMailAction()
70 elif rule.action == MailRule.ACTION_MARK_READ:
71 return MarkReadMailAction()
72 else:
73 raise NotImplementedError("Unknown action.") # pragma: nocover
74
75
76 def make_criterias(rule):
77 maximum_age = date.today() - timedelta(days=rule.maximum_age)
78 criterias = {}
79 if rule.maximum_age > 0:
80 criterias["date_gte"] = maximum_age
81 if rule.filter_from:
82 criterias["from_"] = rule.filter_from
83 if rule.filter_subject:
84 criterias["subject"] = rule.filter_subject
85 if rule.filter_body:
86 criterias["body"] = rule.filter_body
87
88 return {**criterias, **get_rule_action(rule).get_criteria()}
89
90
91 def get_mailbox(server, port, security):
92 if security == MailAccount.IMAP_SECURITY_NONE:
93 mailbox = MailBoxUnencrypted(server, port)
94 elif security == MailAccount.IMAP_SECURITY_STARTTLS:
95 mailbox = MailBox(server, port, starttls=True)
96 elif security == MailAccount.IMAP_SECURITY_SSL:
97 mailbox = MailBox(server, port)
98 else:
99 raise NotImplementedError("Unknown IMAP security") # pragma: nocover
100 return mailbox
101
102
103 class MailAccountHandler(LoggingMixin):
104
105 logging_name = "paperless_mail"
106
107 def _correspondent_from_name(self, name):
108 try:
109 return Correspondent.objects.get_or_create(name=name)[0]
110 except DatabaseError as e:
111 self.log("error", f"Error while retrieving correspondent {name}: {e}")
112 return None
113
114 def get_title(self, message, att, rule):
115 if rule.assign_title_from == MailRule.TITLE_FROM_SUBJECT:
116 return message.subject
117
118 elif rule.assign_title_from == MailRule.TITLE_FROM_FILENAME:
119 return os.path.splitext(os.path.basename(att.filename))[0]
120
121 else:
122 raise NotImplementedError(
123 "Unknown title selector.",
124 ) # pragma: nocover
125
126 def get_correspondent(self, message: MailMessage, rule):
127 c_from = rule.assign_correspondent_from
128
129 if c_from == MailRule.CORRESPONDENT_FROM_NOTHING:
130 return None
131
132 elif c_from == MailRule.CORRESPONDENT_FROM_EMAIL:
133 return self._correspondent_from_name(message.from_)
134
135 elif c_from == MailRule.CORRESPONDENT_FROM_NAME:
136 from_values = message.from_values
137 if from_values is not None and len(from_values.name) > 0:
138 return self._correspondent_from_name(from_values.name)
139 else:
140 return self._correspondent_from_name(message.from_)
141
142 elif c_from == MailRule.CORRESPONDENT_FROM_CUSTOM:
143 return rule.assign_correspondent
144
145 else:
146 raise NotImplementedError(
147 "Unknwown correspondent selector",
148 ) # pragma: nocover
149
150 def handle_mail_account(self, account):
151
152 self.renew_logging_group()
153
154 self.log("debug", f"Processing mail account {account}")
155
156 total_processed_files = 0
157
158 with get_mailbox(
159 account.imap_server,
160 account.imap_port,
161 account.imap_security,
162 ) as M:
163
164 try:
165 M.login(account.username, account.password)
166 except Exception:
167 raise MailError(f"Error while authenticating account {account}")
168
169 self.log(
170 "debug",
171 f"Account {account}: Processing " f"{account.rules.count()} rule(s)",
172 )
173
174 for rule in account.rules.order_by("order"):
175 try:
176 total_processed_files += self.handle_mail_rule(M, rule)
177 except Exception as e:
178 self.log(
179 "error",
180 f"Rule {rule}: Error while processing rule: {e}",
181 exc_info=True,
182 )
183
184 return total_processed_files
185
186 def handle_mail_rule(self, M, rule):
187
188 self.log("debug", f"Rule {rule}: Selecting folder {rule.folder}")
189
190 try:
191 M.folder.set(rule.folder)
192 except MailboxFolderSelectError:
193 raise MailError(
194 f"Rule {rule}: Folder {rule.folder} "
195 f"does not exist in account {rule.account}",
196 )
197
198 criterias = make_criterias(rule)
199
200 self.log(
201 "debug",
202 f"Rule {rule}: Searching folder with criteria " f"{str(AND(**criterias))}",
203 )
204
205 try:
206 messages = M.fetch(
207 criteria=AND(**criterias),
208 mark_seen=False,
209 charset=rule.account.character_set,
210 )
211 except Exception:
212 raise MailError(f"Rule {rule}: Error while fetching folder {rule.folder}")
213
214 post_consume_messages = []
215
216 mails_processed = 0
217 total_processed_files = 0
218
219 for message in messages:
220 try:
221 processed_files = self.handle_message(message, rule)
222 if processed_files > 0:
223 post_consume_messages.append(message.uid)
224
225 total_processed_files += processed_files
226 mails_processed += 1
227 except Exception as e:
228 self.log(
229 "error",
230 f"Rule {rule}: Error while processing mail " f"{message.uid}: {e}",
231 exc_info=True,
232 )
233
234 self.log("debug", f"Rule {rule}: Processed {mails_processed} matching mail(s)")
235
236 self.log(
237 "debug",
238 f"Rule {rule}: Running mail actions on "
239 f"{len(post_consume_messages)} mails",
240 )
241
242 try:
243 get_rule_action(rule).post_consume(
244 M,
245 post_consume_messages,
246 rule.action_parameter,
247 )
248
249 except Exception as e:
250 raise MailError(
251 f"Rule {rule}: Error while processing post-consume actions: " f"{e}",
252 )
253
254 return total_processed_files
255
256 def handle_message(self, message, rule):
257 if not message.attachments:
258 return 0
259
260 self.log(
261 "debug",
262 f"Rule {rule}: "
263 f"Processing mail {message.subject} from {message.from_} with "
264 f"{len(message.attachments)} attachment(s)",
265 )
266
267 correspondent = self.get_correspondent(message, rule)
268 tag = rule.assign_tag
269 doc_type = rule.assign_document_type
270
271 processed_attachments = 0
272
273 for att in message.attachments:
274
275 if (
276 not att.content_disposition == "attachment"
277 and rule.attachment_type == MailRule.ATTACHMENT_TYPE_ATTACHMENTS_ONLY
278 ):
279 self.log(
280 "debug",
281 f"Rule {rule}: "
282 f"Skipping attachment {att.filename} "
283 f"with content disposition {att.content_disposition}",
284 )
285 continue
286
287 if rule.filter_attachment_filename:
288 if not fnmatch(att.filename, rule.filter_attachment_filename):
289 continue
290
291 title = self.get_title(message, att, rule)
292
293 # don't trust the content type of the attachment. Could be
294 # generic application/octet-stream.
295 mime_type = magic.from_buffer(att.payload, mime=True)
296
297 if is_mime_type_supported(mime_type):
298
299 os.makedirs(settings.SCRATCH_DIR, exist_ok=True)
300 _, temp_filename = tempfile.mkstemp(
301 prefix="paperless-mail-",
302 dir=settings.SCRATCH_DIR,
303 )
304 with open(temp_filename, "wb") as f:
305 f.write(att.payload)
306
307 self.log(
308 "info",
309 f"Rule {rule}: "
310 f"Consuming attachment {att.filename} from mail "
311 f"{message.subject} from {message.from_}",
312 )
313
314 async_task(
315 "documents.tasks.consume_file",
316 path=temp_filename,
317 override_filename=pathvalidate.sanitize_filename(
318 att.filename,
319 ),
320 override_title=title,
321 override_correspondent_id=correspondent.id
322 if correspondent
323 else None,
324 override_document_type_id=doc_type.id if doc_type else None,
325 override_tag_ids=[tag.id] if tag else None,
326 task_name=att.filename[:100],
327 )
328
329 processed_attachments += 1
330 else:
331 self.log(
332 "debug",
333 f"Rule {rule}: "
334 f"Skipping attachment {att.filename} "
335 f"since guessed mime type {mime_type} is not supported "
336 f"by paperless",
337 )
338
339 return processed_attachments
340
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/paperless_mail/mail.py b/src/paperless_mail/mail.py
--- a/src/paperless_mail/mail.py
+++ b/src/paperless_mail/mail.py
@@ -253,7 +253,7 @@
return total_processed_files
- def handle_message(self, message, rule):
+ def handle_message(self, message, rule) -> int:
if not message.attachments:
return 0
@@ -285,7 +285,12 @@
continue
if rule.filter_attachment_filename:
- if not fnmatch(att.filename, rule.filter_attachment_filename):
+ # Force the filename and pattern to the lowercase
+ # as this is system dependent otherwise
+ if not fnmatch(
+ att.filename.lower(),
+ rule.filter_attachment_filename.lower(),
+ ):
continue
title = self.get_title(message, att, rule)
| {"golden_diff": "diff --git a/src/paperless_mail/mail.py b/src/paperless_mail/mail.py\n--- a/src/paperless_mail/mail.py\n+++ b/src/paperless_mail/mail.py\n@@ -253,7 +253,7 @@\n \n return total_processed_files\n \n- def handle_message(self, message, rule):\n+ def handle_message(self, message, rule) -> int:\n if not message.attachments:\n return 0\n \n@@ -285,7 +285,12 @@\n continue\n \n if rule.filter_attachment_filename:\n- if not fnmatch(att.filename, rule.filter_attachment_filename):\n+ # Force the filename and pattern to the lowercase\n+ # as this is system dependent otherwise\n+ if not fnmatch(\n+ att.filename.lower(),\n+ rule.filter_attachment_filename.lower(),\n+ ):\n continue\n \n title = self.get_title(message, att, rule)\n", "issue": "[BUG] Mail rule filter attachment filename is case sensitive\n### Description\r\n\r\nThe \"Mail rules\" interface says the Filter attachment filename should be case insensitive:\r\n\r\n> Only consume documents which entirely match this filename if specified. Wildcards such as *.pdf or \\*invoice\\* are allowed. Case insensitive.\r\n\r\nThe latest revision of mail.py uses fnmatch, which follows the operating system's rules for case sensitivity:\r\n\r\nhttps://github.com/paperless-ngx/paperless-ngx/blob/a728502988fe26da9aa2844916dcfcc2455d0be2/src/paperless_mail/mail.py#L286-L288\r\n\r\nIssue ported from https://github.com/jonaswinkler/paperless-ng/issues/1394\r\n\r\n### Expected behavior\r\n\r\nMail attachments should be matched case insensitive to the Filter attachment filename.\r\n\r\n### Steps to reproduce\r\n\r\n1. Set up paperless to consume from a mailbox\r\n2. Set up a mail rule with a filter attachment filename of *.pdf\r\n3. Send an attachment ending in *.PDF\r\n\r\n### Webserver logs\r\n\r\n_No response_\r\n\r\n### Screenshots\r\n\r\n_No response_\r\n\r\n### Paperless-ngx version\r\n\r\n1.6.0\r\n\r\n### Host OS\r\n\r\nOracle Linux 8\r\n\r\n### Installation method\r\n\r\nDocker\r\n\r\n### Browser\r\n\r\n_No response_\r\n\r\n### Configuration changes\r\n\r\n_No response_\r\n\r\n### Other\r\n\r\n_No response_\n", "before_files": [{"content": "import os\nimport tempfile\nfrom datetime import date\nfrom datetime import timedelta\nfrom fnmatch import fnmatch\n\nimport magic\nimport pathvalidate\nfrom django.conf import settings\nfrom django.db import DatabaseError\nfrom django_q.tasks import async_task\nfrom documents.loggers import LoggingMixin\nfrom documents.models import Correspondent\nfrom documents.parsers import is_mime_type_supported\nfrom imap_tools import AND\nfrom imap_tools import MailBox\nfrom imap_tools import MailboxFolderSelectError\nfrom imap_tools import MailBoxUnencrypted\nfrom imap_tools import MailMessage\nfrom imap_tools import MailMessageFlags\nfrom paperless_mail.models import MailAccount\nfrom paperless_mail.models import MailRule\n\n\nclass MailError(Exception):\n pass\n\n\nclass BaseMailAction:\n def get_criteria(self):\n return {}\n\n def post_consume(self, M, message_uids, parameter):\n pass # pragma: nocover\n\n\nclass DeleteMailAction(BaseMailAction):\n def post_consume(self, M, message_uids, parameter):\n M.delete(message_uids)\n\n\nclass MarkReadMailAction(BaseMailAction):\n def get_criteria(self):\n return {\"seen\": False}\n\n def post_consume(self, M, message_uids, parameter):\n M.flag(message_uids, [MailMessageFlags.SEEN], True)\n\n\nclass MoveMailAction(BaseMailAction):\n def post_consume(self, M, message_uids, parameter):\n M.move(message_uids, parameter)\n\n\nclass FlagMailAction(BaseMailAction):\n def 
get_criteria(self):\n return {\"flagged\": False}\n\n def post_consume(self, M, message_uids, parameter):\n M.flag(message_uids, [MailMessageFlags.FLAGGED], True)\n\n\ndef get_rule_action(rule):\n if rule.action == MailRule.ACTION_FLAG:\n return FlagMailAction()\n elif rule.action == MailRule.ACTION_DELETE:\n return DeleteMailAction()\n elif rule.action == MailRule.ACTION_MOVE:\n return MoveMailAction()\n elif rule.action == MailRule.ACTION_MARK_READ:\n return MarkReadMailAction()\n else:\n raise NotImplementedError(\"Unknown action.\") # pragma: nocover\n\n\ndef make_criterias(rule):\n maximum_age = date.today() - timedelta(days=rule.maximum_age)\n criterias = {}\n if rule.maximum_age > 0:\n criterias[\"date_gte\"] = maximum_age\n if rule.filter_from:\n criterias[\"from_\"] = rule.filter_from\n if rule.filter_subject:\n criterias[\"subject\"] = rule.filter_subject\n if rule.filter_body:\n criterias[\"body\"] = rule.filter_body\n\n return {**criterias, **get_rule_action(rule).get_criteria()}\n\n\ndef get_mailbox(server, port, security):\n if security == MailAccount.IMAP_SECURITY_NONE:\n mailbox = MailBoxUnencrypted(server, port)\n elif security == MailAccount.IMAP_SECURITY_STARTTLS:\n mailbox = MailBox(server, port, starttls=True)\n elif security == MailAccount.IMAP_SECURITY_SSL:\n mailbox = MailBox(server, port)\n else:\n raise NotImplementedError(\"Unknown IMAP security\") # pragma: nocover\n return mailbox\n\n\nclass MailAccountHandler(LoggingMixin):\n\n logging_name = \"paperless_mail\"\n\n def _correspondent_from_name(self, name):\n try:\n return Correspondent.objects.get_or_create(name=name)[0]\n except DatabaseError as e:\n self.log(\"error\", f\"Error while retrieving correspondent {name}: {e}\")\n return None\n\n def get_title(self, message, att, rule):\n if rule.assign_title_from == MailRule.TITLE_FROM_SUBJECT:\n return message.subject\n\n elif rule.assign_title_from == MailRule.TITLE_FROM_FILENAME:\n return os.path.splitext(os.path.basename(att.filename))[0]\n\n else:\n raise NotImplementedError(\n \"Unknown title selector.\",\n ) # pragma: nocover\n\n def get_correspondent(self, message: MailMessage, rule):\n c_from = rule.assign_correspondent_from\n\n if c_from == MailRule.CORRESPONDENT_FROM_NOTHING:\n return None\n\n elif c_from == MailRule.CORRESPONDENT_FROM_EMAIL:\n return self._correspondent_from_name(message.from_)\n\n elif c_from == MailRule.CORRESPONDENT_FROM_NAME:\n from_values = message.from_values\n if from_values is not None and len(from_values.name) > 0:\n return self._correspondent_from_name(from_values.name)\n else:\n return self._correspondent_from_name(message.from_)\n\n elif c_from == MailRule.CORRESPONDENT_FROM_CUSTOM:\n return rule.assign_correspondent\n\n else:\n raise NotImplementedError(\n \"Unknwown correspondent selector\",\n ) # pragma: nocover\n\n def handle_mail_account(self, account):\n\n self.renew_logging_group()\n\n self.log(\"debug\", f\"Processing mail account {account}\")\n\n total_processed_files = 0\n\n with get_mailbox(\n account.imap_server,\n account.imap_port,\n account.imap_security,\n ) as M:\n\n try:\n M.login(account.username, account.password)\n except Exception:\n raise MailError(f\"Error while authenticating account {account}\")\n\n self.log(\n \"debug\",\n f\"Account {account}: Processing \" f\"{account.rules.count()} rule(s)\",\n )\n\n for rule in account.rules.order_by(\"order\"):\n try:\n total_processed_files += self.handle_mail_rule(M, rule)\n except Exception as e:\n self.log(\n \"error\",\n f\"Rule {rule}: Error while 
processing rule: {e}\",\n exc_info=True,\n )\n\n return total_processed_files\n\n def handle_mail_rule(self, M, rule):\n\n self.log(\"debug\", f\"Rule {rule}: Selecting folder {rule.folder}\")\n\n try:\n M.folder.set(rule.folder)\n except MailboxFolderSelectError:\n raise MailError(\n f\"Rule {rule}: Folder {rule.folder} \"\n f\"does not exist in account {rule.account}\",\n )\n\n criterias = make_criterias(rule)\n\n self.log(\n \"debug\",\n f\"Rule {rule}: Searching folder with criteria \" f\"{str(AND(**criterias))}\",\n )\n\n try:\n messages = M.fetch(\n criteria=AND(**criterias),\n mark_seen=False,\n charset=rule.account.character_set,\n )\n except Exception:\n raise MailError(f\"Rule {rule}: Error while fetching folder {rule.folder}\")\n\n post_consume_messages = []\n\n mails_processed = 0\n total_processed_files = 0\n\n for message in messages:\n try:\n processed_files = self.handle_message(message, rule)\n if processed_files > 0:\n post_consume_messages.append(message.uid)\n\n total_processed_files += processed_files\n mails_processed += 1\n except Exception as e:\n self.log(\n \"error\",\n f\"Rule {rule}: Error while processing mail \" f\"{message.uid}: {e}\",\n exc_info=True,\n )\n\n self.log(\"debug\", f\"Rule {rule}: Processed {mails_processed} matching mail(s)\")\n\n self.log(\n \"debug\",\n f\"Rule {rule}: Running mail actions on \"\n f\"{len(post_consume_messages)} mails\",\n )\n\n try:\n get_rule_action(rule).post_consume(\n M,\n post_consume_messages,\n rule.action_parameter,\n )\n\n except Exception as e:\n raise MailError(\n f\"Rule {rule}: Error while processing post-consume actions: \" f\"{e}\",\n )\n\n return total_processed_files\n\n def handle_message(self, message, rule):\n if not message.attachments:\n return 0\n\n self.log(\n \"debug\",\n f\"Rule {rule}: \"\n f\"Processing mail {message.subject} from {message.from_} with \"\n f\"{len(message.attachments)} attachment(s)\",\n )\n\n correspondent = self.get_correspondent(message, rule)\n tag = rule.assign_tag\n doc_type = rule.assign_document_type\n\n processed_attachments = 0\n\n for att in message.attachments:\n\n if (\n not att.content_disposition == \"attachment\"\n and rule.attachment_type == MailRule.ATTACHMENT_TYPE_ATTACHMENTS_ONLY\n ):\n self.log(\n \"debug\",\n f\"Rule {rule}: \"\n f\"Skipping attachment {att.filename} \"\n f\"with content disposition {att.content_disposition}\",\n )\n continue\n\n if rule.filter_attachment_filename:\n if not fnmatch(att.filename, rule.filter_attachment_filename):\n continue\n\n title = self.get_title(message, att, rule)\n\n # don't trust the content type of the attachment. 
Could be\n # generic application/octet-stream.\n mime_type = magic.from_buffer(att.payload, mime=True)\n\n if is_mime_type_supported(mime_type):\n\n os.makedirs(settings.SCRATCH_DIR, exist_ok=True)\n _, temp_filename = tempfile.mkstemp(\n prefix=\"paperless-mail-\",\n dir=settings.SCRATCH_DIR,\n )\n with open(temp_filename, \"wb\") as f:\n f.write(att.payload)\n\n self.log(\n \"info\",\n f\"Rule {rule}: \"\n f\"Consuming attachment {att.filename} from mail \"\n f\"{message.subject} from {message.from_}\",\n )\n\n async_task(\n \"documents.tasks.consume_file\",\n path=temp_filename,\n override_filename=pathvalidate.sanitize_filename(\n att.filename,\n ),\n override_title=title,\n override_correspondent_id=correspondent.id\n if correspondent\n else None,\n override_document_type_id=doc_type.id if doc_type else None,\n override_tag_ids=[tag.id] if tag else None,\n task_name=att.filename[:100],\n )\n\n processed_attachments += 1\n else:\n self.log(\n \"debug\",\n f\"Rule {rule}: \"\n f\"Skipping attachment {att.filename} \"\n f\"since guessed mime type {mime_type} is not supported \"\n f\"by paperless\",\n )\n\n return processed_attachments\n", "path": "src/paperless_mail/mail.py"}], "after_files": [{"content": "import os\nimport tempfile\nfrom datetime import date\nfrom datetime import timedelta\nfrom fnmatch import fnmatch\n\nimport magic\nimport pathvalidate\nfrom django.conf import settings\nfrom django.db import DatabaseError\nfrom django_q.tasks import async_task\nfrom documents.loggers import LoggingMixin\nfrom documents.models import Correspondent\nfrom documents.parsers import is_mime_type_supported\nfrom imap_tools import AND\nfrom imap_tools import MailBox\nfrom imap_tools import MailboxFolderSelectError\nfrom imap_tools import MailBoxUnencrypted\nfrom imap_tools import MailMessage\nfrom imap_tools import MailMessageFlags\nfrom paperless_mail.models import MailAccount\nfrom paperless_mail.models import MailRule\n\n\nclass MailError(Exception):\n pass\n\n\nclass BaseMailAction:\n def get_criteria(self):\n return {}\n\n def post_consume(self, M, message_uids, parameter):\n pass # pragma: nocover\n\n\nclass DeleteMailAction(BaseMailAction):\n def post_consume(self, M, message_uids, parameter):\n M.delete(message_uids)\n\n\nclass MarkReadMailAction(BaseMailAction):\n def get_criteria(self):\n return {\"seen\": False}\n\n def post_consume(self, M, message_uids, parameter):\n M.flag(message_uids, [MailMessageFlags.SEEN], True)\n\n\nclass MoveMailAction(BaseMailAction):\n def post_consume(self, M, message_uids, parameter):\n M.move(message_uids, parameter)\n\n\nclass FlagMailAction(BaseMailAction):\n def get_criteria(self):\n return {\"flagged\": False}\n\n def post_consume(self, M, message_uids, parameter):\n M.flag(message_uids, [MailMessageFlags.FLAGGED], True)\n\n\ndef get_rule_action(rule):\n if rule.action == MailRule.ACTION_FLAG:\n return FlagMailAction()\n elif rule.action == MailRule.ACTION_DELETE:\n return DeleteMailAction()\n elif rule.action == MailRule.ACTION_MOVE:\n return MoveMailAction()\n elif rule.action == MailRule.ACTION_MARK_READ:\n return MarkReadMailAction()\n else:\n raise NotImplementedError(\"Unknown action.\") # pragma: nocover\n\n\ndef make_criterias(rule):\n maximum_age = date.today() - timedelta(days=rule.maximum_age)\n criterias = {}\n if rule.maximum_age > 0:\n criterias[\"date_gte\"] = maximum_age\n if rule.filter_from:\n criterias[\"from_\"] = rule.filter_from\n if rule.filter_subject:\n criterias[\"subject\"] = rule.filter_subject\n if rule.filter_body:\n 
criterias[\"body\"] = rule.filter_body\n\n return {**criterias, **get_rule_action(rule).get_criteria()}\n\n\ndef get_mailbox(server, port, security):\n if security == MailAccount.IMAP_SECURITY_NONE:\n mailbox = MailBoxUnencrypted(server, port)\n elif security == MailAccount.IMAP_SECURITY_STARTTLS:\n mailbox = MailBox(server, port, starttls=True)\n elif security == MailAccount.IMAP_SECURITY_SSL:\n mailbox = MailBox(server, port)\n else:\n raise NotImplementedError(\"Unknown IMAP security\") # pragma: nocover\n return mailbox\n\n\nclass MailAccountHandler(LoggingMixin):\n\n logging_name = \"paperless_mail\"\n\n def _correspondent_from_name(self, name):\n try:\n return Correspondent.objects.get_or_create(name=name)[0]\n except DatabaseError as e:\n self.log(\"error\", f\"Error while retrieving correspondent {name}: {e}\")\n return None\n\n def get_title(self, message, att, rule):\n if rule.assign_title_from == MailRule.TITLE_FROM_SUBJECT:\n return message.subject\n\n elif rule.assign_title_from == MailRule.TITLE_FROM_FILENAME:\n return os.path.splitext(os.path.basename(att.filename))[0]\n\n else:\n raise NotImplementedError(\n \"Unknown title selector.\",\n ) # pragma: nocover\n\n def get_correspondent(self, message: MailMessage, rule):\n c_from = rule.assign_correspondent_from\n\n if c_from == MailRule.CORRESPONDENT_FROM_NOTHING:\n return None\n\n elif c_from == MailRule.CORRESPONDENT_FROM_EMAIL:\n return self._correspondent_from_name(message.from_)\n\n elif c_from == MailRule.CORRESPONDENT_FROM_NAME:\n from_values = message.from_values\n if from_values is not None and len(from_values.name) > 0:\n return self._correspondent_from_name(from_values.name)\n else:\n return self._correspondent_from_name(message.from_)\n\n elif c_from == MailRule.CORRESPONDENT_FROM_CUSTOM:\n return rule.assign_correspondent\n\n else:\n raise NotImplementedError(\n \"Unknwown correspondent selector\",\n ) # pragma: nocover\n\n def handle_mail_account(self, account):\n\n self.renew_logging_group()\n\n self.log(\"debug\", f\"Processing mail account {account}\")\n\n total_processed_files = 0\n\n with get_mailbox(\n account.imap_server,\n account.imap_port,\n account.imap_security,\n ) as M:\n\n try:\n M.login(account.username, account.password)\n except Exception:\n raise MailError(f\"Error while authenticating account {account}\")\n\n self.log(\n \"debug\",\n f\"Account {account}: Processing \" f\"{account.rules.count()} rule(s)\",\n )\n\n for rule in account.rules.order_by(\"order\"):\n try:\n total_processed_files += self.handle_mail_rule(M, rule)\n except Exception as e:\n self.log(\n \"error\",\n f\"Rule {rule}: Error while processing rule: {e}\",\n exc_info=True,\n )\n\n return total_processed_files\n\n def handle_mail_rule(self, M, rule):\n\n self.log(\"debug\", f\"Rule {rule}: Selecting folder {rule.folder}\")\n\n try:\n M.folder.set(rule.folder)\n except MailboxFolderSelectError:\n raise MailError(\n f\"Rule {rule}: Folder {rule.folder} \"\n f\"does not exist in account {rule.account}\",\n )\n\n criterias = make_criterias(rule)\n\n self.log(\n \"debug\",\n f\"Rule {rule}: Searching folder with criteria \" f\"{str(AND(**criterias))}\",\n )\n\n try:\n messages = M.fetch(\n criteria=AND(**criterias),\n mark_seen=False,\n charset=rule.account.character_set,\n )\n except Exception:\n raise MailError(f\"Rule {rule}: Error while fetching folder {rule.folder}\")\n\n post_consume_messages = []\n\n mails_processed = 0\n total_processed_files = 0\n\n for message in messages:\n try:\n processed_files = 
self.handle_message(message, rule)\n if processed_files > 0:\n post_consume_messages.append(message.uid)\n\n total_processed_files += processed_files\n mails_processed += 1\n except Exception as e:\n self.log(\n \"error\",\n f\"Rule {rule}: Error while processing mail \" f\"{message.uid}: {e}\",\n exc_info=True,\n )\n\n self.log(\"debug\", f\"Rule {rule}: Processed {mails_processed} matching mail(s)\")\n\n self.log(\n \"debug\",\n f\"Rule {rule}: Running mail actions on \"\n f\"{len(post_consume_messages)} mails\",\n )\n\n try:\n get_rule_action(rule).post_consume(\n M,\n post_consume_messages,\n rule.action_parameter,\n )\n\n except Exception as e:\n raise MailError(\n f\"Rule {rule}: Error while processing post-consume actions: \" f\"{e}\",\n )\n\n return total_processed_files\n\n def handle_message(self, message, rule) -> int:\n if not message.attachments:\n return 0\n\n self.log(\n \"debug\",\n f\"Rule {rule}: \"\n f\"Processing mail {message.subject} from {message.from_} with \"\n f\"{len(message.attachments)} attachment(s)\",\n )\n\n correspondent = self.get_correspondent(message, rule)\n tag = rule.assign_tag\n doc_type = rule.assign_document_type\n\n processed_attachments = 0\n\n for att in message.attachments:\n\n if (\n not att.content_disposition == \"attachment\"\n and rule.attachment_type == MailRule.ATTACHMENT_TYPE_ATTACHMENTS_ONLY\n ):\n self.log(\n \"debug\",\n f\"Rule {rule}: \"\n f\"Skipping attachment {att.filename} \"\n f\"with content disposition {att.content_disposition}\",\n )\n continue\n\n if rule.filter_attachment_filename:\n # Force the filename and pattern to the lowercase\n # as this is system dependent otherwise\n if not fnmatch(\n att.filename.lower(),\n rule.filter_attachment_filename.lower(),\n ):\n continue\n\n title = self.get_title(message, att, rule)\n\n # don't trust the content type of the attachment. Could be\n # generic application/octet-stream.\n mime_type = magic.from_buffer(att.payload, mime=True)\n\n if is_mime_type_supported(mime_type):\n\n os.makedirs(settings.SCRATCH_DIR, exist_ok=True)\n _, temp_filename = tempfile.mkstemp(\n prefix=\"paperless-mail-\",\n dir=settings.SCRATCH_DIR,\n )\n with open(temp_filename, \"wb\") as f:\n f.write(att.payload)\n\n self.log(\n \"info\",\n f\"Rule {rule}: \"\n f\"Consuming attachment {att.filename} from mail \"\n f\"{message.subject} from {message.from_}\",\n )\n\n async_task(\n \"documents.tasks.consume_file\",\n path=temp_filename,\n override_filename=pathvalidate.sanitize_filename(\n att.filename,\n ),\n override_title=title,\n override_correspondent_id=correspondent.id\n if correspondent\n else None,\n override_document_type_id=doc_type.id if doc_type else None,\n override_tag_ids=[tag.id] if tag else None,\n task_name=att.filename[:100],\n )\n\n processed_attachments += 1\n else:\n self.log(\n \"debug\",\n f\"Rule {rule}: \"\n f\"Skipping attachment {att.filename} \"\n f\"since guessed mime type {mime_type} is not supported \"\n f\"by paperless\",\n )\n\n return processed_attachments\n", "path": "src/paperless_mail/mail.py"}]} | 3,686 | 198 |
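The accepted fix above works by lowercasing both the attachment filename and the rule pattern before handing them to `fnmatch`, since `fnmatch` otherwise follows the operating system's case rules. A minimal standalone sketch of that matching behaviour (the filenames below are made up for illustration, not taken from the repository):

```python
from fnmatch import fnmatch


def filename_matches(filename: str, pattern: str) -> bool:
    # Lowercase both sides so matching is case-insensitive on every OS,
    # mirroring the behaviour promised by the mail-rule UI text.
    return fnmatch(filename.lower(), pattern.lower())


# Illustrative checks with hypothetical filenames:
assert filename_matches("Invoice.PDF", "*.pdf")
assert not filename_matches("scan_2022.txt", "*.pdf")
```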
gh_patches_debug_4889 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-4990 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove $ before shell commands in docs
The developers have a [preference](https://github.com/rtfd/readthedocs.org/pull/4676#discussion_r221400605) to not have a `$` before shell commands in the docs. This makes it easier to copy and paste from our docs. We should remove it everywhere. The following command should show it everywhere.
grep -Ri " $ " docs/*.rst docs/*/*.rst
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from __future__ import division, print_function, unicode_literals
4
5 import os
6 import sys
7
8 import sphinx_rtd_theme
9 from recommonmark.parser import CommonMarkParser
10
11 sys.path.insert(0, os.path.abspath('..'))
12 sys.path.append(os.path.dirname(__file__))
13 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "readthedocs.settings.dev")
14
15 from django.conf import settings
16 from django.utils import timezone
17
18 import django
19 django.setup()
20
21
22 sys.path.append(os.path.abspath('_ext'))
23 extensions = [
24 'sphinx.ext.autosectionlabel',
25 'sphinx.ext.autodoc',
26 'sphinx.ext.intersphinx',
27 'sphinxcontrib.httpdomain',
28 'djangodocs',
29 'doc_extensions',
30 'sphinx_tabs.tabs',
31 ]
32 templates_path = ['_templates']
33
34 source_suffix = ['.rst', '.md']
35 source_parsers = {
36 '.md': CommonMarkParser,
37 }
38
39 master_doc = 'index'
40 project = u'Read the Docs'
41 copyright = '2010-{}, Read the Docs, Inc & contributors'.format(
42 timezone.now().year
43 )
44 version = '2.7'
45 release = version
46 exclude_patterns = ['_build']
47 default_role = 'obj'
48 intersphinx_mapping = {
49 'python': ('http://python.readthedocs.io/en/latest/', None),
50 'django': ('http://django.readthedocs.io/en/1.9.x/', None),
51 'sphinx': ('http://sphinx.readthedocs.io/en/latest/', None),
52 }
53 htmlhelp_basename = 'ReadTheDocsdoc'
54 latex_documents = [
55 ('index', 'ReadTheDocs.tex', u'Read the Docs Documentation',
56 u'Eric Holscher, Charlie Leifer, Bobby Grace', 'manual'),
57 ]
58 man_pages = [
59 ('index', 'read-the-docs', u'Read the Docs Documentation',
60 [u'Eric Holscher, Charlie Leifer, Bobby Grace'], 1)
61 ]
62
63 exclude_patterns = [
64 # 'api' # needed for ``make gettext`` to not die.
65 ]
66
67 language = 'en'
68
69 locale_dirs = [
70 'locale/',
71 ]
72 gettext_compact = False
73
74 html_theme = 'sphinx_rtd_theme'
75 html_static_path = ['_static']
76 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
77 html_logo = 'img/logo.svg'
78 html_theme_options = {
79 'logo_only': True,
80 'display_version': False,
81 }
82
83 # Activate autosectionlabel plugin
84 autosectionlabel_prefix_document = True
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -28,6 +28,7 @@
'djangodocs',
'doc_extensions',
'sphinx_tabs.tabs',
+ 'sphinx-prompt',
]
templates_path = ['_templates']
@@ -82,3 +83,7 @@
# Activate autosectionlabel plugin
autosectionlabel_prefix_document = True
+
+
+def setup(app):
+ app.add_stylesheet('css/sphinx_prompt_css.css')
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -28,6 +28,7 @@\n 'djangodocs',\n 'doc_extensions',\n 'sphinx_tabs.tabs',\n+ 'sphinx-prompt',\n ]\n templates_path = ['_templates']\n \n@@ -82,3 +83,7 @@\n \n # Activate autosectionlabel plugin\n autosectionlabel_prefix_document = True\n+\n+\n+def setup(app):\n+ app.add_stylesheet('css/sphinx_prompt_css.css')\n", "issue": "Remove $ before shell commands in docs\nThe developers have a [preference](https://github.com/rtfd/readthedocs.org/pull/4676#discussion_r221400605) to not have a `$` before shell commands in the docs. This makes it easier to copy and paste from our docs. We should remove it everywhere. The following command should show it everywhere.\r\n\r\n grep -Ri \" $ \" docs/*.rst docs/*/*.rst\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom __future__ import division, print_function, unicode_literals\n\nimport os\nimport sys\n\nimport sphinx_rtd_theme\nfrom recommonmark.parser import CommonMarkParser\n\nsys.path.insert(0, os.path.abspath('..'))\nsys.path.append(os.path.dirname(__file__))\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"readthedocs.settings.dev\")\n\nfrom django.conf import settings\nfrom django.utils import timezone\n\nimport django\ndjango.setup()\n\n\nsys.path.append(os.path.abspath('_ext'))\nextensions = [\n 'sphinx.ext.autosectionlabel',\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n 'sphinxcontrib.httpdomain',\n 'djangodocs',\n 'doc_extensions',\n 'sphinx_tabs.tabs',\n]\ntemplates_path = ['_templates']\n\nsource_suffix = ['.rst', '.md']\nsource_parsers = {\n '.md': CommonMarkParser,\n}\n\nmaster_doc = 'index'\nproject = u'Read the Docs'\ncopyright = '2010-{}, Read the Docs, Inc & contributors'.format(\n timezone.now().year\n)\nversion = '2.7'\nrelease = version\nexclude_patterns = ['_build']\ndefault_role = 'obj'\nintersphinx_mapping = {\n 'python': ('http://python.readthedocs.io/en/latest/', None),\n 'django': ('http://django.readthedocs.io/en/1.9.x/', None),\n 'sphinx': ('http://sphinx.readthedocs.io/en/latest/', None),\n}\nhtmlhelp_basename = 'ReadTheDocsdoc'\nlatex_documents = [\n ('index', 'ReadTheDocs.tex', u'Read the Docs Documentation',\n u'Eric Holscher, Charlie Leifer, Bobby Grace', 'manual'),\n]\nman_pages = [\n ('index', 'read-the-docs', u'Read the Docs Documentation',\n [u'Eric Holscher, Charlie Leifer, Bobby Grace'], 1)\n]\n\nexclude_patterns = [\n # 'api' # needed for ``make gettext`` to not die.\n]\n\nlanguage = 'en'\n\nlocale_dirs = [\n 'locale/',\n]\ngettext_compact = False\n\nhtml_theme = 'sphinx_rtd_theme'\nhtml_static_path = ['_static']\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\nhtml_logo = 'img/logo.svg'\nhtml_theme_options = {\n 'logo_only': True,\n 'display_version': False,\n}\n\n# Activate autosectionlabel plugin\nautosectionlabel_prefix_document = True\n", "path": "docs/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom __future__ import division, print_function, unicode_literals\n\nimport os\nimport sys\n\nimport sphinx_rtd_theme\nfrom recommonmark.parser import CommonMarkParser\n\nsys.path.insert(0, os.path.abspath('..'))\nsys.path.append(os.path.dirname(__file__))\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"readthedocs.settings.dev\")\n\nfrom django.conf import settings\nfrom django.utils import timezone\n\nimport django\ndjango.setup()\n\n\nsys.path.append(os.path.abspath('_ext'))\nextensions = [\n 'sphinx.ext.autosectionlabel',\n 'sphinx.ext.autodoc',\n 
'sphinx.ext.intersphinx',\n 'sphinxcontrib.httpdomain',\n 'djangodocs',\n 'doc_extensions',\n 'sphinx_tabs.tabs',\n 'sphinx-prompt',\n]\ntemplates_path = ['_templates']\n\nsource_suffix = ['.rst', '.md']\nsource_parsers = {\n '.md': CommonMarkParser,\n}\n\nmaster_doc = 'index'\nproject = u'Read the Docs'\ncopyright = '2010-{}, Read the Docs, Inc & contributors'.format(\n timezone.now().year\n)\nversion = '2.7'\nrelease = version\nexclude_patterns = ['_build']\ndefault_role = 'obj'\nintersphinx_mapping = {\n 'python': ('http://python.readthedocs.io/en/latest/', None),\n 'django': ('http://django.readthedocs.io/en/1.9.x/', None),\n 'sphinx': ('http://sphinx.readthedocs.io/en/latest/', None),\n}\nhtmlhelp_basename = 'ReadTheDocsdoc'\nlatex_documents = [\n ('index', 'ReadTheDocs.tex', u'Read the Docs Documentation',\n u'Eric Holscher, Charlie Leifer, Bobby Grace', 'manual'),\n]\nman_pages = [\n ('index', 'read-the-docs', u'Read the Docs Documentation',\n [u'Eric Holscher, Charlie Leifer, Bobby Grace'], 1)\n]\n\nexclude_patterns = [\n # 'api' # needed for ``make gettext`` to not die.\n]\n\nlanguage = 'en'\n\nlocale_dirs = [\n 'locale/',\n]\ngettext_compact = False\n\nhtml_theme = 'sphinx_rtd_theme'\nhtml_static_path = ['_static']\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\nhtml_logo = 'img/logo.svg'\nhtml_theme_options = {\n 'logo_only': True,\n 'display_version': False,\n}\n\n# Activate autosectionlabel plugin\nautosectionlabel_prefix_document = True\n\n\ndef setup(app):\n app.add_stylesheet('css/sphinx_prompt_css.css')\n", "path": "docs/conf.py"}]} | 1,070 | 122 |
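For the docs cleanup above, the issue already gives a grep one-liner for finding the offending `$ ` prefixes. If a reader prefers running the same sweep from Python, a rough equivalent might look like this (the `docs` path and matching rule are assumptions mirroring the grep, not part of the patch):

```python
from pathlib import Path

# Rough Python equivalent of the grep one-liner from the issue:
#   grep -Ri " $ " docs/*.rst docs/*/*.rst
for rst_file in Path("docs").rglob("*.rst"):
    lines = rst_file.read_text(encoding="utf-8").splitlines()
    for lineno, line in enumerate(lines, start=1):
        if " $ " in line:
            print(f"{rst_file}:{lineno}: {line.strip()}")
```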
gh_patches_debug_36067 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-594 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
provide custom HEADERS in ical source
Hello,
I'm trying to download an ICS schedule from my provider's homepage. It only works when a "Referer": "<url>" header is set; otherwise a 403 error occurs. I manually changed the fixed headers (currently just the user-agent) in ics.py, but that change will be overridden by future updates, I guess.
My question (or feature request, I suppose) is: is there a way to provide such a custom header in the source configuration for ics?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py`
Content:
```
1 import datetime
2 import logging
3 from os import getcwd
4 from pathlib import Path
5
6 import requests
7 from waste_collection_schedule import Collection # type: ignore[attr-defined]
8 from waste_collection_schedule.service.ICS import ICS
9 from waste_collection_schedule.service.ICS_v1 import ICS_v1
10
11 TITLE = "ICS"
12 DESCRIPTION = "Source for ICS based schedules."
13 URL = None
14 TEST_CASES = {
15 "Dortmund, Dudenstr. 5": {
16 "url": "https://www.edg.de/ical/kalender.ics?Strasse=Dudenstr.&Hausnummer=5&Erinnerung=-1&Abfallart=1,2,3,4"
17 },
18 "Leipzig, Sandgrubenweg 27": {
19 "url": "https://stadtreinigung-leipzig.de/wir-kommen-zu-ihnen/abfallkalender/ical.ics?position_nos=38296&name=Sandgrubenweg%2027"
20 },
21 "Ludwigsburg": {
22 "url": "https://www.avl-ludwigsburg.de/fileadmin/Files/Abfallkalender/ICS/Privat/Privat_{%Y}_Ossweil.ics"
23 },
24 "Esslingen, Bahnhof": {
25 "url": "https://api.abfall.io/?kh=DaA02103019b46345f1998698563DaAd&t=ics&s=1a862df26f6943997cef90233877a4fe"
26 },
27 "Test File": {
28 # Path is used here to allow to call the Source from any location.
29 # This is not required in a yaml configuration!
30 "file": str(Path(__file__).resolve().parents[1].joinpath("test/test.ics"))
31 },
32 "Test File (recurring)": {
33 # Path is used here to allow to call the Source from any location.
34 # This is not required in a yaml configuration!
35 "file": str(Path(__file__).resolve().parents[1].joinpath("test/recurring.ics"))
36 },
37 "München, Bahnstr. 11": {
38 "url": "https://www.awm-muenchen.de/entsorgen/abfuhrkalender?tx_awmabfuhrkalender_abfuhrkalender%5Bhausnummer%5D=11&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BB%5D=1%2F2%3BU&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BP%5D=1%2F2%3BG&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BR%5D=001%3BU&tx_awmabfuhrkalender_abfuhrkalender%5Bsection%5D=ics&tx_awmabfuhrkalender_abfuhrkalender%5Bsinglestandplatz%5D=false&tx_awmabfuhrkalender_abfuhrkalender%5Bstandplatzwahl%5D=true&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Bbio%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Bpapier%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Brestmuell%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstrasse%5D=bahnstr.&tx_awmabfuhrkalender_abfuhrkalender%5Byear%5D={%Y}",
39 "version": 1,
40 },
41 "Buxtehude, Am Berg": {
42 "url": "https://abfall.landkreis-stade.de/api_v2/collection_dates/1/ort/10/strasse/90/hausnummern/1/abfallarten/R02-R04-B02-D04-D12-P04-R12-R14-W0-R22-R24-R31/kalender.ics"
43 },
44 # "Hausmüllinfo: ASR Chemnitz": {
45 # "url": "https://asc.hausmuell.info/ics/ics.php",
46 # "method": "POST",
47 # "params": {
48 # "hidden_id_egebiet": 439087,
49 # "input_ort": "Chemnitz",
50 # "input_str": "Straße der Nationen",
51 # "input_hnr": 2,
52 # "hidden_send_btn": "ics",
53 # # "hiddenYear": 2021,
54 # "hidden_id_ort": 10,
55 # "hidden_id_ortsteil": 0,
56 # "hidden_id_str": 17814,
57 # "hidden_id_hnr": 5538100,
58 # "hidden_kalenderart": "privat",
59 # "showBinsBio": "on",
60 # "showBinsRest": "on",
61 # "showBinsRest_rc": "on",
62 # "showBinsPapier": "on",
63 # "showBinsOrganic": "on",
64 # "showBinsXmas": "on",
65 # "showBinsDsd": "on",
66 # "showBinsProb": "on",
67 # },
68 # "year_field": "hiddenYear",
69 # },
70 "Abfall Zollernalbkreis, Ebingen": {
71 "url": "https://www.abfallkalender-zak.de",
72 "params": {
73 "city": "2,3,4",
74 "street": "3",
75 "types[]": [
76 "restmuell",
77 "gelbersack",
78 "papiertonne",
79 "biomuell",
80 "gruenabfall",
81 "schadstoffsammlung",
82 "altpapiersammlung",
83 "schrottsammlung",
84 "weihnachtsbaeume",
85 "elektrosammlung",
86 ],
87 "go_ics": "Download",
88 },
89 "year_field": "year",
90 },
91 "Detmold": {
92 "url": "https://abfuhrkalender.detmold.de/icsmaker.php",
93 "method": "GET",
94 "params": {"strid": 338},
95 "year_field": "year",
96 },
97 "EAW Rheingau Taunus": {
98 "url": "https://www.eaw-rheingau-taunus.de/abfallsammlung/abfuhrtermine/feed.ics?tx_vierwdeaw_garbagecalendarics%5Baction%5D=ics&tx_vierwdeaw_garbagecalendarics%5Bcontroller%5D=GarbageCalendar&tx_vierwdeaw_garbagecalendarics%5Bstreet%5D=38",
99 "split_at": ",",
100 },
101 "Recollect, Ottawa": {
102 "url": "https://recollect.a.ssl.fastly.net/api/places/BCCDF30E-578B-11E4-AD38-5839C200407A/services/208/events.en.ics",
103 "split_at": "\\, [and ]*",
104 },
105 "Frankfurt am Main, Achenbachstrasse 3": {
106 "url": "https://www.fes-frankfurt.de/abfallkalender/QWNoZW5iYWNoc3RyLnwzfDYwNTk2.ics"
107 },
108 "Erlensee, Am Haspel": {
109 "url": "https://sperrmuell.erlensee.de/?type=reminder",
110 "method": "POST",
111 "params": {
112 "street": 8,
113 "eventType[]": [27, 23, 19, 20, 21, 24, 22, 25, 26],
114 "timeframe": 23,
115 "download": "ical",
116 },
117 },
118 }
119
120
121 HEADERS = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
122 _LOGGER = logging.getLogger(__name__)
123
124
125 class Source:
126 def __init__(
127 self,
128 url=None,
129 file=None,
130 offset=None,
131 params=None,
132 year_field=None,
133 method="GET",
134 regex=None,
135 split_at=None,
136 version=2,
137 verify_ssl=True,
138 ):
139 self._url = url
140 self._file = file
141 if bool(self._url is not None) == bool(self._file is not None):
142 raise RuntimeError("Specify either url or file")
143 if version == 1:
144 self._ics = ICS_v1(offset=offset, split_at=split_at, regex=regex)
145 else:
146 self._ics = ICS(offset=offset, split_at=split_at, regex=regex)
147 self._params = params
148 self._year_field = year_field # replace this field in params with current year
149 self._method = method # The method to send the params
150 self._verify_ssl = verify_ssl
151
152 def fetch(self):
153 if self._url is not None:
154 if "{%Y}" in self._url or self._year_field is not None:
155 # url contains wildcard or params contains year field
156 now = datetime.datetime.now()
157
158 # replace year in url
159 url = self._url.replace("{%Y}", str(now.year))
160
161 # replace year in params
162 if self._year_field is not None:
163 if self._params is None:
164 raise RuntimeError("year_field specified without params")
165 self._params[self._year_field] = str(now.year)
166
167 entries = self.fetch_url(url, self._params)
168
169 if now.month == 12:
170 # also get data for next year if we are already in december
171 url = self._url.replace("{%Y}", str(now.year + 1))
172 if self._year_field is not None:
173 self._params[self._year_field] = str(now.year + 1)
174
175 try:
176 entries.extend(self.fetch_url(url, self._params))
177 except Exception:
178 # ignore if fetch for next year fails
179 pass
180 return entries
181 else:
182 return self.fetch_url(self._url, self._params)
183 elif self._file is not None:
184 return self.fetch_file(self._file)
185
186 def fetch_url(self, url, params=None):
187 # get ics file
188 if self._method == "GET":
189 r = requests.get(
190 url, params=params, headers=HEADERS, verify=self._verify_ssl
191 )
192 elif self._method == "POST":
193 r = requests.post(
194 url, data=params, headers=HEADERS, verify=self._verify_ssl
195 )
196 else:
197 raise RuntimeError(
198 "Error: unknown method to fetch URL, use GET or POST; got {self._method}"
199 )
200 r.raise_for_status()
201
202 r.encoding = "utf-8" # requests doesn't guess the encoding correctly
203 return self._convert(r.text)
204
205 def fetch_file(self, file):
206 try:
207 f = open(file)
208 except FileNotFoundError as e:
209 _LOGGER.error(f"Working directory: '{getcwd()}'")
210 raise
211 return self._convert(f.read())
212
213 def _convert(self, data):
214 dates = self._ics.convert(data)
215
216 entries = []
217 for d in dates:
218 entries.append(Collection(d[0], d[1]))
219 return entries
220
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py
@@ -19,7 +19,7 @@
"url": "https://stadtreinigung-leipzig.de/wir-kommen-zu-ihnen/abfallkalender/ical.ics?position_nos=38296&name=Sandgrubenweg%2027"
},
"Ludwigsburg": {
- "url": "https://www.avl-ludwigsburg.de/fileadmin/Files/Abfallkalender/ICS/Privat/Privat_{%Y}_Ossweil.ics"
+ "url": "https://kundenportal.avl-lb.de/WasteManagementLudwigsburg/WasteManagementServiceServlet?ApplicationName=Calendar&SubmitAction=sync&StandortID=950230001&AboID=8188&Fra=BT;RT;PT;LT;GT"
},
"Esslingen, Bahnhof": {
"url": "https://api.abfall.io/?kh=DaA02103019b46345f1998698563DaAd&t=ics&s=1a862df26f6943997cef90233877a4fe"
@@ -135,6 +135,7 @@
split_at=None,
version=2,
verify_ssl=True,
+ headers={},
):
self._url = url
self._file = file
@@ -148,6 +149,8 @@
self._year_field = year_field # replace this field in params with current year
self._method = method # The method to send the params
self._verify_ssl = verify_ssl
+ self._headers = HEADERS
+ self._headers.update(headers)
def fetch(self):
if self._url is not None:
@@ -187,11 +190,11 @@
# get ics file
if self._method == "GET":
r = requests.get(
- url, params=params, headers=HEADERS, verify=self._verify_ssl
+ url, params=params, headers=self._headers, verify=self._verify_ssl
)
elif self._method == "POST":
r = requests.post(
- url, data=params, headers=HEADERS, verify=self._verify_ssl
+ url, data=params, headers=self._headers, verify=self._verify_ssl
)
else:
raise RuntimeError(
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py\n@@ -19,7 +19,7 @@\n \"url\": \"https://stadtreinigung-leipzig.de/wir-kommen-zu-ihnen/abfallkalender/ical.ics?position_nos=38296&name=Sandgrubenweg%2027\"\n },\n \"Ludwigsburg\": {\n- \"url\": \"https://www.avl-ludwigsburg.de/fileadmin/Files/Abfallkalender/ICS/Privat/Privat_{%Y}_Ossweil.ics\"\n+ \"url\": \"https://kundenportal.avl-lb.de/WasteManagementLudwigsburg/WasteManagementServiceServlet?ApplicationName=Calendar&SubmitAction=sync&StandortID=950230001&AboID=8188&Fra=BT;RT;PT;LT;GT\"\n },\n \"Esslingen, Bahnhof\": {\n \"url\": \"https://api.abfall.io/?kh=DaA02103019b46345f1998698563DaAd&t=ics&s=1a862df26f6943997cef90233877a4fe\"\n@@ -135,6 +135,7 @@\n split_at=None,\n version=2,\n verify_ssl=True,\n+ headers={},\n ):\n self._url = url\n self._file = file\n@@ -148,6 +149,8 @@\n self._year_field = year_field # replace this field in params with current year\n self._method = method # The method to send the params\n self._verify_ssl = verify_ssl\n+ self._headers = HEADERS\n+ self._headers.update(headers)\n \n def fetch(self):\n if self._url is not None:\n@@ -187,11 +190,11 @@\n # get ics file\n if self._method == \"GET\":\n r = requests.get(\n- url, params=params, headers=HEADERS, verify=self._verify_ssl\n+ url, params=params, headers=self._headers, verify=self._verify_ssl\n )\n elif self._method == \"POST\":\n r = requests.post(\n- url, data=params, headers=HEADERS, verify=self._verify_ssl\n+ url, data=params, headers=self._headers, verify=self._verify_ssl\n )\n else:\n raise RuntimeError(\n", "issue": "provide custom HEADERS in ical source\nHello, \r\n\r\nI'm trying to download an ics schedule from my providers homepage. It only works when a \"Referer\":\"<url>\" Header is set. Otherwise a 403 Error occure. I manually changed the fixed Header (currently the user-agent) in the ics.py, but that will be overridden in future updates, i guess.\r\n\r\nMy question (or feature request, i suppose) is, is there a way to provide such a custom Header in the source configuration for ics?\n", "before_files": [{"content": "import datetime\nimport logging\nfrom os import getcwd\nfrom pathlib import Path\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\nfrom waste_collection_schedule.service.ICS_v1 import ICS_v1\n\nTITLE = \"ICS\"\nDESCRIPTION = \"Source for ICS based schedules.\"\nURL = None\nTEST_CASES = {\n \"Dortmund, Dudenstr. 
5\": {\n \"url\": \"https://www.edg.de/ical/kalender.ics?Strasse=Dudenstr.&Hausnummer=5&Erinnerung=-1&Abfallart=1,2,3,4\"\n },\n \"Leipzig, Sandgrubenweg 27\": {\n \"url\": \"https://stadtreinigung-leipzig.de/wir-kommen-zu-ihnen/abfallkalender/ical.ics?position_nos=38296&name=Sandgrubenweg%2027\"\n },\n \"Ludwigsburg\": {\n \"url\": \"https://www.avl-ludwigsburg.de/fileadmin/Files/Abfallkalender/ICS/Privat/Privat_{%Y}_Ossweil.ics\"\n },\n \"Esslingen, Bahnhof\": {\n \"url\": \"https://api.abfall.io/?kh=DaA02103019b46345f1998698563DaAd&t=ics&s=1a862df26f6943997cef90233877a4fe\"\n },\n \"Test File\": {\n # Path is used here to allow to call the Source from any location.\n # This is not required in a yaml configuration!\n \"file\": str(Path(__file__).resolve().parents[1].joinpath(\"test/test.ics\"))\n },\n \"Test File (recurring)\": {\n # Path is used here to allow to call the Source from any location.\n # This is not required in a yaml configuration!\n \"file\": str(Path(__file__).resolve().parents[1].joinpath(\"test/recurring.ics\"))\n },\n \"M\u00fcnchen, Bahnstr. 11\": {\n \"url\": \"https://www.awm-muenchen.de/entsorgen/abfuhrkalender?tx_awmabfuhrkalender_abfuhrkalender%5Bhausnummer%5D=11&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BB%5D=1%2F2%3BU&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BP%5D=1%2F2%3BG&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BR%5D=001%3BU&tx_awmabfuhrkalender_abfuhrkalender%5Bsection%5D=ics&tx_awmabfuhrkalender_abfuhrkalender%5Bsinglestandplatz%5D=false&tx_awmabfuhrkalender_abfuhrkalender%5Bstandplatzwahl%5D=true&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Bbio%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Bpapier%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Brestmuell%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstrasse%5D=bahnstr.&tx_awmabfuhrkalender_abfuhrkalender%5Byear%5D={%Y}\",\n \"version\": 1,\n },\n \"Buxtehude, Am Berg\": {\n \"url\": \"https://abfall.landkreis-stade.de/api_v2/collection_dates/1/ort/10/strasse/90/hausnummern/1/abfallarten/R02-R04-B02-D04-D12-P04-R12-R14-W0-R22-R24-R31/kalender.ics\"\n },\n # \"Hausm\u00fcllinfo: ASR Chemnitz\": {\n # \"url\": \"https://asc.hausmuell.info/ics/ics.php\",\n # \"method\": \"POST\",\n # \"params\": {\n # \"hidden_id_egebiet\": 439087,\n # \"input_ort\": \"Chemnitz\",\n # \"input_str\": \"Stra\u00dfe der Nationen\",\n # \"input_hnr\": 2,\n # \"hidden_send_btn\": \"ics\",\n # # \"hiddenYear\": 2021,\n # \"hidden_id_ort\": 10,\n # \"hidden_id_ortsteil\": 0,\n # \"hidden_id_str\": 17814,\n # \"hidden_id_hnr\": 5538100,\n # \"hidden_kalenderart\": \"privat\",\n # \"showBinsBio\": \"on\",\n # \"showBinsRest\": \"on\",\n # \"showBinsRest_rc\": \"on\",\n # \"showBinsPapier\": \"on\",\n # \"showBinsOrganic\": \"on\",\n # \"showBinsXmas\": \"on\",\n # \"showBinsDsd\": \"on\",\n # \"showBinsProb\": \"on\",\n # },\n # \"year_field\": \"hiddenYear\",\n # },\n \"Abfall Zollernalbkreis, Ebingen\": {\n \"url\": \"https://www.abfallkalender-zak.de\",\n \"params\": {\n \"city\": \"2,3,4\",\n \"street\": \"3\",\n \"types[]\": [\n \"restmuell\",\n \"gelbersack\",\n \"papiertonne\",\n \"biomuell\",\n \"gruenabfall\",\n \"schadstoffsammlung\",\n \"altpapiersammlung\",\n \"schrottsammlung\",\n \"weihnachtsbaeume\",\n \"elektrosammlung\",\n ],\n \"go_ics\": \"Download\",\n },\n \"year_field\": \"year\",\n },\n \"Detmold\": {\n \"url\": \"https://abfuhrkalender.detmold.de/icsmaker.php\",\n \"method\": \"GET\",\n \"params\": {\"strid\": 
338},\n \"year_field\": \"year\",\n },\n \"EAW Rheingau Taunus\": {\n \"url\": \"https://www.eaw-rheingau-taunus.de/abfallsammlung/abfuhrtermine/feed.ics?tx_vierwdeaw_garbagecalendarics%5Baction%5D=ics&tx_vierwdeaw_garbagecalendarics%5Bcontroller%5D=GarbageCalendar&tx_vierwdeaw_garbagecalendarics%5Bstreet%5D=38\",\n \"split_at\": \",\",\n },\n \"Recollect, Ottawa\": {\n \"url\": \"https://recollect.a.ssl.fastly.net/api/places/BCCDF30E-578B-11E4-AD38-5839C200407A/services/208/events.en.ics\",\n \"split_at\": \"\\\\, [and ]*\",\n },\n \"Frankfurt am Main, Achenbachstrasse 3\": {\n \"url\": \"https://www.fes-frankfurt.de/abfallkalender/QWNoZW5iYWNoc3RyLnwzfDYwNTk2.ics\"\n },\n \"Erlensee, Am Haspel\": {\n \"url\": \"https://sperrmuell.erlensee.de/?type=reminder\",\n \"method\": \"POST\",\n \"params\": {\n \"street\": 8,\n \"eventType[]\": [27, 23, 19, 20, 21, 24, 22, 25, 26],\n \"timeframe\": 23,\n \"download\": \"ical\",\n },\n },\n}\n\n\nHEADERS = {\"user-agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64)\"}\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Source:\n def __init__(\n self,\n url=None,\n file=None,\n offset=None,\n params=None,\n year_field=None,\n method=\"GET\",\n regex=None,\n split_at=None,\n version=2,\n verify_ssl=True,\n ):\n self._url = url\n self._file = file\n if bool(self._url is not None) == bool(self._file is not None):\n raise RuntimeError(\"Specify either url or file\")\n if version == 1:\n self._ics = ICS_v1(offset=offset, split_at=split_at, regex=regex)\n else:\n self._ics = ICS(offset=offset, split_at=split_at, regex=regex)\n self._params = params\n self._year_field = year_field # replace this field in params with current year\n self._method = method # The method to send the params\n self._verify_ssl = verify_ssl\n\n def fetch(self):\n if self._url is not None:\n if \"{%Y}\" in self._url or self._year_field is not None:\n # url contains wildcard or params contains year field\n now = datetime.datetime.now()\n\n # replace year in url\n url = self._url.replace(\"{%Y}\", str(now.year))\n\n # replace year in params\n if self._year_field is not None:\n if self._params is None:\n raise RuntimeError(\"year_field specified without params\")\n self._params[self._year_field] = str(now.year)\n\n entries = self.fetch_url(url, self._params)\n\n if now.month == 12:\n # also get data for next year if we are already in december\n url = self._url.replace(\"{%Y}\", str(now.year + 1))\n if self._year_field is not None:\n self._params[self._year_field] = str(now.year + 1)\n\n try:\n entries.extend(self.fetch_url(url, self._params))\n except Exception:\n # ignore if fetch for next year fails\n pass\n return entries\n else:\n return self.fetch_url(self._url, self._params)\n elif self._file is not None:\n return self.fetch_file(self._file)\n\n def fetch_url(self, url, params=None):\n # get ics file\n if self._method == \"GET\":\n r = requests.get(\n url, params=params, headers=HEADERS, verify=self._verify_ssl\n )\n elif self._method == \"POST\":\n r = requests.post(\n url, data=params, headers=HEADERS, verify=self._verify_ssl\n )\n else:\n raise RuntimeError(\n \"Error: unknown method to fetch URL, use GET or POST; got {self._method}\"\n )\n r.raise_for_status()\n\n r.encoding = \"utf-8\" # requests doesn't guess the encoding correctly\n return self._convert(r.text)\n\n def fetch_file(self, file):\n try:\n f = open(file)\n except FileNotFoundError as e:\n _LOGGER.error(f\"Working directory: '{getcwd()}'\")\n raise\n return self._convert(f.read())\n\n def _convert(self, data):\n 
dates = self._ics.convert(data)\n\n entries = []\n for d in dates:\n entries.append(Collection(d[0], d[1]))\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py"}], "after_files": [{"content": "import datetime\nimport logging\nfrom os import getcwd\nfrom pathlib import Path\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\nfrom waste_collection_schedule.service.ICS_v1 import ICS_v1\n\nTITLE = \"ICS\"\nDESCRIPTION = \"Source for ICS based schedules.\"\nURL = None\nTEST_CASES = {\n \"Dortmund, Dudenstr. 5\": {\n \"url\": \"https://www.edg.de/ical/kalender.ics?Strasse=Dudenstr.&Hausnummer=5&Erinnerung=-1&Abfallart=1,2,3,4\"\n },\n \"Leipzig, Sandgrubenweg 27\": {\n \"url\": \"https://stadtreinigung-leipzig.de/wir-kommen-zu-ihnen/abfallkalender/ical.ics?position_nos=38296&name=Sandgrubenweg%2027\"\n },\n \"Ludwigsburg\": {\n \"url\": \"https://kundenportal.avl-lb.de/WasteManagementLudwigsburg/WasteManagementServiceServlet?ApplicationName=Calendar&SubmitAction=sync&StandortID=950230001&AboID=8188&Fra=BT;RT;PT;LT;GT\"\n },\n \"Esslingen, Bahnhof\": {\n \"url\": \"https://api.abfall.io/?kh=DaA02103019b46345f1998698563DaAd&t=ics&s=1a862df26f6943997cef90233877a4fe\"\n },\n \"Test File\": {\n # Path is used here to allow to call the Source from any location.\n # This is not required in a yaml configuration!\n \"file\": str(Path(__file__).resolve().parents[1].joinpath(\"test/test.ics\"))\n },\n \"Test File (recurring)\": {\n # Path is used here to allow to call the Source from any location.\n # This is not required in a yaml configuration!\n \"file\": str(Path(__file__).resolve().parents[1].joinpath(\"test/recurring.ics\"))\n },\n \"M\u00fcnchen, Bahnstr. 
11\": {\n \"url\": \"https://www.awm-muenchen.de/entsorgen/abfuhrkalender?tx_awmabfuhrkalender_abfuhrkalender%5Bhausnummer%5D=11&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BB%5D=1%2F2%3BU&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BP%5D=1%2F2%3BG&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BR%5D=001%3BU&tx_awmabfuhrkalender_abfuhrkalender%5Bsection%5D=ics&tx_awmabfuhrkalender_abfuhrkalender%5Bsinglestandplatz%5D=false&tx_awmabfuhrkalender_abfuhrkalender%5Bstandplatzwahl%5D=true&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Bbio%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Bpapier%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Brestmuell%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstrasse%5D=bahnstr.&tx_awmabfuhrkalender_abfuhrkalender%5Byear%5D={%Y}\",\n \"version\": 1,\n },\n \"Buxtehude, Am Berg\": {\n \"url\": \"https://abfall.landkreis-stade.de/api_v2/collection_dates/1/ort/10/strasse/90/hausnummern/1/abfallarten/R02-R04-B02-D04-D12-P04-R12-R14-W0-R22-R24-R31/kalender.ics\"\n },\n # \"Hausm\u00fcllinfo: ASR Chemnitz\": {\n # \"url\": \"https://asc.hausmuell.info/ics/ics.php\",\n # \"method\": \"POST\",\n # \"params\": {\n # \"hidden_id_egebiet\": 439087,\n # \"input_ort\": \"Chemnitz\",\n # \"input_str\": \"Stra\u00dfe der Nationen\",\n # \"input_hnr\": 2,\n # \"hidden_send_btn\": \"ics\",\n # # \"hiddenYear\": 2021,\n # \"hidden_id_ort\": 10,\n # \"hidden_id_ortsteil\": 0,\n # \"hidden_id_str\": 17814,\n # \"hidden_id_hnr\": 5538100,\n # \"hidden_kalenderart\": \"privat\",\n # \"showBinsBio\": \"on\",\n # \"showBinsRest\": \"on\",\n # \"showBinsRest_rc\": \"on\",\n # \"showBinsPapier\": \"on\",\n # \"showBinsOrganic\": \"on\",\n # \"showBinsXmas\": \"on\",\n # \"showBinsDsd\": \"on\",\n # \"showBinsProb\": \"on\",\n # },\n # \"year_field\": \"hiddenYear\",\n # },\n \"Abfall Zollernalbkreis, Ebingen\": {\n \"url\": \"https://www.abfallkalender-zak.de\",\n \"params\": {\n \"city\": \"2,3,4\",\n \"street\": \"3\",\n \"types[]\": [\n \"restmuell\",\n \"gelbersack\",\n \"papiertonne\",\n \"biomuell\",\n \"gruenabfall\",\n \"schadstoffsammlung\",\n \"altpapiersammlung\",\n \"schrottsammlung\",\n \"weihnachtsbaeume\",\n \"elektrosammlung\",\n ],\n \"go_ics\": \"Download\",\n },\n \"year_field\": \"year\",\n },\n \"Detmold\": {\n \"url\": \"https://abfuhrkalender.detmold.de/icsmaker.php\",\n \"method\": \"GET\",\n \"params\": {\"strid\": 338},\n \"year_field\": \"year\",\n },\n \"EAW Rheingau Taunus\": {\n \"url\": \"https://www.eaw-rheingau-taunus.de/abfallsammlung/abfuhrtermine/feed.ics?tx_vierwdeaw_garbagecalendarics%5Baction%5D=ics&tx_vierwdeaw_garbagecalendarics%5Bcontroller%5D=GarbageCalendar&tx_vierwdeaw_garbagecalendarics%5Bstreet%5D=38\",\n \"split_at\": \",\",\n },\n \"Recollect, Ottawa\": {\n \"url\": \"https://recollect.a.ssl.fastly.net/api/places/BCCDF30E-578B-11E4-AD38-5839C200407A/services/208/events.en.ics\",\n \"split_at\": \"\\\\, [and ]*\",\n },\n \"Frankfurt am Main, Achenbachstrasse 3\": {\n \"url\": \"https://www.fes-frankfurt.de/abfallkalender/QWNoZW5iYWNoc3RyLnwzfDYwNTk2.ics\"\n },\n \"Erlensee, Am Haspel\": {\n \"url\": \"https://sperrmuell.erlensee.de/?type=reminder\",\n \"method\": \"POST\",\n \"params\": {\n \"street\": 8,\n \"eventType[]\": [27, 23, 19, 20, 21, 24, 22, 25, 26],\n \"timeframe\": 23,\n \"download\": \"ical\",\n },\n },\n}\n\n\nHEADERS = {\"user-agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64)\"}\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Source:\n 
def __init__(\n self,\n url=None,\n file=None,\n offset=None,\n params=None,\n year_field=None,\n method=\"GET\",\n regex=None,\n split_at=None,\n version=2,\n verify_ssl=True,\n headers={},\n ):\n self._url = url\n self._file = file\n if bool(self._url is not None) == bool(self._file is not None):\n raise RuntimeError(\"Specify either url or file\")\n if version == 1:\n self._ics = ICS_v1(offset=offset, split_at=split_at, regex=regex)\n else:\n self._ics = ICS(offset=offset, split_at=split_at, regex=regex)\n self._params = params\n self._year_field = year_field # replace this field in params with current year\n self._method = method # The method to send the params\n self._verify_ssl = verify_ssl\n self._headers = HEADERS\n self._headers.update(headers)\n\n def fetch(self):\n if self._url is not None:\n if \"{%Y}\" in self._url or self._year_field is not None:\n # url contains wildcard or params contains year field\n now = datetime.datetime.now()\n\n # replace year in url\n url = self._url.replace(\"{%Y}\", str(now.year))\n\n # replace year in params\n if self._year_field is not None:\n if self._params is None:\n raise RuntimeError(\"year_field specified without params\")\n self._params[self._year_field] = str(now.year)\n\n entries = self.fetch_url(url, self._params)\n\n if now.month == 12:\n # also get data for next year if we are already in december\n url = self._url.replace(\"{%Y}\", str(now.year + 1))\n if self._year_field is not None:\n self._params[self._year_field] = str(now.year + 1)\n\n try:\n entries.extend(self.fetch_url(url, self._params))\n except Exception:\n # ignore if fetch for next year fails\n pass\n return entries\n else:\n return self.fetch_url(self._url, self._params)\n elif self._file is not None:\n return self.fetch_file(self._file)\n\n def fetch_url(self, url, params=None):\n # get ics file\n if self._method == \"GET\":\n r = requests.get(\n url, params=params, headers=self._headers, verify=self._verify_ssl\n )\n elif self._method == \"POST\":\n r = requests.post(\n url, data=params, headers=self._headers, verify=self._verify_ssl\n )\n else:\n raise RuntimeError(\n \"Error: unknown method to fetch URL, use GET or POST; got {self._method}\"\n )\n r.raise_for_status()\n\n r.encoding = \"utf-8\" # requests doesn't guess the encoding correctly\n return self._convert(r.text)\n\n def fetch_file(self, file):\n try:\n f = open(file)\n except FileNotFoundError as e:\n _LOGGER.error(f\"Working directory: '{getcwd()}'\")\n raise\n return self._convert(f.read())\n\n def _convert(self, data):\n dates = self._ics.convert(data)\n\n entries = []\n for d in dates:\n entries.append(Collection(d[0], d[1]))\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py"}]} | 3,643 | 629 |
gh_patches_debug_19225 | rasdani/github-patches | git_diff | docker__docker-py-971 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
tlsv1 alert protocol version on 1.7.1 and 1.7.2 but not on 1.7.0
Similar to #949, I'm discovering issues with the latest versions of `docker-py` running against a docker 1.10.2 instance. I'm using `docker.utils.kwargs_from_env(assert_hostname=False)`. Things work fine with version 1.7.0.
Docker client is initialized via
```
client = docker.Client(
version='auto',
**docker.utils.kwargs_from_env(assert_hostname=False))
```
with docker environment variables being set to the following (via docker-machine)
```
DOCKER_HOST=tcp://192.168.156.137:2376
DOCKER_MACHINE_NAME=dev2
DOCKER_TLS_VERIFY=1
DOCKER_CERT_PATH=/Users/benjixx/.docker/machine/machines/dev2
```
docker-py 1.7.1 and 1.7.2 now raise the following exception:
```
DockerException: Error while fetching server API version: [Errno 1] _ssl.c:507: error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version
```
Any idea what's happening here?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/tls.py`
Content:
```
1 import os
2
3 from . import errors
4 from .ssladapter import ssladapter
5
6
7 class TLSConfig(object):
8 cert = None
9 ca_cert = None
10 verify = None
11 ssl_version = None
12
13 def __init__(self, client_cert=None, ca_cert=None, verify=None,
14 ssl_version=None, assert_hostname=None,
15 assert_fingerprint=None):
16 # Argument compatibility/mapping with
17 # https://docs.docker.com/engine/articles/https/
18 # This diverges from the Docker CLI in that users can specify 'tls'
19 # here, but also disable any public/default CA pool verification by
20 # leaving tls_verify=False
21
22 self.ssl_version = ssl_version
23 self.assert_hostname = assert_hostname
24 self.assert_fingerprint = assert_fingerprint
25
26 # "tls" and "tls_verify" must have both or neither cert/key files
27 # In either case, Alert the user when both are expected, but any are
28 # missing.
29
30 if client_cert:
31 try:
32 tls_cert, tls_key = client_cert
33 except ValueError:
34 raise errors.TLSParameterError(
35 'client_config must be a tuple of'
36 ' (client certificate, key file)'
37 )
38
39 if not (tls_cert and tls_key) or (not os.path.isfile(tls_cert) or
40 not os.path.isfile(tls_key)):
41 raise errors.TLSParameterError(
42 'Path to a certificate and key files must be provided'
43 ' through the client_config param'
44 )
45 self.cert = (tls_cert, tls_key)
46
47 # If verify is set, make sure the cert exists
48 self.verify = verify
49 self.ca_cert = ca_cert
50 if self.verify and self.ca_cert and not os.path.isfile(self.ca_cert):
51 raise errors.TLSParameterError(
52 'Invalid CA certificate provided for `tls_ca_cert`.'
53 )
54
55 def configure_client(self, client):
56 client.ssl_version = self.ssl_version
57
58 if self.verify and self.ca_cert:
59 client.verify = self.ca_cert
60 else:
61 client.verify = self.verify
62
63 if self.cert:
64 client.cert = self.cert
65
66 client.mount('https://', ssladapter.SSLAdapter(
67 ssl_version=self.ssl_version,
68 assert_hostname=self.assert_hostname,
69 assert_fingerprint=self.assert_fingerprint,
70 ))
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docker/tls.py b/docker/tls.py
--- a/docker/tls.py
+++ b/docker/tls.py
@@ -1,4 +1,5 @@
import os
+import ssl
from . import errors
from .ssladapter import ssladapter
@@ -19,10 +20,14 @@
# here, but also disable any public/default CA pool verification by
# leaving tls_verify=False
- self.ssl_version = ssl_version
self.assert_hostname = assert_hostname
self.assert_fingerprint = assert_fingerprint
+ # TLS v1.0 seems to be the safest default; SSLv23 fails in mysterious
+ # ways: https://github.com/docker/docker-py/issues/963
+
+ self.ssl_version = ssl_version or ssl.PROTOCOL_TLSv1
+
# "tls" and "tls_verify" must have both or neither cert/key files
# In either case, Alert the user when both are expected, but any are
# missing.
| {"golden_diff": "diff --git a/docker/tls.py b/docker/tls.py\n--- a/docker/tls.py\n+++ b/docker/tls.py\n@@ -1,4 +1,5 @@\n import os\n+import ssl\n \n from . import errors\n from .ssladapter import ssladapter\n@@ -19,10 +20,14 @@\n # here, but also disable any public/default CA pool verification by\n # leaving tls_verify=False\n \n- self.ssl_version = ssl_version\n self.assert_hostname = assert_hostname\n self.assert_fingerprint = assert_fingerprint\n \n+ # TLS v1.0 seems to be the safest default; SSLv23 fails in mysterious\n+ # ways: https://github.com/docker/docker-py/issues/963\n+\n+ self.ssl_version = ssl_version or ssl.PROTOCOL_TLSv1\n+\n # \"tls\" and \"tls_verify\" must have both or neither cert/key files\n # In either case, Alert the user when both are expected, but any are\n # missing.\n", "issue": "tlsv1 alert protocol version on 1.7.1 and 1.7.2 but not on 1.7.0\nSimilar to #949 I'm discovering issues with latest versions of `docker-py` running against docker 1.10.2 instance. I'm using `docker.utils.kwargs_from_env(assert_hostname=False)`. Things work fine with version 1.7.0.\n\nDocker client is initialized via\n\n```\nclient = docker.Client(\n version='auto',\n **docker.utils.kwargs_from_env(assert_hostname=False))\n```\n\nwith docker environment variables being set to the following (via docker-machine)\n\n```\nDOCKER_HOST=tcp://192.168.156.137:2376\nDOCKER_MACHINE_NAME=dev2\nDOCKER_TLS_VERIFY=1\nDOCKER_CERT_PATH=/Users/benjixx/.docker/machine/machines/dev2\n```\n\ndocker-py 1.7.1 and 1.7.2 now raise the following exception:\n\n```\nDockerException: Error while fetching server API version: [Errno 1] _ssl.c:507: error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version\n```\n\nAny idea what's happening here?\n\n", "before_files": [{"content": "import os\n\nfrom . 
import errors\nfrom .ssladapter import ssladapter\n\n\nclass TLSConfig(object):\n cert = None\n ca_cert = None\n verify = None\n ssl_version = None\n\n def __init__(self, client_cert=None, ca_cert=None, verify=None,\n ssl_version=None, assert_hostname=None,\n assert_fingerprint=None):\n # Argument compatibility/mapping with\n # https://docs.docker.com/engine/articles/https/\n # This diverges from the Docker CLI in that users can specify 'tls'\n # here, but also disable any public/default CA pool verification by\n # leaving tls_verify=False\n\n self.ssl_version = ssl_version\n self.assert_hostname = assert_hostname\n self.assert_fingerprint = assert_fingerprint\n\n # \"tls\" and \"tls_verify\" must have both or neither cert/key files\n # In either case, Alert the user when both are expected, but any are\n # missing.\n\n if client_cert:\n try:\n tls_cert, tls_key = client_cert\n except ValueError:\n raise errors.TLSParameterError(\n 'client_config must be a tuple of'\n ' (client certificate, key file)'\n )\n\n if not (tls_cert and tls_key) or (not os.path.isfile(tls_cert) or\n not os.path.isfile(tls_key)):\n raise errors.TLSParameterError(\n 'Path to a certificate and key files must be provided'\n ' through the client_config param'\n )\n self.cert = (tls_cert, tls_key)\n\n # If verify is set, make sure the cert exists\n self.verify = verify\n self.ca_cert = ca_cert\n if self.verify and self.ca_cert and not os.path.isfile(self.ca_cert):\n raise errors.TLSParameterError(\n 'Invalid CA certificate provided for `tls_ca_cert`.'\n )\n\n def configure_client(self, client):\n client.ssl_version = self.ssl_version\n\n if self.verify and self.ca_cert:\n client.verify = self.ca_cert\n else:\n client.verify = self.verify\n\n if self.cert:\n client.cert = self.cert\n\n client.mount('https://', ssladapter.SSLAdapter(\n ssl_version=self.ssl_version,\n assert_hostname=self.assert_hostname,\n assert_fingerprint=self.assert_fingerprint,\n ))\n", "path": "docker/tls.py"}], "after_files": [{"content": "import os\nimport ssl\n\nfrom . 
import errors\nfrom .ssladapter import ssladapter\n\n\nclass TLSConfig(object):\n cert = None\n ca_cert = None\n verify = None\n ssl_version = None\n\n def __init__(self, client_cert=None, ca_cert=None, verify=None,\n ssl_version=None, assert_hostname=None,\n assert_fingerprint=None):\n # Argument compatibility/mapping with\n # https://docs.docker.com/engine/articles/https/\n # This diverges from the Docker CLI in that users can specify 'tls'\n # here, but also disable any public/default CA pool verification by\n # leaving tls_verify=False\n\n self.assert_hostname = assert_hostname\n self.assert_fingerprint = assert_fingerprint\n\n # TLS v1.0 seems to be the safest default; SSLv23 fails in mysterious\n # ways: https://github.com/docker/docker-py/issues/963\n\n self.ssl_version = ssl_version or ssl.PROTOCOL_TLSv1\n\n # \"tls\" and \"tls_verify\" must have both or neither cert/key files\n # In either case, Alert the user when both are expected, but any are\n # missing.\n\n if client_cert:\n try:\n tls_cert, tls_key = client_cert\n except ValueError:\n raise errors.TLSParameterError(\n 'client_config must be a tuple of'\n ' (client certificate, key file)'\n )\n\n if not (tls_cert and tls_key) or (not os.path.isfile(tls_cert) or\n not os.path.isfile(tls_key)):\n raise errors.TLSParameterError(\n 'Path to a certificate and key files must be provided'\n ' through the client_config param'\n )\n self.cert = (tls_cert, tls_key)\n\n # If verify is set, make sure the cert exists\n self.verify = verify\n self.ca_cert = ca_cert\n if self.verify and self.ca_cert and not os.path.isfile(self.ca_cert):\n raise errors.TLSParameterError(\n 'Invalid CA certificate provided for `tls_ca_cert`.'\n )\n\n def configure_client(self, client):\n client.ssl_version = self.ssl_version\n\n if self.verify and self.ca_cert:\n client.verify = self.ca_cert\n else:\n client.verify = self.verify\n\n if self.cert:\n client.cert = self.cert\n\n client.mount('https://', ssladapter.SSLAdapter(\n ssl_version=self.ssl_version,\n assert_hostname=self.assert_hostname,\n assert_fingerprint=self.assert_fingerprint,\n ))\n", "path": "docker/tls.py"}]} | 1,174 | 223 |
gh_patches_debug_36965 | rasdani/github-patches | git_diff | mne-tools__mne-bids-750 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
STY: Move to pydata-sphinx-theme
Now that NumPy, SciPy (in progress), pandas, and (most importantly I think) MNE-Python have moved to pydata-sphinx-theme, it might be nice to move `mne-bids` over, too, for cohesiveness. Any thoughts or objections?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `doc/conf.py`
Content:
```
1 """Configure details for documentation with sphinx."""
2 import os
3 import sys
4 from datetime import date
5
6 import sphinx_gallery # noqa: F401
7 from sphinx_gallery.sorting import ExampleTitleSortKey
8 import sphinx_bootstrap_theme
9
10 import mne_bids
11
12
13 # If extensions (or modules to document with autodoc) are in another directory,
14 # add these directories to sys.path here. If the directory is relative to the
15 # documentation root, use os.path.abspath to make it absolute, like shown here.
16 curdir = os.path.dirname(__file__)
17 sys.path.append(os.path.abspath(os.path.join(curdir, '..', 'mne_bids')))
18 sys.path.append(os.path.abspath(os.path.join(curdir, 'sphinxext')))
19
20
21 # -- General configuration ------------------------------------------------
22
23 # If your documentation needs a minimal Sphinx version, state it here.
24 #
25 # needs_sphinx = '1.0'
26
27 # Add any Sphinx extension module names here, as strings. They can be
28 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
29 # ones.
30 extensions = [
31 'sphinx.ext.githubpages',
32 'sphinx.ext.autodoc',
33 'sphinx.ext.mathjax',
34 'sphinx.ext.viewcode',
35 'sphinx.ext.autosummary',
36 'sphinx.ext.doctest',
37 'sphinx.ext.intersphinx',
38 'sphinx_gallery.gen_gallery',
39 'numpydoc',
40 'sphinx_copybutton',
41 'gen_cli', # custom extension, see ./sphinxext/gen_cli.py
42 'gh_substitutions', # custom extension, see ./sphinxext/gh_substitutions.py
43 ]
44
45 # configure sphinx-copybutton
46 copybutton_prompt_text = r">>> |\.\.\. |\$ "
47 copybutton_prompt_is_regexp = True
48
49 # configure numpydoc
50 numpydoc_xref_param_type = True
51 numpydoc_class_members_toctree = False
52 numpydoc_attributes_as_param_list = True
53 numpydoc_xref_aliases = {
54 'NibabelImageObject': 'nibabel.spatialimages.SpatialImage',
55 }
56 numpydoc_xref_ignore = {
57 # words
58 'of',
59 }
60
61
62 # generate autosummary even if no references
63 autosummary_generate = True
64 autodoc_default_options = {'inherited-members': None}
65 default_role = 'autolink' # XXX silently allows bad syntax, someone should fix
66
67 # The suffix(es) of source filenames.
68 # You can specify multiple suffix as a list of string:
69 #
70 # source_suffix = ['.rst', '.md']
71 source_suffix = '.rst'
72
73 # The master toctree document.
74 master_doc = 'index'
75
76 # General information about the project.
77 project = u'mne_bids'
78 td = date.today()
79 copyright = u'2017-%s, MNE Developers. Last updated on %s' % (td.year,
80 td.isoformat())
81
82 author = u'MNE Developers'
83
84 # The version info for the project you're documenting, acts as replacement for
85 # |version| and |release|, also used in various other places throughout the
86 # built documents.
87 #
88 # The short X.Y version.
89 version = mne_bids.__version__
90 # The full version, including alpha/beta/rc tags.
91 release = version
92
93 # List of patterns, relative to source directory, that match files and
94 # directories to ignore when looking for source files.
95 # This patterns also effect to html_static_path and html_extra_path
96 exclude_patterns = ['auto_examples/index.rst', '_build', 'Thumbs.db',
97 '.DS_Store']
98
99 # HTML options (e.g., theme)
100 # see: https://sphinx-bootstrap-theme.readthedocs.io/en/latest/README.html
101 # Clean up sidebar: Do not show "Source" link
102 html_show_sourcelink = False
103
104 html_theme = 'bootstrap'
105 html_theme_path = sphinx_bootstrap_theme.get_html_theme_path()
106
107 # Add any paths that contain templates here, relative to this directory.
108 templates_path = ['_templates']
109 html_static_path = ['_static']
110 html_css_files = ['style.css']
111
112 # Theme options are theme-specific and customize the look and feel of a theme
113 # further. For a list of options available for each theme, see the
114 # documentation.
115 html_theme_options = {
116 'navbar_title': 'MNE-BIDS',
117 'bootswatch_theme': "flatly",
118 'navbar_sidebarrel': False, # no "previous / next" navigation
119 'navbar_pagenav': False, # no "Page" navigation in sidebar
120 'bootstrap_version': "3",
121 'navbar_links': [
122 ("News", "whats_new"),
123 ("Install", "install"),
124 ("Use", "use"),
125 ("API", "api"),
126 ("CLI", "generated/cli"),
127 ("Contribute!", "contribute")
128 ]}
129
130 html_sidebars = {'**': ['localtoc.html']}
131
132 # Example configuration for intersphinx: refer to the Python standard library.
133 intersphinx_mapping = {
134 'python': ('https://docs.python.org/3', None),
135 'mne': ('https://mne.tools/dev', None),
136 'numpy': ('https://numpy.org/devdocs', None),
137 'scipy': ('https://scipy.github.io/devdocs', None),
138 'matplotlib': ('https://matplotlib.org', None),
139 'nilearn': ('https://nilearn.github.io', None),
140 'pandas': ('https://pandas.pydata.org/pandas-docs/dev', None),
141 'nibabel': ('https://nipy.org/nibabel', None),
142 }
143 intersphinx_timeout = 5
144
145 # Resolve binder filepath_prefix. From the docs:
146 # "A prefix to append to the filepath in the Binder links. You should use this
147 # if you will store your built documentation in a sub-folder of a repository,
148 # instead of in the root."
149 # we will store dev docs in a `dev` subdirectory and all other docs in a
150 # directory "v" + version_str. E.g., "v0.3"
151 if 'dev' in version:
152 filepath_prefix = 'dev'
153 else:
154 filepath_prefix = 'v{}'.format(version)
155
156 sphinx_gallery_conf = {
157 'doc_module': 'mne_bids',
158 'reference_url': {
159 'mne_bids': None,
160 },
161 'backreferences_dir': 'generated',
162 'examples_dirs': '../examples',
163 'within_subsection_order': ExampleTitleSortKey,
164 'gallery_dirs': 'auto_examples',
165 'filename_pattern': '^((?!sgskip).)*$',
166 'binder': {
167 # Required keys
168 'org': 'mne-tools',
169 'repo': 'mne-bids',
170 'branch': 'gh-pages', # noqa: E501 Can be any branch, tag, or commit hash. Use a branch that hosts your docs.
171 'binderhub_url': 'https://mybinder.org', # noqa: E501 Any URL of a binderhub deployment. Must be full URL (e.g. https://mybinder.org).
172 'filepath_prefix': filepath_prefix, # noqa: E501 A prefix to prepend to any filepaths in Binder links.
173 'dependencies': [
174 '../test_requirements.txt',
175 './requirements.txt',
176 ],
177 }
178 }
179
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/doc/conf.py b/doc/conf.py
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -5,7 +5,6 @@
import sphinx_gallery # noqa: F401
from sphinx_gallery.sorting import ExampleTitleSortKey
-import sphinx_bootstrap_theme
import mne_bids
@@ -74,7 +73,7 @@
master_doc = 'index'
# General information about the project.
-project = u'mne_bids'
+project = u'MNE-BIDS'
td = date.today()
copyright = u'2017-%s, MNE Developers. Last updated on %s' % (td.year,
td.isoformat())
@@ -97,12 +96,10 @@
'.DS_Store']
# HTML options (e.g., theme)
-# see: https://sphinx-bootstrap-theme.readthedocs.io/en/latest/README.html
-# Clean up sidebar: Do not show "Source" link
html_show_sourcelink = False
+html_copy_source = False
-html_theme = 'bootstrap'
-html_theme_path = sphinx_bootstrap_theme.get_html_theme_path()
+html_theme = 'pydata_sphinx_theme'
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -113,21 +110,31 @@
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
- 'navbar_title': 'MNE-BIDS',
- 'bootswatch_theme': "flatly",
- 'navbar_sidebarrel': False, # no "previous / next" navigation
- 'navbar_pagenav': False, # no "Page" navigation in sidebar
- 'bootstrap_version': "3",
- 'navbar_links': [
- ("News", "whats_new"),
- ("Install", "install"),
- ("Use", "use"),
- ("API", "api"),
- ("CLI", "generated/cli"),
- ("Contribute!", "contribute")
- ]}
-
-html_sidebars = {'**': ['localtoc.html']}
+ 'icon_links': [
+ dict(name='GitHub',
+ url='https://github.com/mne-tools/mne-bids',
+ icon='fab fa-github-square'),
+ ],
+ 'icon_links_label': 'Quick Links', # for screen reader
+ 'use_edit_page_button': False,
+ 'navigation_with_keys': False,
+ 'show_toc_level': 1,
+}
+
+html_context = {
+ 'versions_dropdown': {
+ 'dev': 'v0.8 (devel)',
+ 'stable': 'v0.7 (stable)',
+ 'v0.6': 'v0.6',
+ 'v0.5': 'v0.5',
+ 'v0.4': 'v0.4',
+ 'v0.3': 'v0.3',
+ 'v0.2': 'v0.2',
+ 'v0.1': 'v0.1',
+ },
+}
+
+html_sidebars = {}
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {
| {"golden_diff": "diff --git a/doc/conf.py b/doc/conf.py\n--- a/doc/conf.py\n+++ b/doc/conf.py\n@@ -5,7 +5,6 @@\n \n import sphinx_gallery # noqa: F401\n from sphinx_gallery.sorting import ExampleTitleSortKey\n-import sphinx_bootstrap_theme\n \n import mne_bids\n \n@@ -74,7 +73,7 @@\n master_doc = 'index'\n \n # General information about the project.\n-project = u'mne_bids'\n+project = u'MNE-BIDS'\n td = date.today()\n copyright = u'2017-%s, MNE Developers. Last updated on %s' % (td.year,\n td.isoformat())\n@@ -97,12 +96,10 @@\n '.DS_Store']\n \n # HTML options (e.g., theme)\n-# see: https://sphinx-bootstrap-theme.readthedocs.io/en/latest/README.html\n-# Clean up sidebar: Do not show \"Source\" link\n html_show_sourcelink = False\n+html_copy_source = False\n \n-html_theme = 'bootstrap'\n-html_theme_path = sphinx_bootstrap_theme.get_html_theme_path()\n+html_theme = 'pydata_sphinx_theme'\n \n # Add any paths that contain templates here, relative to this directory.\n templates_path = ['_templates']\n@@ -113,21 +110,31 @@\n # further. For a list of options available for each theme, see the\n # documentation.\n html_theme_options = {\n- 'navbar_title': 'MNE-BIDS',\n- 'bootswatch_theme': \"flatly\",\n- 'navbar_sidebarrel': False, # no \"previous / next\" navigation\n- 'navbar_pagenav': False, # no \"Page\" navigation in sidebar\n- 'bootstrap_version': \"3\",\n- 'navbar_links': [\n- (\"News\", \"whats_new\"),\n- (\"Install\", \"install\"),\n- (\"Use\", \"use\"),\n- (\"API\", \"api\"),\n- (\"CLI\", \"generated/cli\"),\n- (\"Contribute!\", \"contribute\")\n- ]}\n-\n-html_sidebars = {'**': ['localtoc.html']}\n+ 'icon_links': [\n+ dict(name='GitHub',\n+ url='https://github.com/mne-tools/mne-bids',\n+ icon='fab fa-github-square'),\n+ ],\n+ 'icon_links_label': 'Quick Links', # for screen reader\n+ 'use_edit_page_button': False,\n+ 'navigation_with_keys': False,\n+ 'show_toc_level': 1,\n+}\n+\n+html_context = {\n+ 'versions_dropdown': {\n+ 'dev': 'v0.8 (devel)',\n+ 'stable': 'v0.7 (stable)',\n+ 'v0.6': 'v0.6',\n+ 'v0.5': 'v0.5',\n+ 'v0.4': 'v0.4',\n+ 'v0.3': 'v0.3',\n+ 'v0.2': 'v0.2',\n+ 'v0.1': 'v0.1',\n+ },\n+}\n+\n+html_sidebars = {}\n \n # Example configuration for intersphinx: refer to the Python standard library.\n intersphinx_mapping = {\n", "issue": "STY: Move to pydata-sphinx-theme\nNow that NumPy, SciPy (in progress), pandas, and (most importantly I think) MNE-Python have moved to pydata-sphinx-theme, it might be nice to move `mne-bids` over, too, for cohesiveness. Any thoughts or objections?\n", "before_files": [{"content": "\"\"\"Configure details for documentation with sphinx.\"\"\"\nimport os\nimport sys\nfrom datetime import date\n\nimport sphinx_gallery # noqa: F401\nfrom sphinx_gallery.sorting import ExampleTitleSortKey\nimport sphinx_bootstrap_theme\n\nimport mne_bids\n\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\ncurdir = os.path.dirname(__file__)\nsys.path.append(os.path.abspath(os.path.join(curdir, '..', 'mne_bids')))\nsys.path.append(os.path.abspath(os.path.join(curdir, 'sphinxext')))\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.githubpages',\n 'sphinx.ext.autodoc',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.doctest',\n 'sphinx.ext.intersphinx',\n 'sphinx_gallery.gen_gallery',\n 'numpydoc',\n 'sphinx_copybutton',\n 'gen_cli', # custom extension, see ./sphinxext/gen_cli.py\n 'gh_substitutions', # custom extension, see ./sphinxext/gh_substitutions.py\n]\n\n# configure sphinx-copybutton\ncopybutton_prompt_text = r\">>> |\\.\\.\\. |\\$ \"\ncopybutton_prompt_is_regexp = True\n\n# configure numpydoc\nnumpydoc_xref_param_type = True\nnumpydoc_class_members_toctree = False\nnumpydoc_attributes_as_param_list = True\nnumpydoc_xref_aliases = {\n 'NibabelImageObject': 'nibabel.spatialimages.SpatialImage',\n}\nnumpydoc_xref_ignore = {\n # words\n 'of',\n}\n\n\n# generate autosummary even if no references\nautosummary_generate = True\nautodoc_default_options = {'inherited-members': None}\ndefault_role = 'autolink' # XXX silently allows bad syntax, someone should fix\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'mne_bids'\ntd = date.today()\ncopyright = u'2017-%s, MNE Developers. Last updated on %s' % (td.year,\n td.isoformat())\n\nauthor = u'MNE Developers'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = mne_bids.__version__\n# The full version, including alpha/beta/rc tags.\nrelease = version\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = ['auto_examples/index.rst', '_build', 'Thumbs.db',\n '.DS_Store']\n\n# HTML options (e.g., theme)\n# see: https://sphinx-bootstrap-theme.readthedocs.io/en/latest/README.html\n# Clean up sidebar: Do not show \"Source\" link\nhtml_show_sourcelink = False\n\nhtml_theme = 'bootstrap'\nhtml_theme_path = sphinx_bootstrap_theme.get_html_theme_path()\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\nhtml_static_path = ['_static']\nhtml_css_files = ['style.css']\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. 
For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n 'navbar_title': 'MNE-BIDS',\n 'bootswatch_theme': \"flatly\",\n 'navbar_sidebarrel': False, # no \"previous / next\" navigation\n 'navbar_pagenav': False, # no \"Page\" navigation in sidebar\n 'bootstrap_version': \"3\",\n 'navbar_links': [\n (\"News\", \"whats_new\"),\n (\"Install\", \"install\"),\n (\"Use\", \"use\"),\n (\"API\", \"api\"),\n (\"CLI\", \"generated/cli\"),\n (\"Contribute!\", \"contribute\")\n ]}\n\nhtml_sidebars = {'**': ['localtoc.html']}\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3', None),\n 'mne': ('https://mne.tools/dev', None),\n 'numpy': ('https://numpy.org/devdocs', None),\n 'scipy': ('https://scipy.github.io/devdocs', None),\n 'matplotlib': ('https://matplotlib.org', None),\n 'nilearn': ('https://nilearn.github.io', None),\n 'pandas': ('https://pandas.pydata.org/pandas-docs/dev', None),\n 'nibabel': ('https://nipy.org/nibabel', None),\n}\nintersphinx_timeout = 5\n\n# Resolve binder filepath_prefix. From the docs:\n# \"A prefix to append to the filepath in the Binder links. You should use this\n# if you will store your built documentation in a sub-folder of a repository,\n# instead of in the root.\"\n# we will store dev docs in a `dev` subdirectory and all other docs in a\n# directory \"v\" + version_str. E.g., \"v0.3\"\nif 'dev' in version:\n filepath_prefix = 'dev'\nelse:\n filepath_prefix = 'v{}'.format(version)\n\nsphinx_gallery_conf = {\n 'doc_module': 'mne_bids',\n 'reference_url': {\n 'mne_bids': None,\n },\n 'backreferences_dir': 'generated',\n 'examples_dirs': '../examples',\n 'within_subsection_order': ExampleTitleSortKey,\n 'gallery_dirs': 'auto_examples',\n 'filename_pattern': '^((?!sgskip).)*$',\n 'binder': {\n # Required keys\n 'org': 'mne-tools',\n 'repo': 'mne-bids',\n 'branch': 'gh-pages', # noqa: E501 Can be any branch, tag, or commit hash. Use a branch that hosts your docs.\n 'binderhub_url': 'https://mybinder.org', # noqa: E501 Any URL of a binderhub deployment. Must be full URL (e.g. https://mybinder.org).\n 'filepath_prefix': filepath_prefix, # noqa: E501 A prefix to prepend to any filepaths in Binder links.\n 'dependencies': [\n '../test_requirements.txt',\n './requirements.txt',\n ],\n }\n}\n", "path": "doc/conf.py"}], "after_files": [{"content": "\"\"\"Configure details for documentation with sphinx.\"\"\"\nimport os\nimport sys\nfrom datetime import date\n\nimport sphinx_gallery # noqa: F401\nfrom sphinx_gallery.sorting import ExampleTitleSortKey\n\nimport mne_bids\n\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\ncurdir = os.path.dirname(__file__)\nsys.path.append(os.path.abspath(os.path.join(curdir, '..', 'mne_bids')))\nsys.path.append(os.path.abspath(os.path.join(curdir, 'sphinxext')))\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.githubpages',\n 'sphinx.ext.autodoc',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.doctest',\n 'sphinx.ext.intersphinx',\n 'sphinx_gallery.gen_gallery',\n 'numpydoc',\n 'sphinx_copybutton',\n 'gen_cli', # custom extension, see ./sphinxext/gen_cli.py\n 'gh_substitutions', # custom extension, see ./sphinxext/gh_substitutions.py\n]\n\n# configure sphinx-copybutton\ncopybutton_prompt_text = r\">>> |\\.\\.\\. |\\$ \"\ncopybutton_prompt_is_regexp = True\n\n# configure numpydoc\nnumpydoc_xref_param_type = True\nnumpydoc_class_members_toctree = False\nnumpydoc_attributes_as_param_list = True\nnumpydoc_xref_aliases = {\n 'NibabelImageObject': 'nibabel.spatialimages.SpatialImage',\n}\nnumpydoc_xref_ignore = {\n # words\n 'of',\n}\n\n\n# generate autosummary even if no references\nautosummary_generate = True\nautodoc_default_options = {'inherited-members': None}\ndefault_role = 'autolink' # XXX silently allows bad syntax, someone should fix\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'MNE-BIDS'\ntd = date.today()\ncopyright = u'2017-%s, MNE Developers. Last updated on %s' % (td.year,\n td.isoformat())\n\nauthor = u'MNE Developers'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = mne_bids.__version__\n# The full version, including alpha/beta/rc tags.\nrelease = version\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = ['auto_examples/index.rst', '_build', 'Thumbs.db',\n '.DS_Store']\n\n# HTML options (e.g., theme)\nhtml_show_sourcelink = False\nhtml_copy_source = False\n\nhtml_theme = 'pydata_sphinx_theme'\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\nhtml_static_path = ['_static']\nhtml_css_files = ['style.css']\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. 
For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n 'icon_links': [\n dict(name='GitHub',\n url='https://github.com/mne-tools/mne-bids',\n icon='fab fa-github-square'),\n ],\n 'icon_links_label': 'Quick Links', # for screen reader\n 'use_edit_page_button': False,\n 'navigation_with_keys': False,\n 'show_toc_level': 1,\n}\n\nhtml_context = {\n 'versions_dropdown': {\n 'dev': 'v0.8 (devel)',\n 'stable': 'v0.7 (stable)',\n 'v0.6': 'v0.6',\n 'v0.5': 'v0.5',\n 'v0.4': 'v0.4',\n 'v0.3': 'v0.3',\n 'v0.2': 'v0.2',\n 'v0.1': 'v0.1',\n },\n}\n\nhtml_sidebars = {}\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3', None),\n 'mne': ('https://mne.tools/dev', None),\n 'numpy': ('https://numpy.org/devdocs', None),\n 'scipy': ('https://scipy.github.io/devdocs', None),\n 'matplotlib': ('https://matplotlib.org', None),\n 'nilearn': ('https://nilearn.github.io', None),\n 'pandas': ('https://pandas.pydata.org/pandas-docs/dev', None),\n 'nibabel': ('https://nipy.org/nibabel', None),\n}\nintersphinx_timeout = 5\n\n# Resolve binder filepath_prefix. From the docs:\n# \"A prefix to append to the filepath in the Binder links. You should use this\n# if you will store your built documentation in a sub-folder of a repository,\n# instead of in the root.\"\n# we will store dev docs in a `dev` subdirectory and all other docs in a\n# directory \"v\" + version_str. E.g., \"v0.3\"\nif 'dev' in version:\n filepath_prefix = 'dev'\nelse:\n filepath_prefix = 'v{}'.format(version)\n\nsphinx_gallery_conf = {\n 'doc_module': 'mne_bids',\n 'reference_url': {\n 'mne_bids': None,\n },\n 'backreferences_dir': 'generated',\n 'examples_dirs': '../examples',\n 'within_subsection_order': ExampleTitleSortKey,\n 'gallery_dirs': 'auto_examples',\n 'filename_pattern': '^((?!sgskip).)*$',\n 'binder': {\n # Required keys\n 'org': 'mne-tools',\n 'repo': 'mne-bids',\n 'branch': 'gh-pages', # noqa: E501 Can be any branch, tag, or commit hash. Use a branch that hosts your docs.\n 'binderhub_url': 'https://mybinder.org', # noqa: E501 Any URL of a binderhub deployment. Must be full URL (e.g. https://mybinder.org).\n 'filepath_prefix': filepath_prefix, # noqa: E501 A prefix to prepend to any filepaths in Binder links.\n 'dependencies': [\n '../test_requirements.txt',\n './requirements.txt',\n ],\n }\n}\n", "path": "doc/conf.py"}]} | 2,325 | 719 |
gh_patches_debug_12849 | rasdani/github-patches | git_diff | OCA__server-tools-37 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] mass_editing - Search in ir.model.fields not working
I set up mass_editing for res.partner, then I go to:
Settings -> Technical -> Database Structure -> Fields
and go to the Filter with ('model_id','=',61), where 61 is the id of res.partner in res.model, by typing "res.partner" and selecting "Partner" from the suggestions.
Then I get the following error:
File "[..]/mass_editing/mass_editing.py", line 34, in search
model_domain += [('model_id', 'in', map(int, domain[2][1:-1].split(',')))]
TypeError: 'int' object has no attribute '__getitem__'
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mass_editing/mass_editing.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 ##############################################################################
3 #
4 # This module uses OpenERP, Open Source Management Solution Framework.
5 # Copyright (C):
6 # 2012-Today Serpent Consulting Services (<http://www.serpentcs.com>)
7 #
8 # This program is free software: you can redistribute it and/or modify
9 # it under the terms of the GNU General Public License as published by
10 # the Free Software Foundation, either version 3 of the License, or
11 # (at your option) any later version.
12 #
13 # This program is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU General Public License for more details.
17 #
18 # You should have received a copy of the GNU General Public License
19 # along with this program. If not, see <http://www.gnu.org/licenses/>
20 #
21 ##############################################################################
22
23 from openerp.osv import orm, fields, osv
24 from openerp.tools.translate import _
25
26
27 class ir_model_fields(orm.Model):
28 _inherit = 'ir.model.fields'
29
30 def search(
31 self, cr, uid, args, offset=0, limit=0, order=None, context=None,
32 count=False):
33 model_domain = []
34 for domain in args:
35 if domain[0] == 'model_id' and domain[2]\
36 and type(domain[2]) != list:
37 model_domain += [(
38 'model_id', 'in', map(int, domain[2][1:-1].split(',')))]
39 else:
40 model_domain.append(domain)
41 return super(ir_model_fields, self).search(
42 cr, uid, model_domain, offset=offset, limit=limit, order=order,
43 context=context, count=count)
44
45 ir_model_fields()
46
47
48 class mass_object(orm.Model):
49 _name = "mass.object"
50
51 _columns = {
52 'name': fields.char("Name", size=64, required=True, select=1),
53 'model_id': fields.many2one(
54 'ir.model', 'Model', required=True, select=1),
55 'field_ids': fields.many2many(
56 'ir.model.fields', 'mass_field_rel', 'mass_id', 'field_id',
57 'Fields'),
58 'ref_ir_act_window': fields.many2one(
59 'ir.actions.act_window', 'Sidebar Action', readonly=True,
60 help="Sidebar action to make this template available on records \
61 of the related document model"),
62 'ref_ir_value': fields.many2one(
63 'ir.values', 'Sidebar Button', readonly=True,
64 help="Sidebar button to open the sidebar action"),
65 'model_ids': fields.many2many('ir.model', string='Model List')
66 }
67
68 _sql_constraints = [
69 ('name_uniq', 'unique (name)', _('Name must be unique!')),
70 ]
71
72 def onchange_model(self, cr, uid, ids, model_id, context=None):
73 if context is None:
74 context = {}
75 if not model_id:
76 return {'value': {'model_ids': [(6, 0, [])]}}
77 model_ids = [model_id]
78 model_obj = self.pool.get('ir.model')
79 active_model_obj = self.pool.get(model_obj.browse(
80 cr, uid, model_id).model)
81 if active_model_obj._inherits:
82 for key, val in active_model_obj._inherits.items():
83 found_model_ids = model_obj.search(
84 cr, uid, [('model', '=', key)], context=context)
85 model_ids += found_model_ids
86 return {'value': {'model_ids': [(6, 0, model_ids)]}}
87
88 def create_action(self, cr, uid, ids, context=None):
89 vals = {}
90 action_obj = self.pool.get('ir.actions.act_window')
91 ir_values_obj = self.pool.get('ir.values')
92 for data in self.browse(cr, uid, ids, context=context):
93 src_obj = data.model_id.model
94 button_name = _('Mass Editing (%s)') % data.name
95 vals['ref_ir_act_window'] = action_obj.create(cr, uid, {
96 'name': button_name,
97 'type': 'ir.actions.act_window',
98 'res_model': 'mass.editing.wizard',
99 'src_model': src_obj,
100 'view_type': 'form',
101 'context': "{'mass_editing_object' : %d}" % (data.id),
102 'view_mode': 'form,tree',
103 'target': 'new',
104 'auto_refresh': 1,
105 }, context)
106 vals['ref_ir_value'] = ir_values_obj.create(cr, uid, {
107 'name': button_name,
108 'model': src_obj,
109 'key2': 'client_action_multi',
110 'value': (
111 "ir.actions.act_window,"
112 + str(vals['ref_ir_act_window'])),
113 'object': True,
114 }, context)
115 self.write(cr, uid, ids, {
116 'ref_ir_act_window': vals.get('ref_ir_act_window', False),
117 'ref_ir_value': vals.get('ref_ir_value', False),
118 }, context)
119 return True
120
121 def unlink_action(self, cr, uid, ids, context=None):
122 for template in self.browse(cr, uid, ids, context=context):
123 try:
124 if template.ref_ir_act_window:
125 self.pool.get('ir.actions.act_window').unlink(
126 cr, uid, template.ref_ir_act_window.id, context)
127 if template.ref_ir_value:
128 ir_values_obj = self.pool.get('ir.values')
129 ir_values_obj.unlink(
130 cr, uid, template.ref_ir_value.id, context)
131 except:
132 raise osv.except_osv(
133 _("Warning"),
134 _("Deletion of the action record failed."))
135 return True
136
137 def unlink(self, cr, uid, ids, context=None):
138 self.unlink_action(cr, uid, ids, context)
139 return super(mass_object, self).unlink(cr, uid, ids, context)
140
141 def copy(self, cr, uid, record_id, default=None, context=None):
142 if default is None:
143 default = {}
144 default.update({'name': '', 'field_ids': []})
145 return super(mass_object, self).copy(
146 cr, uid, record_id, default, context)
147
148 mass_object()
149 # vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mass_editing/mass_editing.py b/mass_editing/mass_editing.py
--- a/mass_editing/mass_editing.py
+++ b/mass_editing/mass_editing.py
@@ -32,10 +32,11 @@
count=False):
model_domain = []
for domain in args:
- if domain[0] == 'model_id' and domain[2]\
- and type(domain[2]) != list:
- model_domain += [(
- 'model_id', 'in', map(int, domain[2][1:-1].split(',')))]
+ if (len(domain) > 2 and domain[0] == 'model_id'
+ and isinstance(domain[2], basestring)):
+ model_domain += [
+ ('model_id', 'in', map(int, domain[2][1:-1].split(',')))
+ ]
else:
model_domain.append(domain)
return super(ir_model_fields, self).search(
| {"golden_diff": "diff --git a/mass_editing/mass_editing.py b/mass_editing/mass_editing.py\n--- a/mass_editing/mass_editing.py\n+++ b/mass_editing/mass_editing.py\n@@ -32,10 +32,11 @@\n count=False):\n model_domain = []\n for domain in args:\n- if domain[0] == 'model_id' and domain[2]\\\n- and type(domain[2]) != list:\n- model_domain += [(\n- 'model_id', 'in', map(int, domain[2][1:-1].split(',')))]\n+ if (len(domain) > 2 and domain[0] == 'model_id'\n+ and isinstance(domain[2], basestring)):\n+ model_domain += [\n+ ('model_id', 'in', map(int, domain[2][1:-1].split(',')))\n+ ]\n else:\n model_domain.append(domain)\n return super(ir_model_fields, self).search(\n", "issue": "[Bug] mass_editing - Search in ir.model.fields not working\nI setup the mass_editing for res.partner then i go to:\nSettings -> Technical -> Database Structure -> Fields\nand go to the Filter with ('model_id','=',61) where 61 is the id of res.partner in res.model. By typ \"res.partner\" and select \"Partner\" from the suggestion.\n\nThen i get the following error:\nFile \"[..]/mass_editing/mass_editing.py\", line 34, in search\n model_domain += [('model_id', 'in', map(int, domain[2][1:-1].split(',')))]\nTypeError: 'int' object has no attribute '**getitem**'\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n##############################################################################\n#\n# This module uses OpenERP, Open Source Management Solution Framework.\n# Copyright (C):\n# 2012-Today Serpent Consulting Services (<http://www.serpentcs.com>)\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. 
If not, see <http://www.gnu.org/licenses/>\n#\n##############################################################################\n\nfrom openerp.osv import orm, fields, osv\nfrom openerp.tools.translate import _\n\n\nclass ir_model_fields(orm.Model):\n _inherit = 'ir.model.fields'\n\n def search(\n self, cr, uid, args, offset=0, limit=0, order=None, context=None,\n count=False):\n model_domain = []\n for domain in args:\n if domain[0] == 'model_id' and domain[2]\\\n and type(domain[2]) != list:\n model_domain += [(\n 'model_id', 'in', map(int, domain[2][1:-1].split(',')))]\n else:\n model_domain.append(domain)\n return super(ir_model_fields, self).search(\n cr, uid, model_domain, offset=offset, limit=limit, order=order,\n context=context, count=count)\n\nir_model_fields()\n\n\nclass mass_object(orm.Model):\n _name = \"mass.object\"\n\n _columns = {\n 'name': fields.char(\"Name\", size=64, required=True, select=1),\n 'model_id': fields.many2one(\n 'ir.model', 'Model', required=True, select=1),\n 'field_ids': fields.many2many(\n 'ir.model.fields', 'mass_field_rel', 'mass_id', 'field_id',\n 'Fields'),\n 'ref_ir_act_window': fields.many2one(\n 'ir.actions.act_window', 'Sidebar Action', readonly=True,\n help=\"Sidebar action to make this template available on records \\\n of the related document model\"),\n 'ref_ir_value': fields.many2one(\n 'ir.values', 'Sidebar Button', readonly=True,\n help=\"Sidebar button to open the sidebar action\"),\n 'model_ids': fields.many2many('ir.model', string='Model List')\n }\n\n _sql_constraints = [\n ('name_uniq', 'unique (name)', _('Name must be unique!')),\n ]\n\n def onchange_model(self, cr, uid, ids, model_id, context=None):\n if context is None:\n context = {}\n if not model_id:\n return {'value': {'model_ids': [(6, 0, [])]}}\n model_ids = [model_id]\n model_obj = self.pool.get('ir.model')\n active_model_obj = self.pool.get(model_obj.browse(\n cr, uid, model_id).model)\n if active_model_obj._inherits:\n for key, val in active_model_obj._inherits.items():\n found_model_ids = model_obj.search(\n cr, uid, [('model', '=', key)], context=context)\n model_ids += found_model_ids\n return {'value': {'model_ids': [(6, 0, model_ids)]}}\n\n def create_action(self, cr, uid, ids, context=None):\n vals = {}\n action_obj = self.pool.get('ir.actions.act_window')\n ir_values_obj = self.pool.get('ir.values')\n for data in self.browse(cr, uid, ids, context=context):\n src_obj = data.model_id.model\n button_name = _('Mass Editing (%s)') % data.name\n vals['ref_ir_act_window'] = action_obj.create(cr, uid, {\n 'name': button_name,\n 'type': 'ir.actions.act_window',\n 'res_model': 'mass.editing.wizard',\n 'src_model': src_obj,\n 'view_type': 'form',\n 'context': \"{'mass_editing_object' : %d}\" % (data.id),\n 'view_mode': 'form,tree',\n 'target': 'new',\n 'auto_refresh': 1,\n }, context)\n vals['ref_ir_value'] = ir_values_obj.create(cr, uid, {\n 'name': button_name,\n 'model': src_obj,\n 'key2': 'client_action_multi',\n 'value': (\n \"ir.actions.act_window,\"\n + str(vals['ref_ir_act_window'])),\n 'object': True,\n }, context)\n self.write(cr, uid, ids, {\n 'ref_ir_act_window': vals.get('ref_ir_act_window', False),\n 'ref_ir_value': vals.get('ref_ir_value', False),\n }, context)\n return True\n\n def unlink_action(self, cr, uid, ids, context=None):\n for template in self.browse(cr, uid, ids, context=context):\n try:\n if template.ref_ir_act_window:\n self.pool.get('ir.actions.act_window').unlink(\n cr, uid, template.ref_ir_act_window.id, context)\n if template.ref_ir_value:\n 
ir_values_obj = self.pool.get('ir.values')\n ir_values_obj.unlink(\n cr, uid, template.ref_ir_value.id, context)\n except:\n raise osv.except_osv(\n _(\"Warning\"),\n _(\"Deletion of the action record failed.\"))\n return True\n\n def unlink(self, cr, uid, ids, context=None):\n self.unlink_action(cr, uid, ids, context)\n return super(mass_object, self).unlink(cr, uid, ids, context)\n\n def copy(self, cr, uid, record_id, default=None, context=None):\n if default is None:\n default = {}\n default.update({'name': '', 'field_ids': []})\n return super(mass_object, self).copy(\n cr, uid, record_id, default, context)\n\nmass_object()\n# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:\n", "path": "mass_editing/mass_editing.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n##############################################################################\n#\n# This module uses OpenERP, Open Source Management Solution Framework.\n# Copyright (C):\n# 2012-Today Serpent Consulting Services (<http://www.serpentcs.com>)\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>\n#\n##############################################################################\n\nfrom openerp.osv import orm, fields, osv\nfrom openerp.tools.translate import _\n\n\nclass ir_model_fields(orm.Model):\n _inherit = 'ir.model.fields'\n\n def search(\n self, cr, uid, args, offset=0, limit=0, order=None, context=None,\n count=False):\n model_domain = []\n for domain in args:\n if (len(domain) > 2 and domain[0] == 'model_id'\n and isinstance(domain[2], basestring)):\n model_domain += [\n ('model_id', 'in', map(int, domain[2][1:-1].split(',')))\n ]\n else:\n model_domain.append(domain)\n return super(ir_model_fields, self).search(\n cr, uid, model_domain, offset=offset, limit=limit, order=order,\n context=context, count=count)\n\nir_model_fields()\n\n\nclass mass_object(orm.Model):\n _name = \"mass.object\"\n\n _columns = {\n 'name': fields.char(\"Name\", size=64, required=True, select=1),\n 'model_id': fields.many2one(\n 'ir.model', 'Model', required=True, select=1),\n 'field_ids': fields.many2many(\n 'ir.model.fields', 'mass_field_rel', 'mass_id', 'field_id',\n 'Fields'),\n 'ref_ir_act_window': fields.many2one(\n 'ir.actions.act_window', 'Sidebar Action', readonly=True,\n help=\"Sidebar action to make this template available on records \\\n of the related document model\"),\n 'ref_ir_value': fields.many2one(\n 'ir.values', 'Sidebar Button', readonly=True,\n help=\"Sidebar button to open the sidebar action\"),\n 'model_ids': fields.many2many('ir.model', string='Model List')\n }\n\n _sql_constraints = [\n ('name_uniq', 'unique (name)', _('Name must be unique!')),\n ]\n\n def onchange_model(self, cr, uid, ids, model_id, context=None):\n if context is None:\n context = {}\n if not model_id:\n return {'value': {'model_ids': [(6, 0, [])]}}\n model_ids = [model_id]\n model_obj = self.pool.get('ir.model')\n active_model_obj = 
self.pool.get(model_obj.browse(\n cr, uid, model_id).model)\n if active_model_obj._inherits:\n for key, val in active_model_obj._inherits.items():\n found_model_ids = model_obj.search(\n cr, uid, [('model', '=', key)], context=context)\n model_ids += found_model_ids\n return {'value': {'model_ids': [(6, 0, model_ids)]}}\n\n def create_action(self, cr, uid, ids, context=None):\n vals = {}\n action_obj = self.pool.get('ir.actions.act_window')\n ir_values_obj = self.pool.get('ir.values')\n for data in self.browse(cr, uid, ids, context=context):\n src_obj = data.model_id.model\n button_name = _('Mass Editing (%s)') % data.name\n vals['ref_ir_act_window'] = action_obj.create(cr, uid, {\n 'name': button_name,\n 'type': 'ir.actions.act_window',\n 'res_model': 'mass.editing.wizard',\n 'src_model': src_obj,\n 'view_type': 'form',\n 'context': \"{'mass_editing_object' : %d}\" % (data.id),\n 'view_mode': 'form,tree',\n 'target': 'new',\n 'auto_refresh': 1,\n }, context)\n vals['ref_ir_value'] = ir_values_obj.create(cr, uid, {\n 'name': button_name,\n 'model': src_obj,\n 'key2': 'client_action_multi',\n 'value': (\n \"ir.actions.act_window,\"\n + str(vals['ref_ir_act_window'])),\n 'object': True,\n }, context)\n self.write(cr, uid, ids, {\n 'ref_ir_act_window': vals.get('ref_ir_act_window', False),\n 'ref_ir_value': vals.get('ref_ir_value', False),\n }, context)\n return True\n\n def unlink_action(self, cr, uid, ids, context=None):\n for template in self.browse(cr, uid, ids, context=context):\n try:\n if template.ref_ir_act_window:\n self.pool.get('ir.actions.act_window').unlink(\n cr, uid, template.ref_ir_act_window.id, context)\n if template.ref_ir_value:\n ir_values_obj = self.pool.get('ir.values')\n ir_values_obj.unlink(\n cr, uid, template.ref_ir_value.id, context)\n except:\n raise osv.except_osv(\n _(\"Warning\"),\n _(\"Deletion of the action record failed.\"))\n return True\n\n def unlink(self, cr, uid, ids, context=None):\n self.unlink_action(cr, uid, ids, context)\n return super(mass_object, self).unlink(cr, uid, ids, context)\n\n def copy(self, cr, uid, record_id, default=None, context=None):\n if default is None:\n default = {}\n default.update({'name': '', 'field_ids': []})\n return super(mass_object, self).copy(\n cr, uid, record_id, default, context)\n\nmass_object()\n# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:\n", "path": "mass_editing/mass_editing.py"}]} | 2,147 | 220 |
gh_patches_debug_12926 | rasdani/github-patches | git_diff | bokeh__bokeh-6804 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Automatic configuration of Slider.format
Integer sliders should use integer formatting.
--- END ISSUE ---
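For context, `Slider.format` is forwarded to the browser-side slider widget, where (in this Bokeh version) it appears to be interpreted as a numbro/numeral-style format string; the default `"0,0.00"` therefore always renders two decimal places, even when `start`, `end`, and `step` are all integers. The sketch below is illustrative only — the slider values and the explicit format override are assumptions, not part of the issue report:

```python
from bokeh.models.widgets import Slider

# An integer-valued slider: start/end/step/value are whole numbers, yet the
# default format "0,0.00" displays the value 3 as "3.00".
slider = Slider(start=0, end=10, step=1, value=3, title="Count")

# Manual workaround until the default is smarter: only show decimals when
# they are actually present (or use plain "0" for strictly integer sliders).
slider.format = "0[.]00"
```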
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bokeh/models/widgets/sliders.py`
Content:
```
1 """ Various kinds of slider widgets.
2
3 """
4 from __future__ import absolute_import
5
6 from ...core.has_props import abstract
7 from ...core.properties import Bool, Int, Float, String, Date, Enum, Tuple, Instance, Color, Override
8 from ...core.enums import SliderCallbackPolicy
9 from ..callbacks import Callback
10 from .widget import Widget
11
12 @abstract
13 class AbstractSlider(Widget):
14 """ """
15
16 title = String(default="", help="""
17 Slider's label.
18 """)
19
20 show_value = Bool(default=True, help="""
21 Whether or not show slider's value.
22 """)
23
24 format = String(help="""
25 """)
26
27 orientation = Enum("horizontal", "vertical", help="""
28 Orient the slider either horizontally (default) or vertically.
29 """)
30
31 direction = Enum("ltr", "rtl", help="""
32 """)
33
34 tooltips = Bool(default=True, help="""
35 """)
36
37 callback = Instance(Callback, help="""
38 A callback to run in the browser whenever the current Slider value changes.
39 """)
40
41 callback_throttle = Float(default=200, help="""
42 Number of millseconds to pause between callback calls as the slider is moved.
43 """)
44
45 callback_policy = Enum(SliderCallbackPolicy, default="throttle", help="""
46 When the callback is initiated. This parameter can take on only one of three options:
47
48 * "continuous": the callback will be executed immediately for each movement of the slider
49 * "throttle": the callback will be executed at most every ``callback_throttle`` milliseconds.
50 * "mouseup": the callback will be executed only once when the slider is released.
51
52 The "mouseup" policy is intended for scenarios in which the callback is expensive in time.
53 """)
54
55 bar_color = Color(default="#3fb8af", help="""
56 """)
57
58 class Slider(AbstractSlider):
59 """ Slider-based number selection widget. """
60
61 start = Float(help="""
62 The minimum allowable value.
63 """)
64
65 end = Float(help="""
66 The maximum allowable value.
67 """)
68
69 value = Float(help="""
70 Initial or selected value.
71 """)
72
73 step = Float(default=1, help="""
74 The step between consecutive values.
75 """)
76
77 format = Override(default="0,0.00")
78
79 class RangeSlider(AbstractSlider):
80 """ Range-slider based number range selection widget. """
81
82 value = Tuple(Float, Float, help="""
83 Initial or selected range.
84 """)
85
86 start = Float(help="""
87 The minimum allowable value.
88 """)
89
90 end = Float(help="""
91 The maximum allowable value.
92 """)
93
94 step = Float(default=1, help="""
95 The step between consecutive values.
96 """)
97
98 format = Override(default="0,0.00")
99
100 class DateSlider(AbstractSlider):
101 """ Slider-based date selection widget. """
102
103 value = Date(help="""
104 Initial or selected value.
105 """)
106
107 start = Date(help="""
108 The minimum allowable value.
109 """)
110
111 end = Date(help="""
112 The maximum allowable value.
113 """)
114
115 step = Int(default=1, help="""
116 The step between consecutive values.
117 """)
118
119 format = Override(default="%d %b %G")
120
121 class DateRangeSlider(AbstractSlider):
122 """ Slider-based date range selection widget. """
123
124 value = Tuple(Date, Date, help="""
125 Initial or selected range.
126 """)
127
128 start = Date(help="""
129 The minimum allowable value.
130 """)
131
132 end = Date(help="""
133 The maximum allowable value.
134 """)
135
136 step = Int(default=1, help="""
137 The step between consecutive values.
138 """)
139
140 format = Override(default="%d %b %G")
141
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bokeh/models/widgets/sliders.py b/bokeh/models/widgets/sliders.py
--- a/bokeh/models/widgets/sliders.py
+++ b/bokeh/models/widgets/sliders.py
@@ -74,7 +74,7 @@
The step between consecutive values.
""")
- format = Override(default="0,0.00")
+ format = Override(default="0[.]00")
class RangeSlider(AbstractSlider):
""" Range-slider based number range selection widget. """
@@ -95,7 +95,7 @@
The step between consecutive values.
""")
- format = Override(default="0,0.00")
+ format = Override(default="0[.]00")
class DateSlider(AbstractSlider):
""" Slider-based date selection widget. """
| {"golden_diff": "diff --git a/bokeh/models/widgets/sliders.py b/bokeh/models/widgets/sliders.py\n--- a/bokeh/models/widgets/sliders.py\n+++ b/bokeh/models/widgets/sliders.py\n@@ -74,7 +74,7 @@\n The step between consecutive values.\n \"\"\")\n \n- format = Override(default=\"0,0.00\")\n+ format = Override(default=\"0[.]00\")\n \n class RangeSlider(AbstractSlider):\n \"\"\" Range-slider based number range selection widget. \"\"\"\n@@ -95,7 +95,7 @@\n The step between consecutive values.\n \"\"\")\n \n- format = Override(default=\"0,0.00\")\n+ format = Override(default=\"0[.]00\")\n \n class DateSlider(AbstractSlider):\n \"\"\" Slider-based date selection widget. \"\"\"\n", "issue": "Automatic configuration of Slider.format\nInteger sliders should use integer formatting.\r\n\n", "before_files": [{"content": "\"\"\" Various kinds of slider widgets.\n\n\"\"\"\nfrom __future__ import absolute_import\n\nfrom ...core.has_props import abstract\nfrom ...core.properties import Bool, Int, Float, String, Date, Enum, Tuple, Instance, Color, Override\nfrom ...core.enums import SliderCallbackPolicy\nfrom ..callbacks import Callback\nfrom .widget import Widget\n\n@abstract\nclass AbstractSlider(Widget):\n \"\"\" \"\"\"\n\n title = String(default=\"\", help=\"\"\"\n Slider's label.\n \"\"\")\n\n show_value = Bool(default=True, help=\"\"\"\n Whether or not show slider's value.\n \"\"\")\n\n format = String(help=\"\"\"\n \"\"\")\n\n orientation = Enum(\"horizontal\", \"vertical\", help=\"\"\"\n Orient the slider either horizontally (default) or vertically.\n \"\"\")\n\n direction = Enum(\"ltr\", \"rtl\", help=\"\"\"\n \"\"\")\n\n tooltips = Bool(default=True, help=\"\"\"\n \"\"\")\n\n callback = Instance(Callback, help=\"\"\"\n A callback to run in the browser whenever the current Slider value changes.\n \"\"\")\n\n callback_throttle = Float(default=200, help=\"\"\"\n Number of millseconds to pause between callback calls as the slider is moved.\n \"\"\")\n\n callback_policy = Enum(SliderCallbackPolicy, default=\"throttle\", help=\"\"\"\n When the callback is initiated. This parameter can take on only one of three options:\n\n * \"continuous\": the callback will be executed immediately for each movement of the slider\n * \"throttle\": the callback will be executed at most every ``callback_throttle`` milliseconds.\n * \"mouseup\": the callback will be executed only once when the slider is released.\n\n The \"mouseup\" policy is intended for scenarios in which the callback is expensive in time.\n \"\"\")\n\n bar_color = Color(default=\"#3fb8af\", help=\"\"\"\n \"\"\")\n\nclass Slider(AbstractSlider):\n \"\"\" Slider-based number selection widget. \"\"\"\n\n start = Float(help=\"\"\"\n The minimum allowable value.\n \"\"\")\n\n end = Float(help=\"\"\"\n The maximum allowable value.\n \"\"\")\n\n value = Float(help=\"\"\"\n Initial or selected value.\n \"\"\")\n\n step = Float(default=1, help=\"\"\"\n The step between consecutive values.\n \"\"\")\n\n format = Override(default=\"0,0.00\")\n\nclass RangeSlider(AbstractSlider):\n \"\"\" Range-slider based number range selection widget. 
\"\"\"\n\n value = Tuple(Float, Float, help=\"\"\"\n Initial or selected range.\n \"\"\")\n\n start = Float(help=\"\"\"\n The minimum allowable value.\n \"\"\")\n\n end = Float(help=\"\"\"\n The maximum allowable value.\n \"\"\")\n\n step = Float(default=1, help=\"\"\"\n The step between consecutive values.\n \"\"\")\n\n format = Override(default=\"0,0.00\")\n\nclass DateSlider(AbstractSlider):\n \"\"\" Slider-based date selection widget. \"\"\"\n\n value = Date(help=\"\"\"\n Initial or selected value.\n \"\"\")\n\n start = Date(help=\"\"\"\n The minimum allowable value.\n \"\"\")\n\n end = Date(help=\"\"\"\n The maximum allowable value.\n \"\"\")\n\n step = Int(default=1, help=\"\"\"\n The step between consecutive values.\n \"\"\")\n\n format = Override(default=\"%d %b %G\")\n\nclass DateRangeSlider(AbstractSlider):\n \"\"\" Slider-based date range selection widget. \"\"\"\n\n value = Tuple(Date, Date, help=\"\"\"\n Initial or selected range.\n \"\"\")\n\n start = Date(help=\"\"\"\n The minimum allowable value.\n \"\"\")\n\n end = Date(help=\"\"\"\n The maximum allowable value.\n \"\"\")\n\n step = Int(default=1, help=\"\"\"\n The step between consecutive values.\n \"\"\")\n\n format = Override(default=\"%d %b %G\")\n", "path": "bokeh/models/widgets/sliders.py"}], "after_files": [{"content": "\"\"\" Various kinds of slider widgets.\n\n\"\"\"\nfrom __future__ import absolute_import\n\nfrom ...core.has_props import abstract\nfrom ...core.properties import Bool, Int, Float, String, Date, Enum, Tuple, Instance, Color, Override\nfrom ...core.enums import SliderCallbackPolicy\nfrom ..callbacks import Callback\nfrom .widget import Widget\n\n@abstract\nclass AbstractSlider(Widget):\n \"\"\" \"\"\"\n\n title = String(default=\"\", help=\"\"\"\n Slider's label.\n \"\"\")\n\n show_value = Bool(default=True, help=\"\"\"\n Whether or not show slider's value.\n \"\"\")\n\n format = String(help=\"\"\"\n \"\"\")\n\n orientation = Enum(\"horizontal\", \"vertical\", help=\"\"\"\n Orient the slider either horizontally (default) or vertically.\n \"\"\")\n\n direction = Enum(\"ltr\", \"rtl\", help=\"\"\"\n \"\"\")\n\n tooltips = Bool(default=True, help=\"\"\"\n \"\"\")\n\n callback = Instance(Callback, help=\"\"\"\n A callback to run in the browser whenever the current Slider value changes.\n \"\"\")\n\n callback_throttle = Float(default=200, help=\"\"\"\n Number of millseconds to pause between callback calls as the slider is moved.\n \"\"\")\n\n callback_policy = Enum(SliderCallbackPolicy, default=\"throttle\", help=\"\"\"\n When the callback is initiated. This parameter can take on only one of three options:\n\n * \"continuous\": the callback will be executed immediately for each movement of the slider\n * \"throttle\": the callback will be executed at most every ``callback_throttle`` milliseconds.\n * \"mouseup\": the callback will be executed only once when the slider is released.\n\n The \"mouseup\" policy is intended for scenarios in which the callback is expensive in time.\n \"\"\")\n\n bar_color = Color(default=\"#3fb8af\", help=\"\"\"\n \"\"\")\n\nclass Slider(AbstractSlider):\n \"\"\" Slider-based number selection widget. 
\"\"\"\n\n start = Float(help=\"\"\"\n The minimum allowable value.\n \"\"\")\n\n end = Float(help=\"\"\"\n The maximum allowable value.\n \"\"\")\n\n value = Float(help=\"\"\"\n Initial or selected value.\n \"\"\")\n\n step = Float(default=1, help=\"\"\"\n The step between consecutive values.\n \"\"\")\n\n format = Override(default=\"0[.]00\")\n\nclass RangeSlider(AbstractSlider):\n \"\"\" Range-slider based number range selection widget. \"\"\"\n\n value = Tuple(Float, Float, help=\"\"\"\n Initial or selected range.\n \"\"\")\n\n start = Float(help=\"\"\"\n The minimum allowable value.\n \"\"\")\n\n end = Float(help=\"\"\"\n The maximum allowable value.\n \"\"\")\n\n step = Float(default=1, help=\"\"\"\n The step between consecutive values.\n \"\"\")\n\n format = Override(default=\"0[.]00\")\n\nclass DateSlider(AbstractSlider):\n \"\"\" Slider-based date selection widget. \"\"\"\n\n value = Date(help=\"\"\"\n Initial or selected value.\n \"\"\")\n\n start = Date(help=\"\"\"\n The minimum allowable value.\n \"\"\")\n\n end = Date(help=\"\"\"\n The maximum allowable value.\n \"\"\")\n\n step = Int(default=1, help=\"\"\"\n The step between consecutive values.\n \"\"\")\n\n format = Override(default=\"%d %b %G\")\n\nclass DateRangeSlider(AbstractSlider):\n \"\"\" Slider-based date range selection widget. \"\"\"\n\n value = Tuple(Date, Date, help=\"\"\"\n Initial or selected range.\n \"\"\")\n\n start = Date(help=\"\"\"\n The minimum allowable value.\n \"\"\")\n\n end = Date(help=\"\"\"\n The maximum allowable value.\n \"\"\")\n\n step = Int(default=1, help=\"\"\"\n The step between consecutive values.\n \"\"\")\n\n format = Override(default=\"%d %b %G\")\n", "path": "bokeh/models/widgets/sliders.py"}]} | 1,411 | 178 |
gh_patches_debug_11006 | rasdani/github-patches | git_diff | urllib3__urllib3-818 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
UnicodeDecodeError in format_header_params
This issue was discussed here: https://github.com/kennethreitz/requests/issues/2639, and the consensus seemed to be that it should be fixed in urllib3.
--- END ISSUE ---
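The failure mode is easiest to see on Python 2, where calling `.encode('ascii')` on a byte string first *decodes* it with the ASCII codec. A minimal reproduction sketch, assuming the caller passes an already UTF-8-encoded byte string as the value (the filename below is hypothetical, not taken from the report):

```python
# -*- coding: utf-8 -*-
# Python 2 only: the value is a UTF-8 *byte* string ("résumé.txt"), not unicode.
from urllib3.fields import format_header_param

filename = 'r\xc3\xa9sum\xc3\xa9.txt'

# Inside format_header_param, result.encode('ascii') implicitly decodes the
# byte string with the ASCII codec, so the non-ASCII bytes raise
# UnicodeDecodeError -- which the existing `except UnicodeEncodeError`
# clause does not catch.
format_header_param('filename', filename)
```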
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `urllib3/fields.py`
Content:
```
1 from __future__ import absolute_import
2 import email.utils
3 import mimetypes
4
5 from .packages import six
6
7
8 def guess_content_type(filename, default='application/octet-stream'):
9 """
10 Guess the "Content-Type" of a file.
11
12 :param filename:
13 The filename to guess the "Content-Type" of using :mod:`mimetypes`.
14 :param default:
15 If no "Content-Type" can be guessed, default to `default`.
16 """
17 if filename:
18 return mimetypes.guess_type(filename)[0] or default
19 return default
20
21
22 def format_header_param(name, value):
23 """
24 Helper function to format and quote a single header parameter.
25
26 Particularly useful for header parameters which might contain
27 non-ASCII values, like file names. This follows RFC 2231, as
28 suggested by RFC 2388 Section 4.4.
29
30 :param name:
31 The name of the parameter, a string expected to be ASCII only.
32 :param value:
33 The value of the parameter, provided as a unicode string.
34 """
35 if not any(ch in value for ch in '"\\\r\n'):
36 result = '%s="%s"' % (name, value)
37 try:
38 result.encode('ascii')
39 except UnicodeEncodeError:
40 pass
41 else:
42 return result
43 if not six.PY3: # Python 2:
44 value = value.encode('utf-8')
45 value = email.utils.encode_rfc2231(value, 'utf-8')
46 value = '%s*=%s' % (name, value)
47 return value
48
49
50 class RequestField(object):
51 """
52 A data container for request body parameters.
53
54 :param name:
55 The name of this request field.
56 :param data:
57 The data/value body.
58 :param filename:
59 An optional filename of the request field.
60 :param headers:
61 An optional dict-like object of headers to initially use for the field.
62 """
63 def __init__(self, name, data, filename=None, headers=None):
64 self._name = name
65 self._filename = filename
66 self.data = data
67 self.headers = {}
68 if headers:
69 self.headers = dict(headers)
70
71 @classmethod
72 def from_tuples(cls, fieldname, value):
73 """
74 A :class:`~urllib3.fields.RequestField` factory from old-style tuple parameters.
75
76 Supports constructing :class:`~urllib3.fields.RequestField` from
77 parameter of key/value strings AND key/filetuple. A filetuple is a
78 (filename, data, MIME type) tuple where the MIME type is optional.
79 For example::
80
81 'foo': 'bar',
82 'fakefile': ('foofile.txt', 'contents of foofile'),
83 'realfile': ('barfile.txt', open('realfile').read()),
84 'typedfile': ('bazfile.bin', open('bazfile').read(), 'image/jpeg'),
85 'nonamefile': 'contents of nonamefile field',
86
87 Field names and filenames must be unicode.
88 """
89 if isinstance(value, tuple):
90 if len(value) == 3:
91 filename, data, content_type = value
92 else:
93 filename, data = value
94 content_type = guess_content_type(filename)
95 else:
96 filename = None
97 content_type = None
98 data = value
99
100 request_param = cls(fieldname, data, filename=filename)
101 request_param.make_multipart(content_type=content_type)
102
103 return request_param
104
105 def _render_part(self, name, value):
106 """
107 Overridable helper function to format a single header parameter.
108
109 :param name:
110 The name of the parameter, a string expected to be ASCII only.
111 :param value:
112 The value of the parameter, provided as a unicode string.
113 """
114 return format_header_param(name, value)
115
116 def _render_parts(self, header_parts):
117 """
118 Helper function to format and quote a single header.
119
120 Useful for single headers that are composed of multiple items. E.g.,
121 'Content-Disposition' fields.
122
123 :param header_parts:
124 A sequence of (k, v) typles or a :class:`dict` of (k, v) to format
125 as `k1="v1"; k2="v2"; ...`.
126 """
127 parts = []
128 iterable = header_parts
129 if isinstance(header_parts, dict):
130 iterable = header_parts.items()
131
132 for name, value in iterable:
133 if value:
134 parts.append(self._render_part(name, value))
135
136 return '; '.join(parts)
137
138 def render_headers(self):
139 """
140 Renders the headers for this request field.
141 """
142 lines = []
143
144 sort_keys = ['Content-Disposition', 'Content-Type', 'Content-Location']
145 for sort_key in sort_keys:
146 if self.headers.get(sort_key, False):
147 lines.append('%s: %s' % (sort_key, self.headers[sort_key]))
148
149 for header_name, header_value in self.headers.items():
150 if header_name not in sort_keys:
151 if header_value:
152 lines.append('%s: %s' % (header_name, header_value))
153
154 lines.append('\r\n')
155 return '\r\n'.join(lines)
156
157 def make_multipart(self, content_disposition=None, content_type=None,
158 content_location=None):
159 """
160 Makes this request field into a multipart request field.
161
162 This method overrides "Content-Disposition", "Content-Type" and
163 "Content-Location" headers to the request parameter.
164
165 :param content_type:
166 The 'Content-Type' of the request body.
167 :param content_location:
168 The 'Content-Location' of the request body.
169
170 """
171 self.headers['Content-Disposition'] = content_disposition or 'form-data'
172 self.headers['Content-Disposition'] += '; '.join([
173 '', self._render_parts(
174 (('name', self._name), ('filename', self._filename))
175 )
176 ])
177 self.headers['Content-Type'] = content_type
178 self.headers['Content-Location'] = content_location
179
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/urllib3/fields.py b/urllib3/fields.py
--- a/urllib3/fields.py
+++ b/urllib3/fields.py
@@ -36,11 +36,11 @@
result = '%s="%s"' % (name, value)
try:
result.encode('ascii')
- except UnicodeEncodeError:
+ except (UnicodeEncodeError, UnicodeDecodeError):
pass
else:
return result
- if not six.PY3: # Python 2:
+ if not six.PY3 and isinstance(value, six.text_type): # Python 2:
value = value.encode('utf-8')
value = email.utils.encode_rfc2231(value, 'utf-8')
value = '%s*=%s' % (name, value)
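With that change, the stray `UnicodeDecodeError` from `result.encode('ascii')` is swallowed like the encode error, and a Python 2 byte string is no longer re-encoded; it falls straight through to RFC 2231 encoding. An illustrative check of the expected output, reusing the hypothetical filename from the sketch above:

```python
import email.utils

# encode_rfc2231 percent-encodes the UTF-8 bytes, so the rendered header
# parameter becomes: filename*=utf-8''r%C3%A9sum%C3%A9.txt
print(email.utils.encode_rfc2231('r\xc3\xa9sum\xc3\xa9.txt', 'utf-8'))
```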
| {"golden_diff": "diff --git a/urllib3/fields.py b/urllib3/fields.py\n--- a/urllib3/fields.py\n+++ b/urllib3/fields.py\n@@ -36,11 +36,11 @@\n result = '%s=\"%s\"' % (name, value)\n try:\n result.encode('ascii')\n- except UnicodeEncodeError:\n+ except (UnicodeEncodeError, UnicodeDecodeError):\n pass\n else:\n return result\n- if not six.PY3: # Python 2:\n+ if not six.PY3 and isinstance(value, six.text_type): # Python 2:\n value = value.encode('utf-8')\n value = email.utils.encode_rfc2231(value, 'utf-8')\n value = '%s*=%s' % (name, value)\n", "issue": "UnicodeDecodeError in format_header_params\nThis issue was discussed here: https://github.com/kennethreitz/requests/issues/2639 and it seemed like the consensus was that this should be fixed in urllib3. \n\n", "before_files": [{"content": "from __future__ import absolute_import\nimport email.utils\nimport mimetypes\n\nfrom .packages import six\n\n\ndef guess_content_type(filename, default='application/octet-stream'):\n \"\"\"\n Guess the \"Content-Type\" of a file.\n\n :param filename:\n The filename to guess the \"Content-Type\" of using :mod:`mimetypes`.\n :param default:\n If no \"Content-Type\" can be guessed, default to `default`.\n \"\"\"\n if filename:\n return mimetypes.guess_type(filename)[0] or default\n return default\n\n\ndef format_header_param(name, value):\n \"\"\"\n Helper function to format and quote a single header parameter.\n\n Particularly useful for header parameters which might contain\n non-ASCII values, like file names. This follows RFC 2231, as\n suggested by RFC 2388 Section 4.4.\n\n :param name:\n The name of the parameter, a string expected to be ASCII only.\n :param value:\n The value of the parameter, provided as a unicode string.\n \"\"\"\n if not any(ch in value for ch in '\"\\\\\\r\\n'):\n result = '%s=\"%s\"' % (name, value)\n try:\n result.encode('ascii')\n except UnicodeEncodeError:\n pass\n else:\n return result\n if not six.PY3: # Python 2:\n value = value.encode('utf-8')\n value = email.utils.encode_rfc2231(value, 'utf-8')\n value = '%s*=%s' % (name, value)\n return value\n\n\nclass RequestField(object):\n \"\"\"\n A data container for request body parameters.\n\n :param name:\n The name of this request field.\n :param data:\n The data/value body.\n :param filename:\n An optional filename of the request field.\n :param headers:\n An optional dict-like object of headers to initially use for the field.\n \"\"\"\n def __init__(self, name, data, filename=None, headers=None):\n self._name = name\n self._filename = filename\n self.data = data\n self.headers = {}\n if headers:\n self.headers = dict(headers)\n\n @classmethod\n def from_tuples(cls, fieldname, value):\n \"\"\"\n A :class:`~urllib3.fields.RequestField` factory from old-style tuple parameters.\n\n Supports constructing :class:`~urllib3.fields.RequestField` from\n parameter of key/value strings AND key/filetuple. 
A filetuple is a\n (filename, data, MIME type) tuple where the MIME type is optional.\n For example::\n\n 'foo': 'bar',\n 'fakefile': ('foofile.txt', 'contents of foofile'),\n 'realfile': ('barfile.txt', open('realfile').read()),\n 'typedfile': ('bazfile.bin', open('bazfile').read(), 'image/jpeg'),\n 'nonamefile': 'contents of nonamefile field',\n\n Field names and filenames must be unicode.\n \"\"\"\n if isinstance(value, tuple):\n if len(value) == 3:\n filename, data, content_type = value\n else:\n filename, data = value\n content_type = guess_content_type(filename)\n else:\n filename = None\n content_type = None\n data = value\n\n request_param = cls(fieldname, data, filename=filename)\n request_param.make_multipart(content_type=content_type)\n\n return request_param\n\n def _render_part(self, name, value):\n \"\"\"\n Overridable helper function to format a single header parameter.\n\n :param name:\n The name of the parameter, a string expected to be ASCII only.\n :param value:\n The value of the parameter, provided as a unicode string.\n \"\"\"\n return format_header_param(name, value)\n\n def _render_parts(self, header_parts):\n \"\"\"\n Helper function to format and quote a single header.\n\n Useful for single headers that are composed of multiple items. E.g.,\n 'Content-Disposition' fields.\n\n :param header_parts:\n A sequence of (k, v) typles or a :class:`dict` of (k, v) to format\n as `k1=\"v1\"; k2=\"v2\"; ...`.\n \"\"\"\n parts = []\n iterable = header_parts\n if isinstance(header_parts, dict):\n iterable = header_parts.items()\n\n for name, value in iterable:\n if value:\n parts.append(self._render_part(name, value))\n\n return '; '.join(parts)\n\n def render_headers(self):\n \"\"\"\n Renders the headers for this request field.\n \"\"\"\n lines = []\n\n sort_keys = ['Content-Disposition', 'Content-Type', 'Content-Location']\n for sort_key in sort_keys:\n if self.headers.get(sort_key, False):\n lines.append('%s: %s' % (sort_key, self.headers[sort_key]))\n\n for header_name, header_value in self.headers.items():\n if header_name not in sort_keys:\n if header_value:\n lines.append('%s: %s' % (header_name, header_value))\n\n lines.append('\\r\\n')\n return '\\r\\n'.join(lines)\n\n def make_multipart(self, content_disposition=None, content_type=None,\n content_location=None):\n \"\"\"\n Makes this request field into a multipart request field.\n\n This method overrides \"Content-Disposition\", \"Content-Type\" and\n \"Content-Location\" headers to the request parameter.\n\n :param content_type:\n The 'Content-Type' of the request body.\n :param content_location:\n The 'Content-Location' of the request body.\n\n \"\"\"\n self.headers['Content-Disposition'] = content_disposition or 'form-data'\n self.headers['Content-Disposition'] += '; '.join([\n '', self._render_parts(\n (('name', self._name), ('filename', self._filename))\n )\n ])\n self.headers['Content-Type'] = content_type\n self.headers['Content-Location'] = content_location\n", "path": "urllib3/fields.py"}], "after_files": [{"content": "from __future__ import absolute_import\nimport email.utils\nimport mimetypes\n\nfrom .packages import six\n\n\ndef guess_content_type(filename, default='application/octet-stream'):\n \"\"\"\n Guess the \"Content-Type\" of a file.\n\n :param filename:\n The filename to guess the \"Content-Type\" of using :mod:`mimetypes`.\n :param default:\n If no \"Content-Type\" can be guessed, default to `default`.\n \"\"\"\n if filename:\n return mimetypes.guess_type(filename)[0] or default\n return 
default\n\n\ndef format_header_param(name, value):\n \"\"\"\n Helper function to format and quote a single header parameter.\n\n Particularly useful for header parameters which might contain\n non-ASCII values, like file names. This follows RFC 2231, as\n suggested by RFC 2388 Section 4.4.\n\n :param name:\n The name of the parameter, a string expected to be ASCII only.\n :param value:\n The value of the parameter, provided as a unicode string.\n \"\"\"\n if not any(ch in value for ch in '\"\\\\\\r\\n'):\n result = '%s=\"%s\"' % (name, value)\n try:\n result.encode('ascii')\n except (UnicodeEncodeError, UnicodeDecodeError):\n pass\n else:\n return result\n if not six.PY3 and isinstance(value, six.text_type): # Python 2:\n value = value.encode('utf-8')\n value = email.utils.encode_rfc2231(value, 'utf-8')\n value = '%s*=%s' % (name, value)\n return value\n\n\nclass RequestField(object):\n \"\"\"\n A data container for request body parameters.\n\n :param name:\n The name of this request field.\n :param data:\n The data/value body.\n :param filename:\n An optional filename of the request field.\n :param headers:\n An optional dict-like object of headers to initially use for the field.\n \"\"\"\n def __init__(self, name, data, filename=None, headers=None):\n self._name = name\n self._filename = filename\n self.data = data\n self.headers = {}\n if headers:\n self.headers = dict(headers)\n\n @classmethod\n def from_tuples(cls, fieldname, value):\n \"\"\"\n A :class:`~urllib3.fields.RequestField` factory from old-style tuple parameters.\n\n Supports constructing :class:`~urllib3.fields.RequestField` from\n parameter of key/value strings AND key/filetuple. A filetuple is a\n (filename, data, MIME type) tuple where the MIME type is optional.\n For example::\n\n 'foo': 'bar',\n 'fakefile': ('foofile.txt', 'contents of foofile'),\n 'realfile': ('barfile.txt', open('realfile').read()),\n 'typedfile': ('bazfile.bin', open('bazfile').read(), 'image/jpeg'),\n 'nonamefile': 'contents of nonamefile field',\n\n Field names and filenames must be unicode.\n \"\"\"\n if isinstance(value, tuple):\n if len(value) == 3:\n filename, data, content_type = value\n else:\n filename, data = value\n content_type = guess_content_type(filename)\n else:\n filename = None\n content_type = None\n data = value\n\n request_param = cls(fieldname, data, filename=filename)\n request_param.make_multipart(content_type=content_type)\n\n return request_param\n\n def _render_part(self, name, value):\n \"\"\"\n Overridable helper function to format a single header parameter.\n\n :param name:\n The name of the parameter, a string expected to be ASCII only.\n :param value:\n The value of the parameter, provided as a unicode string.\n \"\"\"\n return format_header_param(name, value)\n\n def _render_parts(self, header_parts):\n \"\"\"\n Helper function to format and quote a single header.\n\n Useful for single headers that are composed of multiple items. 
E.g.,\n 'Content-Disposition' fields.\n\n :param header_parts:\n A sequence of (k, v) typles or a :class:`dict` of (k, v) to format\n as `k1=\"v1\"; k2=\"v2\"; ...`.\n \"\"\"\n parts = []\n iterable = header_parts\n if isinstance(header_parts, dict):\n iterable = header_parts.items()\n\n for name, value in iterable:\n if value:\n parts.append(self._render_part(name, value))\n\n return '; '.join(parts)\n\n def render_headers(self):\n \"\"\"\n Renders the headers for this request field.\n \"\"\"\n lines = []\n\n sort_keys = ['Content-Disposition', 'Content-Type', 'Content-Location']\n for sort_key in sort_keys:\n if self.headers.get(sort_key, False):\n lines.append('%s: %s' % (sort_key, self.headers[sort_key]))\n\n for header_name, header_value in self.headers.items():\n if header_name not in sort_keys:\n if header_value:\n lines.append('%s: %s' % (header_name, header_value))\n\n lines.append('\\r\\n')\n return '\\r\\n'.join(lines)\n\n def make_multipart(self, content_disposition=None, content_type=None,\n content_location=None):\n \"\"\"\n Makes this request field into a multipart request field.\n\n This method overrides \"Content-Disposition\", \"Content-Type\" and\n \"Content-Location\" headers to the request parameter.\n\n :param content_type:\n The 'Content-Type' of the request body.\n :param content_location:\n The 'Content-Location' of the request body.\n\n \"\"\"\n self.headers['Content-Disposition'] = content_disposition or 'form-data'\n self.headers['Content-Disposition'] += '; '.join([\n '', self._render_parts(\n (('name', self._name), ('filename', self._filename))\n )\n ])\n self.headers['Content-Type'] = content_type\n self.headers['Content-Location'] = content_location\n", "path": "urllib3/fields.py"}]} | 2,046 | 187 |
gh_patches_debug_17063 | rasdani/github-patches | git_diff | elastic__ecs-813 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Honor reusable.top_level: false for generated ES templates
When setting `reusable.top_level: false`, the generated Elasticsearch template still includes the field set at the top level. I would not expect these fields to be included at the top level of the template when `top_level` is `false`.
--- END ISSUE ---
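For reference, `reusable.top_level` lives in the field set's schema definition. Loaded with `yaml.safe_load` (as `schema_reader.load_schemas()` does), a field set using the setting looks roughly like the abridged, hypothetical example below — the field set name and `expected` list are illustrative, not taken from the report:

```python
# What read_schema_file() would yield for one field set before cleanup:
os_fieldset = {
    'name': 'os',
    'description': 'OS fields contain information about the operating system.',
    'reusable': {
        'top_level': False,   # should keep os.* out of the template root
        'expected': ['host', 'observer', 'user_agent'],
    },
    'fields': [
        {'name': 'name', 'type': 'keyword',
         'description': 'Operating system name, without the version.'},
    ],
}
```

With this definition, the `os.*` fields should only appear nested under the `expected` field sets in the generated template, never as top-level `os.*` entries.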
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/generator.py`
Content:
```
1 import argparse
2 import glob
3 import os
4 import schema_reader
5 import yaml
6 from generators import intermediate_files
7 from generators import csv_generator
8 from generators import es_template
9 from generators import beats
10 from generators import asciidoc_fields
11 from generators import ecs_helpers
12
13
14 def main():
15 args = argument_parser()
16
17 ecs_version = read_version()
18 print('Running generator. ECS version ' + ecs_version)
19
20 # Load the default schemas
21 print('Loading default schemas')
22 intermediate_fields = schema_reader.load_schemas()
23
24 # Maybe load user specified directory of schemas
25 if args.include:
26 include_glob = ecs_helpers.get_glob_files(args.include, ecs_helpers.YAML_EXT)
27
28 print('Loading user defined schemas: {0}'.format(include_glob))
29
30 intermediate_custom = schema_reader.load_schemas(include_glob)
31 schema_reader.merge_schema_fields(intermediate_fields, intermediate_custom)
32
33 schema_reader.assemble_reusables(intermediate_fields)
34
35 if args.subset:
36 subset = {}
37 for arg in args.subset:
38 for file in glob.glob(arg):
39 with open(file) as f:
40 raw = yaml.safe_load(f.read())
41 ecs_helpers.recursive_merge_subset_dicts(subset, raw)
42 if not subset:
43 raise ValueError('Subset option specified but no subsets found')
44 intermediate_fields = ecs_helpers.fields_subset(subset, intermediate_fields)
45
46 (nested, flat) = schema_reader.generate_nested_flat(intermediate_fields)
47
48 # default location to save files
49 out_dir = 'generated'
50 docs_dir = 'docs'
51 if args.out:
52 out_dir = os.path.join(args.out, out_dir)
53 docs_dir = os.path.join(args.out, docs_dir)
54
55 ecs_helpers.make_dirs(out_dir)
56 ecs_helpers.make_dirs(docs_dir)
57
58 intermediate_files.generate(nested, flat, out_dir)
59 if args.intermediate_only:
60 exit()
61
62 csv_generator.generate(flat, ecs_version, out_dir)
63 es_template.generate(flat, ecs_version, out_dir)
64 if args.include or args.subset:
65 exit()
66 beats.generate(nested, ecs_version, out_dir)
67 asciidoc_fields.generate(intermediate_fields, ecs_version, docs_dir)
68
69
70 def argument_parser():
71 parser = argparse.ArgumentParser()
72 parser.add_argument('--intermediate-only', action='store_true',
73 help='generate intermediary files only')
74 parser.add_argument('--include', nargs='+',
75 help='include user specified directory of custom field definitions')
76 parser.add_argument('--subset', nargs='+',
77 help='render a subset of the schema')
78 parser.add_argument('--out', action='store', help='directory to store the generated files')
79 return parser.parse_args()
80
81
82 def read_version(file='version'):
83 with open(file, 'r') as infile:
84 return infile.read().rstrip()
85
86
87 if __name__ == '__main__':
88 main()
89
```
Path: `scripts/schema_reader.py`
Content:
```
1 import os
2 import yaml
3 import copy
4 from generators import ecs_helpers
5
6 # This script has a few entrypoints. The code related to each entrypoint is grouped
7 # together between comments.
8 #
9 # load_schemas()
10 # yml file load (ECS or custom) + cleanup of field set attributes.
11 # merge_schema_fields()
12 # Merge ECS field sets with custom field sets
13 # generate_nested_flat()
14 # Finalize the intermediate representation of all fields. Fills field defaults,
15 # performs field nestings, and precalculates many values used by various generators.
16
17 # Loads schemas and perform cleanup of schema attributes
18
19
20 def load_schemas(files=ecs_helpers.ecs_files()):
21 """Loads the list of files and performs schema level cleanup"""
22 fields_intermediate = load_schema_files(files)
23 finalize_schemas(fields_intermediate)
24 return fields_intermediate
25
26
27 def load_schema_files(files):
28 fields_nested = {}
29 for f in files:
30 new_fields = read_schema_file(f)
31 fields_nested.update(new_fields)
32 return fields_nested
33
34
35 def read_schema_file(file):
36 """Read a raw schema yml into a map, removing the wrapping array in each file"""
37 with open(file) as f:
38 raw = yaml.safe_load(f.read())
39 fields = {}
40 for field_set in raw:
41 fields[field_set['name']] = field_set
42 return fields
43
44
45 def finalize_schemas(fields_nested):
46 """Clean up all schema level attributes"""
47 for schema_name in fields_nested:
48 schema = fields_nested[schema_name]
49 schema_cleanup_values(schema)
50
51
52 def schema_cleanup_values(schema):
53 """Clean up one schema"""
54 ecs_helpers.dict_clean_string_values(schema)
55 schema_set_default_values(schema)
56 schema_set_fieldset_prefix(schema)
57 schema_fields_as_dictionary(schema)
58
59
60 def schema_set_default_values(schema):
61 schema['type'] = 'group'
62 schema.setdefault('group', 2)
63 schema.setdefault('short', schema['description'])
64 if "\n" in schema['short']:
65 raise ValueError("Short descriptions must be single line.\nFieldset: {}\n{}".format(schema['name'], schema))
66
67
68 def schema_set_fieldset_prefix(schema):
69 if 'root' in schema and schema['root']:
70 schema['prefix'] = ''
71 else:
72 schema['prefix'] = schema['name'] + '.'
73
74
75 def schema_fields_as_dictionary(schema):
76 """Re-nest the array of field names as a dictionary of 'fieldname' => { field definition }"""
77 field_array = schema.pop('fields', [])
78 schema['fields'] = {}
79 for field in field_array:
80 nested_levels = field['name'].split('.')
81 nested_schema = schema['fields']
82 for level in nested_levels[:-1]:
83 if level not in nested_schema:
84 nested_schema[level] = {}
85 if 'fields' not in nested_schema[level]:
86 nested_schema[level]['fields'] = {}
87 nested_schema = nested_schema[level]['fields']
88 if nested_levels[-1] not in nested_schema:
89 nested_schema[nested_levels[-1]] = {}
90 # Only leaf fields will have field details so we can identify them later
91 nested_schema[nested_levels[-1]]['field_details'] = field
92
93 # Merge ECS field sets with custom field sets
94
95
96 def merge_schema_fields(a, b):
97 """Merge ECS field sets with custom field sets"""
98 for key in b:
99 if key not in a:
100 a[key] = b[key]
101 else:
102 a_type = a[key].get('field_details', {}).get('type', 'object')
103 b_type = b[key].get('field_details', {}).get('type', 'object')
104 if a_type != b_type:
105 raise ValueError('Schemas unmergeable: type {} does not match type {}'.format(a_type, b_type))
106 elif a_type not in ['object', 'nested']:
107 print('Warning: dropping field {}, already defined'.format(key))
108 continue
109 # reusable should only be found at the top level of a fieldset
110 if 'reusable' in b[key]:
111 a[key].setdefault('reusable', {})
112 a[key]['reusable']['top_level'] = a[key]['reusable'].get(
113 'top_level', False) or b[key]['reusable']['top_level']
114 a[key]['reusable'].setdefault('expected', [])
115 a[key]['reusable']['expected'].extend(b[key]['reusable']['expected'])
116 if 'fields' in b[key]:
117 a[key].setdefault('fields', {})
118 merge_schema_fields(a[key]['fields'], b[key]['fields'])
119
120 # Finalize the intermediate representation of all fields.
121
122
123 def generate_nested_flat(fields_intermediate):
124 for field_name, field in fields_intermediate.items():
125 nestings = find_nestings(field['fields'], field_name + ".")
126 nestings.sort()
127 if len(nestings) > 0:
128 field['nestings'] = nestings
129 fields_nested = generate_partially_flattened_fields(fields_intermediate)
130 fields_flat = generate_fully_flattened_fields(fields_intermediate)
131 return (fields_nested, fields_flat)
132
133
134 def assemble_reusables(fields_nested):
135 # This happens as a second pass, so that all fieldsets have their
136 # fields array replaced with a fields dictionary.
137 for schema_name in fields_nested:
138 schema = fields_nested[schema_name]
139 duplicate_reusable_fieldsets(schema, fields_nested)
140 cleanup_fields_recursive(fields_nested, "")
141
142
143 def duplicate_reusable_fieldsets(schema, fields_nested):
144 """Copies reusable field definitions to their expected places"""
145 # Note: across this schema reader, functions are modifying dictionaries passed
146 # as arguments, which is usually a risk of unintended side effects.
147 # Here it simplifies the nesting of 'group' under 'user',
148 # which is in turn reusable in a few places.
149 if 'reusable' in schema:
150 for new_nesting in schema['reusable']['expected']:
151 split_flat_name = new_nesting.split('.')
152 top_level = split_flat_name[0]
153 # List field set names expected under another field set.
154 # E.g. host.nestings = [ 'geo', 'os', 'user' ]
155 nested_schema = fields_nested[top_level]['fields']
156 for level in split_flat_name[1:]:
157 nested_schema = nested_schema.get(level, None)
158 if not nested_schema:
159 raise ValueError('Field {} in path {} not found in schema'.format(level, new_nesting))
160 if nested_schema.get('reusable', None):
161 raise ValueError(
162 'Reusable fields cannot be put inside other reusable fields except when the destination reusable is at the top level')
163 nested_schema = nested_schema.setdefault('fields', {})
164 nested_schema[schema['name']] = schema
165
166
167 def cleanup_fields_recursive(fields, prefix, original_fieldset=None):
168 for (name, field) in fields.items():
169 # Copy field here so reusable field sets become unique copies instead of references to the original set
170 field = field.copy()
171 fields[name] = field
172 temp_original_fieldset = name if ('reusable' in field and prefix != "") else original_fieldset
173 if 'field_details' in field:
174 # Deep copy the field details so we can insert different flat names for each reusable fieldset
175 field_details = copy.deepcopy(field['field_details'])
176 new_flat_name = prefix + name
177 field_details['flat_name'] = new_flat_name
178 field_details['dashed_name'] = new_flat_name.replace('.', '-').replace('_', '-')
179 if temp_original_fieldset:
180 field_details['original_fieldset'] = temp_original_fieldset
181 ecs_helpers.dict_clean_string_values(field_details)
182 field_set_defaults(field_details)
183 field['field_details'] = field_details
184 if 'fields' in field:
185 field['fields'] = field['fields'].copy()
186 new_prefix = prefix + name + "."
187 if 'root' in field and field['root']:
188 new_prefix = ""
189 cleanup_fields_recursive(field['fields'], new_prefix, temp_original_fieldset)
190
191
192 def field_set_defaults(field):
193 field.setdefault('normalize', [])
194 if field['type'] == 'keyword':
195 field.setdefault('ignore_above', 1024)
196 if field['type'] == 'text':
197 field.setdefault('norms', False)
198 if field['type'] == 'object':
199 field.setdefault('object_type', 'keyword')
200
201 field.setdefault('short', field['description'])
202 if "\n" in field['short']:
203 raise ValueError("Short descriptions must be single line.\nField: {}\n{}".format(field['flat_name'], field))
204 # print(" Short descriptions must be single line. Field: {}".format(field['flat_name']))
205
206 if 'index' in field and not field['index']:
207 field.setdefault('doc_values', False)
208 if 'multi_fields' in field:
209 field_set_multi_field_defaults(field)
210
211
212 def field_set_multi_field_defaults(parent_field):
213 """Sets defaults for each nested field in the multi_fields array"""
214 for mf in parent_field['multi_fields']:
215 mf.setdefault('name', mf['type'])
216 if mf['type'] == 'text':
217 mf.setdefault('norms', False)
218 mf['flat_name'] = parent_field['flat_name'] + '.' + mf['name']
219
220
221 def find_nestings(fields, prefix):
222 """Recursively finds all reusable fields in the fields dictionary."""
223 nestings = []
224 for field_name, field in fields.items():
225 if 'reusable' in field:
226 nestings.append(prefix + field_name)
227 if 'fields' in field:
228 nestings.extend(find_nestings(field['fields'], prefix + field_name + '.'))
229 return nestings
230
231
232 def generate_partially_flattened_fields(fields_nested):
233 flat_fields = {}
234 for (name, field) in fields_nested.items():
235 # assigning field.copy() adds all the top level schema fields, has to be a copy since we're about
236 # to reassign the 'fields' key and we don't want to modify fields_nested
237 flat_fields[name] = field.copy()
238 flat_fields[name]['fields'] = flatten_fields(field['fields'], "")
239 return flat_fields
240
241
242 def generate_fully_flattened_fields(fields_nested):
243 return flatten_fields(fields_nested, "")
244
245
246 def flatten_fields(fields, key_prefix):
247 flat_fields = {}
248 for (name, field) in fields.items():
249 new_key = key_prefix + name
250 if 'field_details' in field:
251 flat_fields[new_key] = field['field_details'].copy()
252 if 'fields' in field:
253 new_prefix = new_key + "."
254 if 'root' in field and field['root']:
255 new_prefix = ""
256 flat_fields.update(flatten_fields(field['fields'], new_prefix))
257 return flat_fields
258
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/generator.py b/scripts/generator.py
--- a/scripts/generator.py
+++ b/scripts/generator.py
@@ -13,6 +13,9 @@
def main():
args = argument_parser()
+ # Get rid of empty include
+ if args.include and [''] == args.include:
+ args.include.clear()
ecs_version = read_version()
print('Running generator. ECS version ' + ecs_version)
diff --git a/scripts/schema_reader.py b/scripts/schema_reader.py
--- a/scripts/schema_reader.py
+++ b/scripts/schema_reader.py
@@ -240,7 +240,16 @@
def generate_fully_flattened_fields(fields_nested):
- return flatten_fields(fields_nested, "")
+ flattened = flatten_fields(remove_non_root_reusables(fields_nested), "")
+ return flattened
+
+
+def remove_non_root_reusables(fields_nested):
+ fields = {}
+ for (name, field) in fields_nested.items():
+ if 'reusable' not in field or ('reusable' in field and field['reusable']['top_level']):
+ fields[name] = field
+ return fields
def flatten_fields(fields, key_prefix):
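In short, the patch filters the nested field map before building the fully flattened representation that feeds `es_template.generate()`, so reusable field sets marked `top_level: false` never reach the template root (the separate `generator.py` hunk just guards against an empty `--include` argument). A condensed, illustrative mirror of the new helper's behaviour — the field set names below are made up:

```python
def remove_non_root_reusables(fields_nested):
    # Keep plain field sets and reusables explicitly allowed at the top level.
    return {name: field for name, field in fields_nested.items()
            if 'reusable' not in field or field['reusable']['top_level']}

fields_nested = {
    'host': {'name': 'host', 'fields': {}},
    'os': {'name': 'os', 'fields': {},
           'reusable': {'top_level': False, 'expected': ['host']}},
    'user': {'name': 'user', 'fields': {},
             'reusable': {'top_level': True, 'expected': ['destination', 'source']}},
}

print(sorted(remove_non_root_reusables(fields_nested)))  # ['host', 'user']
```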
| {"golden_diff": "diff --git a/scripts/generator.py b/scripts/generator.py\n--- a/scripts/generator.py\n+++ b/scripts/generator.py\n@@ -13,6 +13,9 @@\n \n def main():\n args = argument_parser()\n+ # Get rid of empty include\n+ if args.include and [''] == args.include:\n+ args.include.clear()\n \n ecs_version = read_version()\n print('Running generator. ECS version ' + ecs_version)\ndiff --git a/scripts/schema_reader.py b/scripts/schema_reader.py\n--- a/scripts/schema_reader.py\n+++ b/scripts/schema_reader.py\n@@ -240,7 +240,16 @@\n \n \n def generate_fully_flattened_fields(fields_nested):\n- return flatten_fields(fields_nested, \"\")\n+ flattened = flatten_fields(remove_non_root_reusables(fields_nested), \"\")\n+ return flattened\n+\n+\n+def remove_non_root_reusables(fields_nested):\n+ fields = {}\n+ for (name, field) in fields_nested.items():\n+ if 'reusable' not in field or ('reusable' in field and field['reusable']['top_level']):\n+ fields[name] = field\n+ return fields\n \n \n def flatten_fields(fields, key_prefix):\n", "issue": "Honor reusable.top_level: false for generated ES templates\nWhen setting `reusable.top_level: false` the generated Elasticsearch template still includes the field in the template at the top level. I would not expect this field to be included at the top level of the template when this is `false`.\n", "before_files": [{"content": "import argparse\nimport glob\nimport os\nimport schema_reader\nimport yaml\nfrom generators import intermediate_files\nfrom generators import csv_generator\nfrom generators import es_template\nfrom generators import beats\nfrom generators import asciidoc_fields\nfrom generators import ecs_helpers\n\n\ndef main():\n args = argument_parser()\n\n ecs_version = read_version()\n print('Running generator. 
ECS version ' + ecs_version)\n\n # Load the default schemas\n print('Loading default schemas')\n intermediate_fields = schema_reader.load_schemas()\n\n # Maybe load user specified directory of schemas\n if args.include:\n include_glob = ecs_helpers.get_glob_files(args.include, ecs_helpers.YAML_EXT)\n\n print('Loading user defined schemas: {0}'.format(include_glob))\n\n intermediate_custom = schema_reader.load_schemas(include_glob)\n schema_reader.merge_schema_fields(intermediate_fields, intermediate_custom)\n\n schema_reader.assemble_reusables(intermediate_fields)\n\n if args.subset:\n subset = {}\n for arg in args.subset:\n for file in glob.glob(arg):\n with open(file) as f:\n raw = yaml.safe_load(f.read())\n ecs_helpers.recursive_merge_subset_dicts(subset, raw)\n if not subset:\n raise ValueError('Subset option specified but no subsets found')\n intermediate_fields = ecs_helpers.fields_subset(subset, intermediate_fields)\n\n (nested, flat) = schema_reader.generate_nested_flat(intermediate_fields)\n\n # default location to save files\n out_dir = 'generated'\n docs_dir = 'docs'\n if args.out:\n out_dir = os.path.join(args.out, out_dir)\n docs_dir = os.path.join(args.out, docs_dir)\n\n ecs_helpers.make_dirs(out_dir)\n ecs_helpers.make_dirs(docs_dir)\n\n intermediate_files.generate(nested, flat, out_dir)\n if args.intermediate_only:\n exit()\n\n csv_generator.generate(flat, ecs_version, out_dir)\n es_template.generate(flat, ecs_version, out_dir)\n if args.include or args.subset:\n exit()\n beats.generate(nested, ecs_version, out_dir)\n asciidoc_fields.generate(intermediate_fields, ecs_version, docs_dir)\n\n\ndef argument_parser():\n parser = argparse.ArgumentParser()\n parser.add_argument('--intermediate-only', action='store_true',\n help='generate intermediary files only')\n parser.add_argument('--include', nargs='+',\n help='include user specified directory of custom field definitions')\n parser.add_argument('--subset', nargs='+',\n help='render a subset of the schema')\n parser.add_argument('--out', action='store', help='directory to store the generated files')\n return parser.parse_args()\n\n\ndef read_version(file='version'):\n with open(file, 'r') as infile:\n return infile.read().rstrip()\n\n\nif __name__ == '__main__':\n main()\n", "path": "scripts/generator.py"}, {"content": "import os\nimport yaml\nimport copy\nfrom generators import ecs_helpers\n\n# This script has a few entrypoints. The code related to each entrypoint is grouped\n# together between comments.\n#\n# load_schemas()\n# yml file load (ECS or custom) + cleanup of field set attributes.\n# merge_schema_fields()\n# Merge ECS field sets with custom field sets\n# generate_nested_flat()\n# Finalize the intermediate representation of all fields. 
Fills field defaults,\n# performs field nestings, and precalculates many values used by various generators.\n\n# Loads schemas and perform cleanup of schema attributes\n\n\ndef load_schemas(files=ecs_helpers.ecs_files()):\n \"\"\"Loads the list of files and performs schema level cleanup\"\"\"\n fields_intermediate = load_schema_files(files)\n finalize_schemas(fields_intermediate)\n return fields_intermediate\n\n\ndef load_schema_files(files):\n fields_nested = {}\n for f in files:\n new_fields = read_schema_file(f)\n fields_nested.update(new_fields)\n return fields_nested\n\n\ndef read_schema_file(file):\n \"\"\"Read a raw schema yml into a map, removing the wrapping array in each file\"\"\"\n with open(file) as f:\n raw = yaml.safe_load(f.read())\n fields = {}\n for field_set in raw:\n fields[field_set['name']] = field_set\n return fields\n\n\ndef finalize_schemas(fields_nested):\n \"\"\"Clean up all schema level attributes\"\"\"\n for schema_name in fields_nested:\n schema = fields_nested[schema_name]\n schema_cleanup_values(schema)\n\n\ndef schema_cleanup_values(schema):\n \"\"\"Clean up one schema\"\"\"\n ecs_helpers.dict_clean_string_values(schema)\n schema_set_default_values(schema)\n schema_set_fieldset_prefix(schema)\n schema_fields_as_dictionary(schema)\n\n\ndef schema_set_default_values(schema):\n schema['type'] = 'group'\n schema.setdefault('group', 2)\n schema.setdefault('short', schema['description'])\n if \"\\n\" in schema['short']:\n raise ValueError(\"Short descriptions must be single line.\\nFieldset: {}\\n{}\".format(schema['name'], schema))\n\n\ndef schema_set_fieldset_prefix(schema):\n if 'root' in schema and schema['root']:\n schema['prefix'] = ''\n else:\n schema['prefix'] = schema['name'] + '.'\n\n\ndef schema_fields_as_dictionary(schema):\n \"\"\"Re-nest the array of field names as a dictionary of 'fieldname' => { field definition }\"\"\"\n field_array = schema.pop('fields', [])\n schema['fields'] = {}\n for field in field_array:\n nested_levels = field['name'].split('.')\n nested_schema = schema['fields']\n for level in nested_levels[:-1]:\n if level not in nested_schema:\n nested_schema[level] = {}\n if 'fields' not in nested_schema[level]:\n nested_schema[level]['fields'] = {}\n nested_schema = nested_schema[level]['fields']\n if nested_levels[-1] not in nested_schema:\n nested_schema[nested_levels[-1]] = {}\n # Only leaf fields will have field details so we can identify them later\n nested_schema[nested_levels[-1]]['field_details'] = field\n\n# Merge ECS field sets with custom field sets\n\n\ndef merge_schema_fields(a, b):\n \"\"\"Merge ECS field sets with custom field sets\"\"\"\n for key in b:\n if key not in a:\n a[key] = b[key]\n else:\n a_type = a[key].get('field_details', {}).get('type', 'object')\n b_type = b[key].get('field_details', {}).get('type', 'object')\n if a_type != b_type:\n raise ValueError('Schemas unmergeable: type {} does not match type {}'.format(a_type, b_type))\n elif a_type not in ['object', 'nested']:\n print('Warning: dropping field {}, already defined'.format(key))\n continue\n # reusable should only be found at the top level of a fieldset\n if 'reusable' in b[key]:\n a[key].setdefault('reusable', {})\n a[key]['reusable']['top_level'] = a[key]['reusable'].get(\n 'top_level', False) or b[key]['reusable']['top_level']\n a[key]['reusable'].setdefault('expected', [])\n a[key]['reusable']['expected'].extend(b[key]['reusable']['expected'])\n if 'fields' in b[key]:\n a[key].setdefault('fields', {})\n merge_schema_fields(a[key]['fields'], 
b[key]['fields'])\n\n# Finalize the intermediate representation of all fields.\n\n\ndef generate_nested_flat(fields_intermediate):\n for field_name, field in fields_intermediate.items():\n nestings = find_nestings(field['fields'], field_name + \".\")\n nestings.sort()\n if len(nestings) > 0:\n field['nestings'] = nestings\n fields_nested = generate_partially_flattened_fields(fields_intermediate)\n fields_flat = generate_fully_flattened_fields(fields_intermediate)\n return (fields_nested, fields_flat)\n\n\ndef assemble_reusables(fields_nested):\n # This happens as a second pass, so that all fieldsets have their\n # fields array replaced with a fields dictionary.\n for schema_name in fields_nested:\n schema = fields_nested[schema_name]\n duplicate_reusable_fieldsets(schema, fields_nested)\n cleanup_fields_recursive(fields_nested, \"\")\n\n\ndef duplicate_reusable_fieldsets(schema, fields_nested):\n \"\"\"Copies reusable field definitions to their expected places\"\"\"\n # Note: across this schema reader, functions are modifying dictionaries passed\n # as arguments, which is usually a risk of unintended side effects.\n # Here it simplifies the nesting of 'group' under 'user',\n # which is in turn reusable in a few places.\n if 'reusable' in schema:\n for new_nesting in schema['reusable']['expected']:\n split_flat_name = new_nesting.split('.')\n top_level = split_flat_name[0]\n # List field set names expected under another field set.\n # E.g. host.nestings = [ 'geo', 'os', 'user' ]\n nested_schema = fields_nested[top_level]['fields']\n for level in split_flat_name[1:]:\n nested_schema = nested_schema.get(level, None)\n if not nested_schema:\n raise ValueError('Field {} in path {} not found in schema'.format(level, new_nesting))\n if nested_schema.get('reusable', None):\n raise ValueError(\n 'Reusable fields cannot be put inside other reusable fields except when the destination reusable is at the top level')\n nested_schema = nested_schema.setdefault('fields', {})\n nested_schema[schema['name']] = schema\n\n\ndef cleanup_fields_recursive(fields, prefix, original_fieldset=None):\n for (name, field) in fields.items():\n # Copy field here so reusable field sets become unique copies instead of references to the original set\n field = field.copy()\n fields[name] = field\n temp_original_fieldset = name if ('reusable' in field and prefix != \"\") else original_fieldset\n if 'field_details' in field:\n # Deep copy the field details so we can insert different flat names for each reusable fieldset\n field_details = copy.deepcopy(field['field_details'])\n new_flat_name = prefix + name\n field_details['flat_name'] = new_flat_name\n field_details['dashed_name'] = new_flat_name.replace('.', '-').replace('_', '-')\n if temp_original_fieldset:\n field_details['original_fieldset'] = temp_original_fieldset\n ecs_helpers.dict_clean_string_values(field_details)\n field_set_defaults(field_details)\n field['field_details'] = field_details\n if 'fields' in field:\n field['fields'] = field['fields'].copy()\n new_prefix = prefix + name + \".\"\n if 'root' in field and field['root']:\n new_prefix = \"\"\n cleanup_fields_recursive(field['fields'], new_prefix, temp_original_fieldset)\n\n\ndef field_set_defaults(field):\n field.setdefault('normalize', [])\n if field['type'] == 'keyword':\n field.setdefault('ignore_above', 1024)\n if field['type'] == 'text':\n field.setdefault('norms', False)\n if field['type'] == 'object':\n field.setdefault('object_type', 'keyword')\n\n field.setdefault('short', field['description'])\n if 
\"\\n\" in field['short']:\n raise ValueError(\"Short descriptions must be single line.\\nField: {}\\n{}\".format(field['flat_name'], field))\n # print(\" Short descriptions must be single line. Field: {}\".format(field['flat_name']))\n\n if 'index' in field and not field['index']:\n field.setdefault('doc_values', False)\n if 'multi_fields' in field:\n field_set_multi_field_defaults(field)\n\n\ndef field_set_multi_field_defaults(parent_field):\n \"\"\"Sets defaults for each nested field in the multi_fields array\"\"\"\n for mf in parent_field['multi_fields']:\n mf.setdefault('name', mf['type'])\n if mf['type'] == 'text':\n mf.setdefault('norms', False)\n mf['flat_name'] = parent_field['flat_name'] + '.' + mf['name']\n\n\ndef find_nestings(fields, prefix):\n \"\"\"Recursively finds all reusable fields in the fields dictionary.\"\"\"\n nestings = []\n for field_name, field in fields.items():\n if 'reusable' in field:\n nestings.append(prefix + field_name)\n if 'fields' in field:\n nestings.extend(find_nestings(field['fields'], prefix + field_name + '.'))\n return nestings\n\n\ndef generate_partially_flattened_fields(fields_nested):\n flat_fields = {}\n for (name, field) in fields_nested.items():\n # assigning field.copy() adds all the top level schema fields, has to be a copy since we're about\n # to reassign the 'fields' key and we don't want to modify fields_nested\n flat_fields[name] = field.copy()\n flat_fields[name]['fields'] = flatten_fields(field['fields'], \"\")\n return flat_fields\n\n\ndef generate_fully_flattened_fields(fields_nested):\n return flatten_fields(fields_nested, \"\")\n\n\ndef flatten_fields(fields, key_prefix):\n flat_fields = {}\n for (name, field) in fields.items():\n new_key = key_prefix + name\n if 'field_details' in field:\n flat_fields[new_key] = field['field_details'].copy()\n if 'fields' in field:\n new_prefix = new_key + \".\"\n if 'root' in field and field['root']:\n new_prefix = \"\"\n flat_fields.update(flatten_fields(field['fields'], new_prefix))\n return flat_fields\n", "path": "scripts/schema_reader.py"}], "after_files": [{"content": "import argparse\nimport glob\nimport os\nimport schema_reader\nimport yaml\nfrom generators import intermediate_files\nfrom generators import csv_generator\nfrom generators import es_template\nfrom generators import beats\nfrom generators import asciidoc_fields\nfrom generators import ecs_helpers\n\n\ndef main():\n args = argument_parser()\n # Get rid of empty include\n if args.include and [''] == args.include:\n args.include.clear()\n\n ecs_version = read_version()\n print('Running generator. 
ECS version ' + ecs_version)\n\n # Load the default schemas\n print('Loading default schemas')\n intermediate_fields = schema_reader.load_schemas()\n\n # Maybe load user specified directory of schemas\n if args.include:\n include_glob = ecs_helpers.get_glob_files(args.include, ecs_helpers.YAML_EXT)\n\n print('Loading user defined schemas: {0}'.format(include_glob))\n\n intermediate_custom = schema_reader.load_schemas(include_glob)\n schema_reader.merge_schema_fields(intermediate_fields, intermediate_custom)\n\n schema_reader.assemble_reusables(intermediate_fields)\n\n if args.subset:\n subset = {}\n for arg in args.subset:\n for file in glob.glob(arg):\n with open(file) as f:\n raw = yaml.safe_load(f.read())\n ecs_helpers.recursive_merge_subset_dicts(subset, raw)\n if not subset:\n raise ValueError('Subset option specified but no subsets found')\n intermediate_fields = ecs_helpers.fields_subset(subset, intermediate_fields)\n\n (nested, flat) = schema_reader.generate_nested_flat(intermediate_fields)\n\n # default location to save files\n out_dir = 'generated'\n docs_dir = 'docs'\n if args.out:\n out_dir = os.path.join(args.out, out_dir)\n docs_dir = os.path.join(args.out, docs_dir)\n\n ecs_helpers.make_dirs(out_dir)\n ecs_helpers.make_dirs(docs_dir)\n\n intermediate_files.generate(nested, flat, out_dir)\n if args.intermediate_only:\n exit()\n\n csv_generator.generate(flat, ecs_version, out_dir)\n es_template.generate(flat, ecs_version, out_dir)\n if args.include or args.subset:\n exit()\n beats.generate(nested, ecs_version, out_dir)\n asciidoc_fields.generate(intermediate_fields, ecs_version, docs_dir)\n\n\ndef argument_parser():\n parser = argparse.ArgumentParser()\n parser.add_argument('--intermediate-only', action='store_true',\n help='generate intermediary files only')\n parser.add_argument('--include', nargs='+',\n help='include user specified directory of custom field definitions')\n parser.add_argument('--subset', nargs='+',\n help='render a subset of the schema')\n parser.add_argument('--out', action='store', help='directory to store the generated files')\n return parser.parse_args()\n\n\ndef read_version(file='version'):\n with open(file, 'r') as infile:\n return infile.read().rstrip()\n\n\nif __name__ == '__main__':\n main()\n", "path": "scripts/generator.py"}, {"content": "import os\nimport yaml\nimport copy\nfrom generators import ecs_helpers\n\n# This script has a few entrypoints. The code related to each entrypoint is grouped\n# together between comments.\n#\n# load_schemas()\n# yml file load (ECS or custom) + cleanup of field set attributes.\n# merge_schema_fields()\n# Merge ECS field sets with custom field sets\n# generate_nested_flat()\n# Finalize the intermediate representation of all fields. 
Fills field defaults,\n# performs field nestings, and precalculates many values used by various generators.\n\n# Loads schemas and perform cleanup of schema attributes\n\n\ndef load_schemas(files=ecs_helpers.ecs_files()):\n \"\"\"Loads the list of files and performs schema level cleanup\"\"\"\n fields_intermediate = load_schema_files(files)\n finalize_schemas(fields_intermediate)\n return fields_intermediate\n\n\ndef load_schema_files(files):\n fields_nested = {}\n for f in files:\n new_fields = read_schema_file(f)\n fields_nested.update(new_fields)\n return fields_nested\n\n\ndef read_schema_file(file):\n \"\"\"Read a raw schema yml into a map, removing the wrapping array in each file\"\"\"\n with open(file) as f:\n raw = yaml.safe_load(f.read())\n fields = {}\n for field_set in raw:\n fields[field_set['name']] = field_set\n return fields\n\n\ndef finalize_schemas(fields_nested):\n \"\"\"Clean up all schema level attributes\"\"\"\n for schema_name in fields_nested:\n schema = fields_nested[schema_name]\n schema_cleanup_values(schema)\n\n\ndef schema_cleanup_values(schema):\n \"\"\"Clean up one schema\"\"\"\n ecs_helpers.dict_clean_string_values(schema)\n schema_set_default_values(schema)\n schema_set_fieldset_prefix(schema)\n schema_fields_as_dictionary(schema)\n\n\ndef schema_set_default_values(schema):\n schema['type'] = 'group'\n schema.setdefault('group', 2)\n schema.setdefault('short', schema['description'])\n if \"\\n\" in schema['short']:\n raise ValueError(\"Short descriptions must be single line.\\nFieldset: {}\\n{}\".format(schema['name'], schema))\n\n\ndef schema_set_fieldset_prefix(schema):\n if 'root' in schema and schema['root']:\n schema['prefix'] = ''\n else:\n schema['prefix'] = schema['name'] + '.'\n\n\ndef schema_fields_as_dictionary(schema):\n \"\"\"Re-nest the array of field names as a dictionary of 'fieldname' => { field definition }\"\"\"\n field_array = schema.pop('fields', [])\n schema['fields'] = {}\n for field in field_array:\n nested_levels = field['name'].split('.')\n nested_schema = schema['fields']\n for level in nested_levels[:-1]:\n if level not in nested_schema:\n nested_schema[level] = {}\n if 'fields' not in nested_schema[level]:\n nested_schema[level]['fields'] = {}\n nested_schema = nested_schema[level]['fields']\n if nested_levels[-1] not in nested_schema:\n nested_schema[nested_levels[-1]] = {}\n # Only leaf fields will have field details so we can identify them later\n nested_schema[nested_levels[-1]]['field_details'] = field\n\n# Merge ECS field sets with custom field sets\n\n\ndef merge_schema_fields(a, b):\n \"\"\"Merge ECS field sets with custom field sets\"\"\"\n for key in b:\n if key not in a:\n a[key] = b[key]\n else:\n a_type = a[key].get('field_details', {}).get('type', 'object')\n b_type = b[key].get('field_details', {}).get('type', 'object')\n if a_type != b_type:\n raise ValueError('Schemas unmergeable: type {} does not match type {}'.format(a_type, b_type))\n elif a_type not in ['object', 'nested']:\n print('Warning: dropping field {}, already defined'.format(key))\n continue\n # reusable should only be found at the top level of a fieldset\n if 'reusable' in b[key]:\n a[key].setdefault('reusable', {})\n a[key]['reusable']['top_level'] = a[key]['reusable'].get(\n 'top_level', False) or b[key]['reusable']['top_level']\n a[key]['reusable'].setdefault('expected', [])\n a[key]['reusable']['expected'].extend(b[key]['reusable']['expected'])\n if 'fields' in b[key]:\n a[key].setdefault('fields', {})\n merge_schema_fields(a[key]['fields'], 
b[key]['fields'])\n\n# Finalize the intermediate representation of all fields.\n\n\ndef generate_nested_flat(fields_intermediate):\n for field_name, field in fields_intermediate.items():\n nestings = find_nestings(field['fields'], field_name + \".\")\n nestings.sort()\n if len(nestings) > 0:\n field['nestings'] = nestings\n fields_nested = generate_partially_flattened_fields(fields_intermediate)\n fields_flat = generate_fully_flattened_fields(fields_intermediate)\n return (fields_nested, fields_flat)\n\n\ndef assemble_reusables(fields_nested):\n # This happens as a second pass, so that all fieldsets have their\n # fields array replaced with a fields dictionary.\n for schema_name in fields_nested:\n schema = fields_nested[schema_name]\n duplicate_reusable_fieldsets(schema, fields_nested)\n cleanup_fields_recursive(fields_nested, \"\")\n\n\ndef duplicate_reusable_fieldsets(schema, fields_nested):\n \"\"\"Copies reusable field definitions to their expected places\"\"\"\n # Note: across this schema reader, functions are modifying dictionaries passed\n # as arguments, which is usually a risk of unintended side effects.\n # Here it simplifies the nesting of 'group' under 'user',\n # which is in turn reusable in a few places.\n if 'reusable' in schema:\n for new_nesting in schema['reusable']['expected']:\n split_flat_name = new_nesting.split('.')\n top_level = split_flat_name[0]\n # List field set names expected under another field set.\n # E.g. host.nestings = [ 'geo', 'os', 'user' ]\n nested_schema = fields_nested[top_level]['fields']\n for level in split_flat_name[1:]:\n nested_schema = nested_schema.get(level, None)\n if not nested_schema:\n raise ValueError('Field {} in path {} not found in schema'.format(level, new_nesting))\n if nested_schema.get('reusable', None):\n raise ValueError(\n 'Reusable fields cannot be put inside other reusable fields except when the destination reusable is at the top level')\n nested_schema = nested_schema.setdefault('fields', {})\n nested_schema[schema['name']] = schema\n\n\ndef cleanup_fields_recursive(fields, prefix, original_fieldset=None):\n for (name, field) in fields.items():\n # Copy field here so reusable field sets become unique copies instead of references to the original set\n field = field.copy()\n fields[name] = field\n temp_original_fieldset = name if ('reusable' in field and prefix != \"\") else original_fieldset\n if 'field_details' in field:\n # Deep copy the field details so we can insert different flat names for each reusable fieldset\n field_details = copy.deepcopy(field['field_details'])\n new_flat_name = prefix + name\n field_details['flat_name'] = new_flat_name\n field_details['dashed_name'] = new_flat_name.replace('.', '-').replace('_', '-')\n if temp_original_fieldset:\n field_details['original_fieldset'] = temp_original_fieldset\n ecs_helpers.dict_clean_string_values(field_details)\n field_set_defaults(field_details)\n field['field_details'] = field_details\n if 'fields' in field:\n field['fields'] = field['fields'].copy()\n new_prefix = prefix + name + \".\"\n if 'root' in field and field['root']:\n new_prefix = \"\"\n cleanup_fields_recursive(field['fields'], new_prefix, temp_original_fieldset)\n\n\ndef field_set_defaults(field):\n field.setdefault('normalize', [])\n if field['type'] == 'keyword':\n field.setdefault('ignore_above', 1024)\n if field['type'] == 'text':\n field.setdefault('norms', False)\n if field['type'] == 'object':\n field.setdefault('object_type', 'keyword')\n\n field.setdefault('short', field['description'])\n if 
\"\\n\" in field['short']:\n raise ValueError(\"Short descriptions must be single line.\\nField: {}\\n{}\".format(field['flat_name'], field))\n # print(\" Short descriptions must be single line. Field: {}\".format(field['flat_name']))\n\n if 'index' in field and not field['index']:\n field.setdefault('doc_values', False)\n if 'multi_fields' in field:\n field_set_multi_field_defaults(field)\n\n\ndef field_set_multi_field_defaults(parent_field):\n \"\"\"Sets defaults for each nested field in the multi_fields array\"\"\"\n for mf in parent_field['multi_fields']:\n mf.setdefault('name', mf['type'])\n if mf['type'] == 'text':\n mf.setdefault('norms', False)\n mf['flat_name'] = parent_field['flat_name'] + '.' + mf['name']\n\n\ndef find_nestings(fields, prefix):\n \"\"\"Recursively finds all reusable fields in the fields dictionary.\"\"\"\n nestings = []\n for field_name, field in fields.items():\n if 'reusable' in field:\n nestings.append(prefix + field_name)\n if 'fields' in field:\n nestings.extend(find_nestings(field['fields'], prefix + field_name + '.'))\n return nestings\n\n\ndef generate_partially_flattened_fields(fields_nested):\n flat_fields = {}\n for (name, field) in fields_nested.items():\n # assigning field.copy() adds all the top level schema fields, has to be a copy since we're about\n # to reassign the 'fields' key and we don't want to modify fields_nested\n flat_fields[name] = field.copy()\n flat_fields[name]['fields'] = flatten_fields(field['fields'], \"\")\n return flat_fields\n\n\ndef generate_fully_flattened_fields(fields_nested):\n flattened = flatten_fields(remove_non_root_reusables(fields_nested), \"\")\n return flattened\n\n\ndef remove_non_root_reusables(fields_nested):\n fields = {}\n for (name, field) in fields_nested.items():\n if 'reusable' not in field or ('reusable' in field and field['reusable']['top_level']):\n fields[name] = field\n return fields\n\n\ndef flatten_fields(fields, key_prefix):\n flat_fields = {}\n for (name, field) in fields.items():\n new_key = key_prefix + name\n if 'field_details' in field:\n flat_fields[new_key] = field['field_details'].copy()\n if 'fields' in field:\n new_prefix = new_key + \".\"\n if 'root' in field and field['root']:\n new_prefix = \"\"\n flat_fields.update(flatten_fields(field['fields'], new_prefix))\n return flat_fields\n", "path": "scripts/schema_reader.py"}]} | 4,080 | 262 |
gh_patches_debug_22902 | rasdani/github-patches | git_diff | Lightning-AI__pytorch-lightning-720 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
W&B: Allow for passing experiment into the WandbLogger (and logging semantics)
Currently, the WandbLogger will automatically create a new internal experiment (run) whenever you create a new WandbLogger.
# Issue
If I instantiate a wandb experiment outside of the logger, then I will have two experiments when I train my model since there is no way to set the internal experiment of the WandbLogger to my current external experiment.
# Potential Solution
Allow for passing an experiment into the WandbLogger:
```
class WandbLogger(LightningLoggerBase):
def __init__(self, name=None, save_dir=None, offline=False, id=None, anonymous=False,
version=None, project=None, tags=None, experiment=None):
.
.
.
self._experiment = experiment
```
Then I can do this:
```
experiment = wandb.init(.......)
wandb_logger = WandbLogger(experiment=experiment)
```
I made this change locally; however, I wasn't sure whether this was something you also wanted to implement. It works for me.
# Another small note
In the `WandbLogger.log_metrics` function, I would change:
`self.experiment.history.add(metrics)` --> `self.experiment.log(metrics)`
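
For illustration, a minimal end-to-end sketch of how the proposed `experiment` argument would be used together with the `Trainer` (the `wandb.init` arguments below are made-up placeholders, not part of the proposal):

```
import wandb

from pytorch_lightning import Trainer
from pytorch_lightning.logging import WandbLogger

# Reuse a run created outside the logger instead of letting WandbLogger
# call wandb.init() a second time and produce a duplicate experiment.
experiment = wandb.init(project="my-project", tags=["example"])  # placeholder args
wandb_logger = WandbLogger(experiment=experiment)

trainer = Trainer(logger=wandb_logger)
# trainer.fit(model)  # metrics would then flow through experiment.log(...)
```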
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pytorch_lightning/logging/wandb.py`
Content:
```
1 import os
2
3 try:
4 import wandb
5 except ImportError:
6 raise ImportError('Missing wandb package.')
7
8 from .base import LightningLoggerBase, rank_zero_only
9
10
11 class WandbLogger(LightningLoggerBase):
12 """
13 Logger for W&B.
14
15 Args:
16 name (str): display name for the run.
17 save_dir (str): path where data is saved.
18 offline (bool): run offline (data can be streamed later to wandb servers).
19 id or version (str): sets the version, mainly used to resume a previous run.
20 anonymous (bool): enables or explicitly disables anonymous logging.
21 project (str): the name of the project to which this run will belong.
22 tags (list of str): tags associated with this run.
23
24 Example
25 --------
26 .. code-block:: python
27
28 from pytorch_lightning.logging import WandbLogger
29 from pytorch_lightning import Trainer
30
31 wandb_logger = WandbLogger()
32 trainer = Trainer(logger=wandb_logger)
33 """
34
35 def __init__(self, name=None, save_dir=None, offline=False, id=None, anonymous=False,
36 version=None, project=None, tags=None):
37 super().__init__()
38 self._name = name
39 self._save_dir = save_dir
40 self._anonymous = "allow" if anonymous else None
41 self._id = version or id
42 self._tags = tags
43 self._project = project
44 self._experiment = None
45 self._offline = offline
46
47 def __getstate__(self):
48 state = self.__dict__.copy()
49 # cannot be pickled
50 state['_experiment'] = None
51 # args needed to reload correct experiment
52 state['_id'] = self.experiment.id
53 return state
54
55 @property
56 def experiment(self):
57 r"""
58
59 Actual wandb object. To use wandb features do the following.
60
61 Example::
62
63 self.logger.experiment.some_wandb_function()
64
65 """
66 if self._experiment is None:
67 if self._offline:
68 os.environ["WANDB_MODE"] = "dryrun"
69 self._experiment = wandb.init(
70 name=self._name, dir=self._save_dir, project=self._project, anonymous=self._anonymous,
71 id=self._id, resume="allow", tags=self._tags)
72 return self._experiment
73
74 def watch(self, model, log="gradients", log_freq=100):
75 wandb.watch(model, log, log_freq)
76
77 @rank_zero_only
78 def log_hyperparams(self, params):
79 self.experiment.config.update(params)
80
81 @rank_zero_only
82 def log_metrics(self, metrics, step=None):
83 metrics["global_step"] = step
84 self.experiment.history.add(metrics)
85
86 def save(self):
87 pass
88
89 @rank_zero_only
90 def finalize(self, status='success'):
91 try:
92 exit_code = 0 if status == 'success' else 1
93 wandb.join(exit_code)
94 except TypeError:
95 wandb.join()
96
97 @property
98 def name(self):
99 return self.experiment.project_name()
100
101 @property
102 def version(self):
103 return self.experiment.id
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pytorch_lightning/logging/wandb.py b/pytorch_lightning/logging/wandb.py
--- a/pytorch_lightning/logging/wandb.py
+++ b/pytorch_lightning/logging/wandb.py
@@ -33,7 +33,7 @@
"""
def __init__(self, name=None, save_dir=None, offline=False, id=None, anonymous=False,
- version=None, project=None, tags=None):
+ version=None, project=None, tags=None, experiment=None):
super().__init__()
self._name = name
self._save_dir = save_dir
@@ -41,7 +41,7 @@
self._id = version or id
self._tags = tags
self._project = project
- self._experiment = None
+ self._experiment = experiment
self._offline = offline
def __getstate__(self):
@@ -81,7 +81,7 @@
@rank_zero_only
def log_metrics(self, metrics, step=None):
metrics["global_step"] = step
- self.experiment.history.add(metrics)
+ self.experiment.log(metrics)
def save(self):
pass
| {"golden_diff": "diff --git a/pytorch_lightning/logging/wandb.py b/pytorch_lightning/logging/wandb.py\n--- a/pytorch_lightning/logging/wandb.py\n+++ b/pytorch_lightning/logging/wandb.py\n@@ -33,7 +33,7 @@\n \"\"\"\n \n def __init__(self, name=None, save_dir=None, offline=False, id=None, anonymous=False,\n- version=None, project=None, tags=None):\n+ version=None, project=None, tags=None, experiment=None):\n super().__init__()\n self._name = name\n self._save_dir = save_dir\n@@ -41,7 +41,7 @@\n self._id = version or id\n self._tags = tags\n self._project = project\n- self._experiment = None\n+ self._experiment = experiment\n self._offline = offline\n \n def __getstate__(self):\n@@ -81,7 +81,7 @@\n @rank_zero_only\n def log_metrics(self, metrics, step=None):\n metrics[\"global_step\"] = step\n- self.experiment.history.add(metrics)\n+ self.experiment.log(metrics)\n \n def save(self):\n pass\n", "issue": "W&B: Allow for passing experiment into the WandbLogger (and logging semantics)\nCurrently, the WandbLogger will automatically create a new internal experiment (run) whenever you create a new WandbLogger.\r\n\r\n# Issue \r\n\r\nIf I instantiate a wandb experiment outside of the logger, then I will have two experiments when I train my model since there is no way to set the internal experiment of the WandbLogger to my current external experiment.\r\n\r\n# Potential Solution\r\nAllow for passing an experiment into the WandbLogger:\r\n\r\n```\r\nclass WandbLogger(LightningLoggerBase):\r\n def __init__(self, name=None, save_dir=None, offline=False, id=None, anonymous=False,\r\n version=None, project=None, tags=None, experiment=None):\r\n .\r\n .\r\n .\r\n self._experiment = experiment\r\n```\r\nThen I can do this:\r\n\r\n```\r\nexperiment = wandb.init(.......)\r\nwandb_logger = WandbLogger(experiment=experiment)\r\n```\r\nI made this change locally, however, I wasn't sure if this was something you also wanted to implement as well. It works for me.\r\n\r\n# Another small note\r\n\r\nIn the `WandbLogger.log_metrics` function, I would change:\r\n\r\n`self.experiment.history.add(metrics)` --> `self.experiment.log(metrics)`\n", "before_files": [{"content": "import os\n\ntry:\n import wandb\nexcept ImportError:\n raise ImportError('Missing wandb package.')\n\nfrom .base import LightningLoggerBase, rank_zero_only\n\n\nclass WandbLogger(LightningLoggerBase):\n \"\"\"\n Logger for W&B.\n\n Args:\n name (str): display name for the run.\n save_dir (str): path where data is saved.\n offline (bool): run offline (data can be streamed later to wandb servers).\n id or version (str): sets the version, mainly used to resume a previous run.\n anonymous (bool): enables or explicitly disables anonymous logging.\n project (str): the name of the project to which this run will belong.\n tags (list of str): tags associated with this run.\n\n Example\n --------\n .. 
code-block:: python\n\n from pytorch_lightning.logging import WandbLogger\n from pytorch_lightning import Trainer\n\n wandb_logger = WandbLogger()\n trainer = Trainer(logger=wandb_logger)\n \"\"\"\n\n def __init__(self, name=None, save_dir=None, offline=False, id=None, anonymous=False,\n version=None, project=None, tags=None):\n super().__init__()\n self._name = name\n self._save_dir = save_dir\n self._anonymous = \"allow\" if anonymous else None\n self._id = version or id\n self._tags = tags\n self._project = project\n self._experiment = None\n self._offline = offline\n\n def __getstate__(self):\n state = self.__dict__.copy()\n # cannot be pickled\n state['_experiment'] = None\n # args needed to reload correct experiment\n state['_id'] = self.experiment.id\n return state\n\n @property\n def experiment(self):\n r\"\"\"\n\n Actual wandb object. To use wandb features do the following.\n\n Example::\n\n self.logger.experiment.some_wandb_function()\n\n \"\"\"\n if self._experiment is None:\n if self._offline:\n os.environ[\"WANDB_MODE\"] = \"dryrun\"\n self._experiment = wandb.init(\n name=self._name, dir=self._save_dir, project=self._project, anonymous=self._anonymous,\n id=self._id, resume=\"allow\", tags=self._tags)\n return self._experiment\n\n def watch(self, model, log=\"gradients\", log_freq=100):\n wandb.watch(model, log, log_freq)\n\n @rank_zero_only\n def log_hyperparams(self, params):\n self.experiment.config.update(params)\n\n @rank_zero_only\n def log_metrics(self, metrics, step=None):\n metrics[\"global_step\"] = step\n self.experiment.history.add(metrics)\n\n def save(self):\n pass\n\n @rank_zero_only\n def finalize(self, status='success'):\n try:\n exit_code = 0 if status == 'success' else 1\n wandb.join(exit_code)\n except TypeError:\n wandb.join()\n\n @property\n def name(self):\n return self.experiment.project_name()\n\n @property\n def version(self):\n return self.experiment.id\n", "path": "pytorch_lightning/logging/wandb.py"}], "after_files": [{"content": "import os\n\ntry:\n import wandb\nexcept ImportError:\n raise ImportError('Missing wandb package.')\n\nfrom .base import LightningLoggerBase, rank_zero_only\n\n\nclass WandbLogger(LightningLoggerBase):\n \"\"\"\n Logger for W&B.\n\n Args:\n name (str): display name for the run.\n save_dir (str): path where data is saved.\n offline (bool): run offline (data can be streamed later to wandb servers).\n id or version (str): sets the version, mainly used to resume a previous run.\n anonymous (bool): enables or explicitly disables anonymous logging.\n project (str): the name of the project to which this run will belong.\n tags (list of str): tags associated with this run.\n\n Example\n --------\n .. 
code-block:: python\n\n from pytorch_lightning.logging import WandbLogger\n from pytorch_lightning import Trainer\n\n wandb_logger = WandbLogger()\n trainer = Trainer(logger=wandb_logger)\n \"\"\"\n\n def __init__(self, name=None, save_dir=None, offline=False, id=None, anonymous=False,\n version=None, project=None, tags=None, experiment=None):\n super().__init__()\n self._name = name\n self._save_dir = save_dir\n self._anonymous = \"allow\" if anonymous else None\n self._id = version or id\n self._tags = tags\n self._project = project\n self._experiment = experiment\n self._offline = offline\n\n def __getstate__(self):\n state = self.__dict__.copy()\n # cannot be pickled\n state['_experiment'] = None\n # args needed to reload correct experiment\n state['_id'] = self.experiment.id\n return state\n\n @property\n def experiment(self):\n r\"\"\"\n\n Actual wandb object. To use wandb features do the following.\n\n Example::\n\n self.logger.experiment.some_wandb_function()\n\n \"\"\"\n if self._experiment is None:\n if self._offline:\n os.environ[\"WANDB_MODE\"] = \"dryrun\"\n self._experiment = wandb.init(\n name=self._name, dir=self._save_dir, project=self._project, anonymous=self._anonymous,\n id=self._id, resume=\"allow\", tags=self._tags)\n return self._experiment\n\n def watch(self, model, log=\"gradients\", log_freq=100):\n wandb.watch(model, log, log_freq)\n\n @rank_zero_only\n def log_hyperparams(self, params):\n self.experiment.config.update(params)\n\n @rank_zero_only\n def log_metrics(self, metrics, step=None):\n metrics[\"global_step\"] = step\n self.experiment.log(metrics)\n\n def save(self):\n pass\n\n @rank_zero_only\n def finalize(self, status='success'):\n try:\n exit_code = 0 if status == 'success' else 1\n wandb.join(exit_code)\n except TypeError:\n wandb.join()\n\n @property\n def name(self):\n return self.experiment.project_name()\n\n @property\n def version(self):\n return self.experiment.id\n", "path": "pytorch_lightning/logging/wandb.py"}]} | 1,425 | 262 |
gh_patches_debug_8178 | rasdani/github-patches | git_diff | kartoza__prj.app-1077 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Please include project name in certifying org admin
<img width="895" alt="image" src="https://user-images.githubusercontent.com/178003/67474466-6ab98000-f64c-11e9-8a69-95e12e8bd404.png">
We have two Kartoza entries there - one for InaSAFE and one for QGIS - and I nearly deleted the first, not realising it was for InaSAFE.
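
A minimal sketch of the change being asked for - showing the owning project in the changelist so same-named organisations can be told apart (field names follow the admin module listed below):

```
from simple_history.admin import SimpleHistoryAdmin


class CertifyingOrganisationAdmin(SimpleHistoryAdmin):
    """Certifying organisation admin model."""

    # 'project' added so each row shows whether the entry belongs to
    # InaSAFE, QGIS, etc.
    list_display = ('name', 'project', 'country', 'approved', 'rejected')
```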
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django_project/certification/admin.py`
Content:
```
1 # coding=UTF-8
2 """Model admin class definitions."""
3
4 from django.contrib.gis import admin
5 from simple_history.admin import SimpleHistoryAdmin
6 from certification.models.certificate import Certificate
7 from certification.models.course import Course
8 from certification.models.training_center import TrainingCenter
9 from certification.models.course_convener import CourseConvener
10 from certification.models.course_type import CourseType
11 from certification.models.attendee import Attendee
12 from certification.models.course_attendee import CourseAttendee
13 from certification.models.certifying_organisation import CertifyingOrganisation
14 from certification.models.organisation_certificate import \
15 CertifyingOrganisationCertificate
16 from certification.models.status import Status
17
18
19 class CertificateAdmin(admin.ModelAdmin):
20 """Certificate admin model."""
21
22 list_display = ('__unicode__', 'course')
23 search_fields = ('certificateID', 'course__name',)
24
25 def queryset(self, request):
26 """Ensure we use the correct manager.
27
28 :param request: HttpRequest object
29 """
30 query_set = self.model.objects
31 ordering = self.get_ordering(request)
32 if ordering:
33 query_set = query_set.order_by(*ordering)
34 return query_set
35
36
37 class AttendeeAdmin(admin.ModelAdmin):
38 """Attendee admin model."""
39 list_display = ('firstname', 'surname', 'email', 'certifying_organisation')
40 search_fields = ['firstname', 'surname']
41
42 def queryset(self, request):
43 """Ensure we use the correct manager.
44
45 :param request: HttpRequest object
46 """
47 query_set = self.model.objects
48 ordering = self.get_ordering(request)
49 if ordering:
50 query_set = query_set.order_by(*ordering)
51 return query_set
52
53
54 class CourseAttendeeAdmin(admin.ModelAdmin):
55 """Certificate admin model."""
56 list_display = ('course', 'attendee', 'author')
57
58 def queryset(self, request):
59 """Ensure we use the correct manager.
60
61 :param request: HttpRequest object
62 """
63 query_set = self.model.objects
64 ordering = self.get_ordering(request)
65 if ordering:
66 query_set = query_set.order_by(*ordering)
67 return query_set
68
69
70 class CourseAdmin(admin.ModelAdmin):
71 """Course admin model."""
72
73 def queryset(self, request):
74 """Ensure we use the correct manager.
75
76 :param request: HttpRequest object
77 """
78 query_set = self.model.objects
79 ordering = self.get_ordering(request)
80 if ordering:
81 query_set = query_set.order_by(*ordering)
82 return query_set
83
84
85 class CourseTypeAdmin(admin.ModelAdmin):
86 """Course type admin model."""
87
88 def queryset(self, request):
89 """Ensure we use the correct manager.
90
91 :param request: HttpRequest object
92 """
93 query_set = self.model.objects
94 ordering = self.get_ordering(request)
95 if ordering:
96 query_set = query_set.order_by(*ordering)
97 return query_set
98
99
100 class TrainingCenterAdmin(admin.GeoModelAdmin):
101 """Training center admin model."""
102
103 def queryset(self, request):
104 """Ensure we use the correct manager.
105
106 :param request: HttpRequest object
107 """
108 query_set = self.model.objects
109 ordering = self.get_ordering(request)
110 if ordering:
111 query_set = query_set.order_by(*ordering)
112 return query_set
113
114
115 class CourseConvenerAdmin(admin.ModelAdmin):
116 """Course convener admin model."""
117
118 def queryset(self, request):
119 """Ensure we use the correct manager.
120
121 :param request: HttpRequest object
122 """
123 query_set = self.model.objects
124 ordering = self.get_ordering(request)
125 if ordering:
126 query_set = query_set.order_by(*ordering)
127 return query_set
128
129
130 class CertifyingOrganisationCertificateAdminInline(admin.TabularInline):
131 model = CertifyingOrganisationCertificate
132 extra = 0
133
134
135 class CertifyingOrganisationCertificateAdmin(SimpleHistoryAdmin):
136 history_list_display = ['issued', 'valid']
137
138
139 class CertifyingOrganisationAdmin(SimpleHistoryAdmin):
140 """Certifying organisation admin model."""
141
142 filter_horizontal = ('organisation_owners',)
143 search_fields = ['name']
144 list_display = ('name', 'country', 'approved', 'rejected')
145 list_filter = ('country', 'approved', 'rejected', 'status')
146 inlines = (CertifyingOrganisationCertificateAdminInline, )
147 history_list_display = ['status', 'remarks']
148
149 def queryset(self, request):
150 """Ensure we use the correct manager.
151
152 :param request: HttpRequest object
153 """
154 query_set = self.model.objects
155 ordering = self.get_ordering(request)
156 if ordering:
157 query_set = query_set.order_by(*ordering)
158 return query_set
159
160
161 class StatusAdmin(admin.ModelAdmin):
162 list_display = ('name', 'project', 'order')
163
164
165 admin.site.register(Certificate, CertificateAdmin)
166 admin.site.register(Attendee, AttendeeAdmin)
167 admin.site.register(Course, CourseAdmin)
168 admin.site.register(CourseType, CourseTypeAdmin)
169 admin.site.register(TrainingCenter, TrainingCenterAdmin)
170 admin.site.register(CourseConvener, CourseConvenerAdmin)
171 admin.site.register(CertifyingOrganisation, CertifyingOrganisationAdmin)
172 admin.site.register(CourseAttendee, CourseAttendeeAdmin)
173 admin.site.register(
174 CertifyingOrganisationCertificate, CertifyingOrganisationCertificateAdmin)
175 admin.site.register(Status, StatusAdmin)
176
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django_project/certification/admin.py b/django_project/certification/admin.py
--- a/django_project/certification/admin.py
+++ b/django_project/certification/admin.py
@@ -141,7 +141,7 @@
filter_horizontal = ('organisation_owners',)
search_fields = ['name']
- list_display = ('name', 'country', 'approved', 'rejected')
+ list_display = ('name', 'project', 'country', 'approved', 'rejected')
list_filter = ('country', 'approved', 'rejected', 'status')
inlines = (CertifyingOrganisationCertificateAdminInline, )
history_list_display = ['status', 'remarks']
| {"golden_diff": "diff --git a/django_project/certification/admin.py b/django_project/certification/admin.py\n--- a/django_project/certification/admin.py\n+++ b/django_project/certification/admin.py\n@@ -141,7 +141,7 @@\n \n filter_horizontal = ('organisation_owners',)\n search_fields = ['name']\n- list_display = ('name', 'country', 'approved', 'rejected')\n+ list_display = ('name', 'project', 'country', 'approved', 'rejected')\n list_filter = ('country', 'approved', 'rejected', 'status')\n inlines = (CertifyingOrganisationCertificateAdminInline, )\n history_list_display = ['status', 'remarks']\n", "issue": "Please include project name in certifying org admin\n<img width=\"895\" alt=\"image\" src=\"https://user-images.githubusercontent.com/178003/67474466-6ab98000-f64c-11e9-8a69-95e12e8bd404.png\">\r\n\r\n\r\nWe have two Kartoza entries there - one for InaSAFE and one for QGIS - I nearly deleted the first not realising it was for InaSAFE\n", "before_files": [{"content": "# coding=UTF-8\n\"\"\"Model admin class definitions.\"\"\"\n\nfrom django.contrib.gis import admin\nfrom simple_history.admin import SimpleHistoryAdmin\nfrom certification.models.certificate import Certificate\nfrom certification.models.course import Course\nfrom certification.models.training_center import TrainingCenter\nfrom certification.models.course_convener import CourseConvener\nfrom certification.models.course_type import CourseType\nfrom certification.models.attendee import Attendee\nfrom certification.models.course_attendee import CourseAttendee\nfrom certification.models.certifying_organisation import CertifyingOrganisation\nfrom certification.models.organisation_certificate import \\\n CertifyingOrganisationCertificate\nfrom certification.models.status import Status\n\n\nclass CertificateAdmin(admin.ModelAdmin):\n \"\"\"Certificate admin model.\"\"\"\n\n list_display = ('__unicode__', 'course')\n search_fields = ('certificateID', 'course__name',)\n\n def queryset(self, request):\n \"\"\"Ensure we use the correct manager.\n\n :param request: HttpRequest object\n \"\"\"\n query_set = self.model.objects\n ordering = self.get_ordering(request)\n if ordering:\n query_set = query_set.order_by(*ordering)\n return query_set\n\n\nclass AttendeeAdmin(admin.ModelAdmin):\n \"\"\"Attendee admin model.\"\"\"\n list_display = ('firstname', 'surname', 'email', 'certifying_organisation')\n search_fields = ['firstname', 'surname']\n\n def queryset(self, request):\n \"\"\"Ensure we use the correct manager.\n\n :param request: HttpRequest object\n \"\"\"\n query_set = self.model.objects\n ordering = self.get_ordering(request)\n if ordering:\n query_set = query_set.order_by(*ordering)\n return query_set\n\n\nclass CourseAttendeeAdmin(admin.ModelAdmin):\n \"\"\"Certificate admin model.\"\"\"\n list_display = ('course', 'attendee', 'author')\n\n def queryset(self, request):\n \"\"\"Ensure we use the correct manager.\n\n :param request: HttpRequest object\n \"\"\"\n query_set = self.model.objects\n ordering = self.get_ordering(request)\n if ordering:\n query_set = query_set.order_by(*ordering)\n return query_set\n\n\nclass CourseAdmin(admin.ModelAdmin):\n \"\"\"Course admin model.\"\"\"\n\n def queryset(self, request):\n \"\"\"Ensure we use the correct manager.\n\n :param request: HttpRequest object\n \"\"\"\n query_set = self.model.objects\n ordering = self.get_ordering(request)\n if ordering:\n query_set = query_set.order_by(*ordering)\n return query_set\n\n\nclass CourseTypeAdmin(admin.ModelAdmin):\n \"\"\"Course type admin 
model.\"\"\"\n\n def queryset(self, request):\n \"\"\"Ensure we use the correct manager.\n\n :param request: HttpRequest object\n \"\"\"\n query_set = self.model.objects\n ordering = self.get_ordering(request)\n if ordering:\n query_set = query_set.order_by(*ordering)\n return query_set\n\n\nclass TrainingCenterAdmin(admin.GeoModelAdmin):\n \"\"\"Training center admin model.\"\"\"\n\n def queryset(self, request):\n \"\"\"Ensure we use the correct manager.\n\n :param request: HttpRequest object\n \"\"\"\n query_set = self.model.objects\n ordering = self.get_ordering(request)\n if ordering:\n query_set = query_set.order_by(*ordering)\n return query_set\n\n\nclass CourseConvenerAdmin(admin.ModelAdmin):\n \"\"\"Course convener admin model.\"\"\"\n\n def queryset(self, request):\n \"\"\"Ensure we use the correct manager.\n\n :param request: HttpRequest object\n \"\"\"\n query_set = self.model.objects\n ordering = self.get_ordering(request)\n if ordering:\n query_set = query_set.order_by(*ordering)\n return query_set\n\n\nclass CertifyingOrganisationCertificateAdminInline(admin.TabularInline):\n model = CertifyingOrganisationCertificate\n extra = 0\n\n\nclass CertifyingOrganisationCertificateAdmin(SimpleHistoryAdmin):\n history_list_display = ['issued', 'valid']\n\n\nclass CertifyingOrganisationAdmin(SimpleHistoryAdmin):\n \"\"\"Certifying organisation admin model.\"\"\"\n\n filter_horizontal = ('organisation_owners',)\n search_fields = ['name']\n list_display = ('name', 'country', 'approved', 'rejected')\n list_filter = ('country', 'approved', 'rejected', 'status')\n inlines = (CertifyingOrganisationCertificateAdminInline, )\n history_list_display = ['status', 'remarks']\n\n def queryset(self, request):\n \"\"\"Ensure we use the correct manager.\n\n :param request: HttpRequest object\n \"\"\"\n query_set = self.model.objects\n ordering = self.get_ordering(request)\n if ordering:\n query_set = query_set.order_by(*ordering)\n return query_set\n\n\nclass StatusAdmin(admin.ModelAdmin):\n list_display = ('name', 'project', 'order')\n\n\nadmin.site.register(Certificate, CertificateAdmin)\nadmin.site.register(Attendee, AttendeeAdmin)\nadmin.site.register(Course, CourseAdmin)\nadmin.site.register(CourseType, CourseTypeAdmin)\nadmin.site.register(TrainingCenter, TrainingCenterAdmin)\nadmin.site.register(CourseConvener, CourseConvenerAdmin)\nadmin.site.register(CertifyingOrganisation, CertifyingOrganisationAdmin)\nadmin.site.register(CourseAttendee, CourseAttendeeAdmin)\nadmin.site.register(\n CertifyingOrganisationCertificate, CertifyingOrganisationCertificateAdmin)\nadmin.site.register(Status, StatusAdmin)\n", "path": "django_project/certification/admin.py"}], "after_files": [{"content": "# coding=UTF-8\n\"\"\"Model admin class definitions.\"\"\"\n\nfrom django.contrib.gis import admin\nfrom simple_history.admin import SimpleHistoryAdmin\nfrom certification.models.certificate import Certificate\nfrom certification.models.course import Course\nfrom certification.models.training_center import TrainingCenter\nfrom certification.models.course_convener import CourseConvener\nfrom certification.models.course_type import CourseType\nfrom certification.models.attendee import Attendee\nfrom certification.models.course_attendee import CourseAttendee\nfrom certification.models.certifying_organisation import CertifyingOrganisation\nfrom certification.models.organisation_certificate import \\\n CertifyingOrganisationCertificate\nfrom certification.models.status import Status\n\n\nclass 
CertificateAdmin(admin.ModelAdmin):\n \"\"\"Certificate admin model.\"\"\"\n\n list_display = ('__unicode__', 'course')\n search_fields = ('certificateID', 'course__name',)\n\n def queryset(self, request):\n \"\"\"Ensure we use the correct manager.\n\n :param request: HttpRequest object\n \"\"\"\n query_set = self.model.objects\n ordering = self.get_ordering(request)\n if ordering:\n query_set = query_set.order_by(*ordering)\n return query_set\n\n\nclass AttendeeAdmin(admin.ModelAdmin):\n \"\"\"Attendee admin model.\"\"\"\n list_display = ('firstname', 'surname', 'email', 'certifying_organisation')\n search_fields = ['firstname', 'surname']\n\n def queryset(self, request):\n \"\"\"Ensure we use the correct manager.\n\n :param request: HttpRequest object\n \"\"\"\n query_set = self.model.objects\n ordering = self.get_ordering(request)\n if ordering:\n query_set = query_set.order_by(*ordering)\n return query_set\n\n\nclass CourseAttendeeAdmin(admin.ModelAdmin):\n \"\"\"Certificate admin model.\"\"\"\n list_display = ('course', 'attendee', 'author')\n\n def queryset(self, request):\n \"\"\"Ensure we use the correct manager.\n\n :param request: HttpRequest object\n \"\"\"\n query_set = self.model.objects\n ordering = self.get_ordering(request)\n if ordering:\n query_set = query_set.order_by(*ordering)\n return query_set\n\n\nclass CourseAdmin(admin.ModelAdmin):\n \"\"\"Course admin model.\"\"\"\n\n def queryset(self, request):\n \"\"\"Ensure we use the correct manager.\n\n :param request: HttpRequest object\n \"\"\"\n query_set = self.model.objects\n ordering = self.get_ordering(request)\n if ordering:\n query_set = query_set.order_by(*ordering)\n return query_set\n\n\nclass CourseTypeAdmin(admin.ModelAdmin):\n \"\"\"Course type admin model.\"\"\"\n\n def queryset(self, request):\n \"\"\"Ensure we use the correct manager.\n\n :param request: HttpRequest object\n \"\"\"\n query_set = self.model.objects\n ordering = self.get_ordering(request)\n if ordering:\n query_set = query_set.order_by(*ordering)\n return query_set\n\n\nclass TrainingCenterAdmin(admin.GeoModelAdmin):\n \"\"\"Training center admin model.\"\"\"\n\n def queryset(self, request):\n \"\"\"Ensure we use the correct manager.\n\n :param request: HttpRequest object\n \"\"\"\n query_set = self.model.objects\n ordering = self.get_ordering(request)\n if ordering:\n query_set = query_set.order_by(*ordering)\n return query_set\n\n\nclass CourseConvenerAdmin(admin.ModelAdmin):\n \"\"\"Course convener admin model.\"\"\"\n\n def queryset(self, request):\n \"\"\"Ensure we use the correct manager.\n\n :param request: HttpRequest object\n \"\"\"\n query_set = self.model.objects\n ordering = self.get_ordering(request)\n if ordering:\n query_set = query_set.order_by(*ordering)\n return query_set\n\n\nclass CertifyingOrganisationCertificateAdminInline(admin.TabularInline):\n model = CertifyingOrganisationCertificate\n extra = 0\n\n\nclass CertifyingOrganisationCertificateAdmin(SimpleHistoryAdmin):\n history_list_display = ['issued', 'valid']\n\n\nclass CertifyingOrganisationAdmin(SimpleHistoryAdmin):\n \"\"\"Certifying organisation admin model.\"\"\"\n\n filter_horizontal = ('organisation_owners',)\n search_fields = ['name']\n list_display = ('name', 'project', 'country', 'approved', 'rejected')\n list_filter = ('country', 'approved', 'rejected', 'status')\n inlines = (CertifyingOrganisationCertificateAdminInline, )\n history_list_display = ['status', 'remarks']\n\n def queryset(self, request):\n \"\"\"Ensure we use the correct manager.\n\n :param 
request: HttpRequest object\n \"\"\"\n query_set = self.model.objects\n ordering = self.get_ordering(request)\n if ordering:\n query_set = query_set.order_by(*ordering)\n return query_set\n\n\nclass StatusAdmin(admin.ModelAdmin):\n list_display = ('name', 'project', 'order')\n\n\nadmin.site.register(Certificate, CertificateAdmin)\nadmin.site.register(Attendee, AttendeeAdmin)\nadmin.site.register(Course, CourseAdmin)\nadmin.site.register(CourseType, CourseTypeAdmin)\nadmin.site.register(TrainingCenter, TrainingCenterAdmin)\nadmin.site.register(CourseConvener, CourseConvenerAdmin)\nadmin.site.register(CertifyingOrganisation, CertifyingOrganisationAdmin)\nadmin.site.register(CourseAttendee, CourseAttendeeAdmin)\nadmin.site.register(\n CertifyingOrganisationCertificate, CertifyingOrganisationCertificateAdmin)\nadmin.site.register(Status, StatusAdmin)\n", "path": "django_project/certification/admin.py"}]} | 1,921 | 153 |
gh_patches_debug_26619 | rasdani/github-patches | git_diff | benoitc__gunicorn-826 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Upcoming Tornado change breaks gunicorn.workers.gtornado
The next release of Tornado (it will be 4.0 when released, although the current master branch hasn't yet been updated to change all mentions of 3.3 to 4.0) makes some major changes to the HTTP internals and breaks gunicorn's monkey-patching of HTTPConnection.finish. Fortunately, there is now a cleaner way to do the tracking that gunicorn wants to do here: the interface between HTTPServer and Application is now formally defined by the HTTPServerConnectionDelegate and HTTPMessageDelegate interfaces, so you should be able to wrap the Application (which implements/subclasses HTTPServerConnectionDelegate) and the HTTPMessageDelegate that its start_request method returns.
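
For illustration, a rough sketch of how the request counting could be done on Tornado >= 4.0 without monkey-patching, by overriding the connection-close hook that HTTPServer inherits from HTTPServerConnectionDelegate (class and attribute names here are invented; this is one possible shape of the fix, and a hasattr/version check can keep the old monkey-patch for Tornado releases that still expose HTTPConnection):

```
import tornado.httpserver


class CountingHTTPServer(tornado.httpserver.HTTPServer):
    """Counts completed connections instead of patching HTTPConnection.finish."""

    def __init__(self, *args, **kwargs):
        self.finished = 0
        super(CountingHTTPServer, self).__init__(*args, **kwargs)

    def on_close(self, server_conn):
        # Called by Tornado when a connection closes; this replaces the old
        # HTTPConnection.finish monkey-patch as the "request done" signal.
        self.finished += 1
        super(CountingHTTPServer, self).on_close(server_conn)
```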
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gunicorn/workers/gtornado.py`
Content:
```
1 # -*- coding: utf-8 -
2 #
3 # This file is part of gunicorn released under the MIT license.
4 # See the NOTICE for more information.
5
6 import os
7 import sys
8
9 try:
10 import tornado.web
11 except ImportError:
12 raise RuntimeError("You need tornado installed to use this worker.")
13 import tornado.httpserver
14 from tornado.ioloop import IOLoop, PeriodicCallback
15 from tornado.wsgi import WSGIContainer
16 from gunicorn.workers.base import Worker
17 from gunicorn import __version__ as gversion
18
19
20 class TornadoWorker(Worker):
21
22 @classmethod
23 def setup(cls):
24 web = sys.modules.pop("tornado.web")
25 old_clear = web.RequestHandler.clear
26
27 def clear(self):
28 old_clear(self)
29 self._headers["Server"] += " (Gunicorn/%s)" % gversion
30 web.RequestHandler.clear = clear
31 sys.modules["tornado.web"] = web
32
33 def handle_exit(self, sig, frame):
34 if self.alive:
35 super(TornadoWorker, self).handle_exit(sig, frame)
36 self.stop()
37
38 def handle_request(self):
39 self.nr += 1
40 if self.alive and self.nr >= self.max_requests:
41 self.alive = False
42 self.log.info("Autorestarting worker after current request.")
43 self.stop()
44
45 def watchdog(self):
46 if self.alive:
47 self.notify()
48
49 if self.ppid != os.getppid():
50 self.log.info("Parent changed, shutting down: %s", self)
51 self.stop()
52
53 def run(self):
54 self.ioloop = IOLoop.instance()
55 self.alive = True
56 PeriodicCallback(self.watchdog, 1000, io_loop=self.ioloop).start()
57
58 # Assume the app is a WSGI callable if its not an
59 # instance of tornado.web.Application or is an
60 # instance of tornado.wsgi.WSGIApplication
61 app = self.wsgi
62 if not isinstance(app, tornado.web.Application) or \
63 isinstance(app, tornado.wsgi.WSGIApplication):
64 app = WSGIContainer(app)
65
66 # Monkey-patching HTTPConnection.finish to count the
67 # number of requests being handled by Tornado. This
68 # will help gunicorn shutdown the worker if max_requests
69 # is exceeded.
70 httpserver = sys.modules["tornado.httpserver"]
71 old_connection_finish = httpserver.HTTPConnection.finish
72
73 def finish(other):
74 self.handle_request()
75 old_connection_finish(other)
76 httpserver.HTTPConnection.finish = finish
77 sys.modules["tornado.httpserver"] = httpserver
78
79 if self.cfg.is_ssl:
80 server = tornado.httpserver.HTTPServer(app, io_loop=self.ioloop,
81 ssl_options=self.cfg.ssl_options)
82 else:
83 server = tornado.httpserver.HTTPServer(app,
84 io_loop=self.ioloop)
85
86 self.server = server
87
88 for s in self.sockets:
89 s.setblocking(0)
90 if hasattr(server, "add_socket"): # tornado > 2.0
91 server.add_socket(s)
92 elif hasattr(server, "_sockets"): # tornado 2.0
93 server._sockets[s.fileno()] = s
94
95 server.no_keep_alive = self.cfg.keepalive <= 0
96 server.start(num_processes=1)
97
98 self.ioloop.start()
99
100 def stop(self):
101 if hasattr(self, 'server'):
102 try:
103 self.server.stop()
104 except Exception:
105 pass
106 PeriodicCallback(self.stop_ioloop, 1000, io_loop=self.ioloop).start()
107
108 def stop_ioloop(self):
109 if not self.ioloop._callbacks and len(self.ioloop._timeouts) <= 1:
110 self.ioloop.stop()
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gunicorn/workers/gtornado.py b/gunicorn/workers/gtornado.py
--- a/gunicorn/workers/gtornado.py
+++ b/gunicorn/workers/gtornado.py
@@ -68,20 +68,31 @@
# will help gunicorn shutdown the worker if max_requests
# is exceeded.
httpserver = sys.modules["tornado.httpserver"]
- old_connection_finish = httpserver.HTTPConnection.finish
+ if hasattr(httpserver, 'HTTPConnection'):
+ old_connection_finish = httpserver.HTTPConnection.finish
- def finish(other):
- self.handle_request()
- old_connection_finish(other)
- httpserver.HTTPConnection.finish = finish
- sys.modules["tornado.httpserver"] = httpserver
+ def finish(other):
+ self.handle_request()
+ old_connection_finish(other)
+ httpserver.HTTPConnection.finish = finish
+ sys.modules["tornado.httpserver"] = httpserver
+
+ server_class = tornado.httpserver.HTTPServer
+ else:
+
+ class _HTTPServer(tornado.httpserver.HTTPServer):
+
+ def on_close(instance, server_conn):
+ self.handle_request()
+ super(_HTTPServer, instance).on_close(server_conn)
+
+ server_class = _HTTPServer
if self.cfg.is_ssl:
- server = tornado.httpserver.HTTPServer(app, io_loop=self.ioloop,
+ server = server_class(app, io_loop=self.ioloop,
ssl_options=self.cfg.ssl_options)
else:
- server = tornado.httpserver.HTTPServer(app,
- io_loop=self.ioloop)
+ server = server_class(app, io_loop=self.ioloop)
self.server = server
| {"golden_diff": "diff --git a/gunicorn/workers/gtornado.py b/gunicorn/workers/gtornado.py\n--- a/gunicorn/workers/gtornado.py\n+++ b/gunicorn/workers/gtornado.py\n@@ -68,20 +68,31 @@\n # will help gunicorn shutdown the worker if max_requests\n # is exceeded.\n httpserver = sys.modules[\"tornado.httpserver\"]\n- old_connection_finish = httpserver.HTTPConnection.finish\n+ if hasattr(httpserver, 'HTTPConnection'):\n+ old_connection_finish = httpserver.HTTPConnection.finish\n \n- def finish(other):\n- self.handle_request()\n- old_connection_finish(other)\n- httpserver.HTTPConnection.finish = finish\n- sys.modules[\"tornado.httpserver\"] = httpserver\n+ def finish(other):\n+ self.handle_request()\n+ old_connection_finish(other)\n+ httpserver.HTTPConnection.finish = finish\n+ sys.modules[\"tornado.httpserver\"] = httpserver\n+\n+ server_class = tornado.httpserver.HTTPServer\n+ else:\n+\n+ class _HTTPServer(tornado.httpserver.HTTPServer):\n+\n+ def on_close(instance, server_conn):\n+ self.handle_request()\n+ super(_HTTPServer, instance).on_close(server_conn)\n+\n+ server_class = _HTTPServer\n \n if self.cfg.is_ssl:\n- server = tornado.httpserver.HTTPServer(app, io_loop=self.ioloop,\n+ server = server_class(app, io_loop=self.ioloop,\n ssl_options=self.cfg.ssl_options)\n else:\n- server = tornado.httpserver.HTTPServer(app,\n- io_loop=self.ioloop)\n+ server = server_class(app, io_loop=self.ioloop)\n \n self.server = server\n", "issue": "Upcoming Tornado change breaks gunicorn.workers.gtornado\nThe next release of Tornado (will be 4.0 when released, although the current master branch hasn't been updated to change all mentions of 3.3 to 4.0) makes some major changes to the HTTP internals and breaks gunicorn's monkey-patching of HTTPConnection.finish. Fortunately, there is now a cleaner way to do the tracking that gunicorn wants to do here, since the interface between HTTPServer and Application is more formally defined by the HTTPServerConnectionDelegate and HTTPMessageDelegate interfaces, so you should be able to wrap the Application (which implements/subclasses HTTPServerConnectionDelegate) and the HTTPMessageDelegate its start_request method returns. 
\n\n", "before_files": [{"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n\nimport os\nimport sys\n\ntry:\n import tornado.web\nexcept ImportError:\n raise RuntimeError(\"You need tornado installed to use this worker.\")\nimport tornado.httpserver\nfrom tornado.ioloop import IOLoop, PeriodicCallback\nfrom tornado.wsgi import WSGIContainer\nfrom gunicorn.workers.base import Worker\nfrom gunicorn import __version__ as gversion\n\n\nclass TornadoWorker(Worker):\n\n @classmethod\n def setup(cls):\n web = sys.modules.pop(\"tornado.web\")\n old_clear = web.RequestHandler.clear\n\n def clear(self):\n old_clear(self)\n self._headers[\"Server\"] += \" (Gunicorn/%s)\" % gversion\n web.RequestHandler.clear = clear\n sys.modules[\"tornado.web\"] = web\n\n def handle_exit(self, sig, frame):\n if self.alive:\n super(TornadoWorker, self).handle_exit(sig, frame)\n self.stop()\n\n def handle_request(self):\n self.nr += 1\n if self.alive and self.nr >= self.max_requests:\n self.alive = False\n self.log.info(\"Autorestarting worker after current request.\")\n self.stop()\n\n def watchdog(self):\n if self.alive:\n self.notify()\n\n if self.ppid != os.getppid():\n self.log.info(\"Parent changed, shutting down: %s\", self)\n self.stop()\n\n def run(self):\n self.ioloop = IOLoop.instance()\n self.alive = True\n PeriodicCallback(self.watchdog, 1000, io_loop=self.ioloop).start()\n\n # Assume the app is a WSGI callable if its not an\n # instance of tornado.web.Application or is an\n # instance of tornado.wsgi.WSGIApplication\n app = self.wsgi\n if not isinstance(app, tornado.web.Application) or \\\n isinstance(app, tornado.wsgi.WSGIApplication):\n app = WSGIContainer(app)\n\n # Monkey-patching HTTPConnection.finish to count the\n # number of requests being handled by Tornado. 
This\n # will help gunicorn shutdown the worker if max_requests\n # is exceeded.\n httpserver = sys.modules[\"tornado.httpserver\"]\n old_connection_finish = httpserver.HTTPConnection.finish\n\n def finish(other):\n self.handle_request()\n old_connection_finish(other)\n httpserver.HTTPConnection.finish = finish\n sys.modules[\"tornado.httpserver\"] = httpserver\n\n if self.cfg.is_ssl:\n server = tornado.httpserver.HTTPServer(app, io_loop=self.ioloop,\n ssl_options=self.cfg.ssl_options)\n else:\n server = tornado.httpserver.HTTPServer(app,\n io_loop=self.ioloop)\n\n self.server = server\n\n for s in self.sockets:\n s.setblocking(0)\n if hasattr(server, \"add_socket\"): # tornado > 2.0\n server.add_socket(s)\n elif hasattr(server, \"_sockets\"): # tornado 2.0\n server._sockets[s.fileno()] = s\n\n server.no_keep_alive = self.cfg.keepalive <= 0\n server.start(num_processes=1)\n\n self.ioloop.start()\n\n def stop(self):\n if hasattr(self, 'server'):\n try:\n self.server.stop()\n except Exception:\n pass\n PeriodicCallback(self.stop_ioloop, 1000, io_loop=self.ioloop).start()\n\n def stop_ioloop(self):\n if not self.ioloop._callbacks and len(self.ioloop._timeouts) <= 1:\n self.ioloop.stop()\n", "path": "gunicorn/workers/gtornado.py"}], "after_files": [{"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n\nimport os\nimport sys\n\ntry:\n import tornado.web\nexcept ImportError:\n raise RuntimeError(\"You need tornado installed to use this worker.\")\nimport tornado.httpserver\nfrom tornado.ioloop import IOLoop, PeriodicCallback\nfrom tornado.wsgi import WSGIContainer\nfrom gunicorn.workers.base import Worker\nfrom gunicorn import __version__ as gversion\n\n\nclass TornadoWorker(Worker):\n\n @classmethod\n def setup(cls):\n web = sys.modules.pop(\"tornado.web\")\n old_clear = web.RequestHandler.clear\n\n def clear(self):\n old_clear(self)\n self._headers[\"Server\"] += \" (Gunicorn/%s)\" % gversion\n web.RequestHandler.clear = clear\n sys.modules[\"tornado.web\"] = web\n\n def handle_exit(self, sig, frame):\n if self.alive:\n super(TornadoWorker, self).handle_exit(sig, frame)\n self.stop()\n\n def handle_request(self):\n self.nr += 1\n if self.alive and self.nr >= self.max_requests:\n self.alive = False\n self.log.info(\"Autorestarting worker after current request.\")\n self.stop()\n\n def watchdog(self):\n if self.alive:\n self.notify()\n\n if self.ppid != os.getppid():\n self.log.info(\"Parent changed, shutting down: %s\", self)\n self.stop()\n\n def run(self):\n self.ioloop = IOLoop.instance()\n self.alive = True\n PeriodicCallback(self.watchdog, 1000, io_loop=self.ioloop).start()\n\n # Assume the app is a WSGI callable if its not an\n # instance of tornado.web.Application or is an\n # instance of tornado.wsgi.WSGIApplication\n app = self.wsgi\n if not isinstance(app, tornado.web.Application) or \\\n isinstance(app, tornado.wsgi.WSGIApplication):\n app = WSGIContainer(app)\n\n # Monkey-patching HTTPConnection.finish to count the\n # number of requests being handled by Tornado. 
This\n # will help gunicorn shutdown the worker if max_requests\n # is exceeded.\n httpserver = sys.modules[\"tornado.httpserver\"]\n if hasattr(httpserver, 'HTTPConnection'):\n old_connection_finish = httpserver.HTTPConnection.finish\n\n def finish(other):\n self.handle_request()\n old_connection_finish(other)\n httpserver.HTTPConnection.finish = finish\n sys.modules[\"tornado.httpserver\"] = httpserver\n\n server_class = tornado.httpserver.HTTPServer\n else:\n\n class _HTTPServer(tornado.httpserver.HTTPServer):\n\n def on_close(instance, server_conn):\n self.handle_request()\n super(_HTTPServer, instance).on_close(server_conn)\n\n server_class = _HTTPServer\n\n if self.cfg.is_ssl:\n server = server_class(app, io_loop=self.ioloop,\n ssl_options=self.cfg.ssl_options)\n else:\n server = server_class(app, io_loop=self.ioloop)\n\n self.server = server\n\n for s in self.sockets:\n s.setblocking(0)\n if hasattr(server, \"add_socket\"): # tornado > 2.0\n server.add_socket(s)\n elif hasattr(server, \"_sockets\"): # tornado 2.0\n server._sockets[s.fileno()] = s\n\n server.no_keep_alive = self.cfg.keepalive <= 0\n server.start(num_processes=1)\n\n self.ioloop.start()\n\n def stop(self):\n if hasattr(self, 'server'):\n try:\n self.server.stop()\n except Exception:\n pass\n PeriodicCallback(self.stop_ioloop, 1000, io_loop=self.ioloop).start()\n\n def stop_ioloop(self):\n if not self.ioloop._callbacks and len(self.ioloop._timeouts) <= 1:\n self.ioloop.stop()\n", "path": "gunicorn/workers/gtornado.py"}]} | 1,455 | 373 |
gh_patches_debug_4286 | rasdani/github-patches | git_diff | mozilla__pontoon-2379 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Redirected to a non-existing page while trying to translate to en-US
## Description
When I try to translate an application to `en-US`, I am redirected (`301`) to a non-existing page:

→ `/en-US/my-project/` is redirected to `/my-project/`
## Steps to reproduce
* Add the `en-US` locale to a project
* Browse project's language
* Click on English (en-US)
* → 404
## More...
This seems to be caused by some legacy code in `pontoon/urls.py:26`:
```python
urlpatterns = [
# Legacy: Locale redirect for compatibility with i18n ready URL scheme
path("en-US<path:url>", RedirectView.as_view(url="%(url)s", permanent=True)),
```
Removing this line fixes the problem and does not seem to cause additional issues (I am still testing on a production instance).
* This issue may be related to → #2192
* Another issue to look at about `en-US` locale → #2260
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pontoon/urls.py`
Content:
```
1 from django.urls import include, path, register_converter
2 from django.urls.converters import StringConverter
3 from django.contrib import admin
4 from django.contrib.auth import logout
5 from django.views.generic import RedirectView, TemplateView
6
7 from pontoon.teams.views import team
8
9
10 class LocaleConverter(StringConverter):
11 regex = r"[A-Za-z0-9\-\@\.]+"
12
13
14 register_converter(LocaleConverter, "locale")
15
16 pontoon_js_view = TemplateView.as_view(
17 template_name="js/pontoon.js", content_type="text/javascript"
18 )
19
20 permission_denied_view = TemplateView.as_view(template_name="403.html")
21 page_not_found_view = TemplateView.as_view(template_name="404.html")
22 server_error_view = TemplateView.as_view(template_name="500.html")
23
24 urlpatterns = [
25 # Legacy: Locale redirect for compatibility with i18n ready URL scheme
26 path("en-US<path:url>", RedirectView.as_view(url="%(url)s", permanent=True)),
27 # Redirect legacy Aurora projects
28 path(
29 "projects/firefox-aurora/<path:url>",
30 RedirectView.as_view(url="/projects/firefox/%(url)s", permanent=True),
31 ),
32 path(
33 "projects/firefox-for-android-aurora/<path:url>",
34 RedirectView.as_view(
35 url="/projects/firefox-for-android/%(url)s", permanent=True
36 ),
37 ),
38 path(
39 "projects/thunderbird-aurora/<path:url>",
40 RedirectView.as_view(url="/projects/thunderbird/%(url)s", permanent=True),
41 ),
42 path(
43 "projects/lightning-aurora/<path:url>",
44 RedirectView.as_view(url="/projects/lightning/%(url)s", permanent=True),
45 ),
46 path(
47 "projects/seamonkey-aurora/<path:url>",
48 RedirectView.as_view(url="/projects/seamonkey/%(url)s", permanent=True),
49 ),
50 path(
51 "<locale:locale>/firefox-aurora/<path:url>",
52 RedirectView.as_view(url="/%(locale)s/firefox/%(url)s", permanent=True),
53 ),
54 path(
55 "<locale:locale>/firefox-for-android-aurora/<path:url>",
56 RedirectView.as_view(
57 url="/%(locale)s/firefox-for-android/%(url)s", permanent=True
58 ),
59 ),
60 path(
61 "<locale:locale>/thunderbird-aurora/<path:url>",
62 RedirectView.as_view(url="/%(locale)s/thunderbird/%(url)s", permanent=True),
63 ),
64 path(
65 "<locale:locale>/lightning-aurora/<path:url>",
66 RedirectView.as_view(url="/%(locale)s/lightning/%(url)s", permanent=True),
67 ),
68 path(
69 "<locale:locale>/seamonkey-aurora/<path:url>",
70 RedirectView.as_view(url="/%(locale)s/seamonkey/%(url)s", permanent=True),
71 ),
72 # Accounts
73 path("accounts/", include("pontoon.allauth_urls")),
74 # Admin
75 path("admin/", include("pontoon.administration.urls")),
76 # Django admin: Disable the login form
77 path("a/login/", permission_denied_view),
78 # Django admin
79 path("a/", admin.site.urls),
80 # Logout
81 path("signout/", logout, {"next_page": "/"}, name="signout"),
82 # Error pages
83 path("403/", permission_denied_view),
84 path("404/", page_not_found_view),
85 path("500/", server_error_view),
86 # Robots.txt
87 path(
88 "robots.txt",
89 TemplateView.as_view(template_name="robots.txt", content_type="text/plain"),
90 ),
91 # contribute.json
92 path(
93 "contribute.json",
94 TemplateView.as_view(
95 template_name="contribute.json", content_type="text/plain"
96 ),
97 ),
98 # Favicon
99 path(
100 "favicon.ico",
101 RedirectView.as_view(url="/static/img/favicon.ico", permanent=True),
102 ),
103 # Include script
104 path("pontoon.js", pontoon_js_view),
105 path("static/js/pontoon.js", pontoon_js_view),
106 # Include URL configurations from installed apps
107 path("terminology/", include("pontoon.terminology.urls")),
108 path("translations/", include("pontoon.translations.urls")),
109 path("", include("pontoon.teams.urls")),
110 path("", include("pontoon.tour.urls")),
111 path("", include("pontoon.tags.urls")),
112 path("", include("pontoon.sync.urls")),
113 path("", include("pontoon.projects.urls")),
114 path("", include("pontoon.machinery.urls")),
115 path("", include("pontoon.contributors.urls")),
116 path("", include("pontoon.localizations.urls")),
117 path("", include("pontoon.base.urls")),
118 path("", include("pontoon.translate.urls")),
119 path("", include("pontoon.batch.urls")),
120 path("", include("pontoon.api.urls")),
121 path("", include("pontoon.homepage.urls")),
122 path("", include("pontoon.in_context.urls")),
123 path("", include("pontoon.uxactionlog.urls")),
124 # Team page: Must be at the end
125 path("<locale:locale>/", team, name="pontoon.teams.team"),
126 ]
127
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pontoon/urls.py b/pontoon/urls.py
--- a/pontoon/urls.py
+++ b/pontoon/urls.py
@@ -22,8 +22,6 @@
server_error_view = TemplateView.as_view(template_name="500.html")
urlpatterns = [
- # Legacy: Locale redirect for compatibility with i18n ready URL scheme
- path("en-US<path:url>", RedirectView.as_view(url="%(url)s", permanent=True)),
# Redirect legacy Aurora projects
path(
"projects/firefox-aurora/<path:url>",
| {"golden_diff": "diff --git a/pontoon/urls.py b/pontoon/urls.py\n--- a/pontoon/urls.py\n+++ b/pontoon/urls.py\n@@ -22,8 +22,6 @@\n server_error_view = TemplateView.as_view(template_name=\"500.html\")\n \n urlpatterns = [\n- # Legacy: Locale redirect for compatibility with i18n ready URL scheme\n- path(\"en-US<path:url>\", RedirectView.as_view(url=\"%(url)s\", permanent=True)),\n # Redirect legacy Aurora projects\n path(\n \"projects/firefox-aurora/<path:url>\",\n", "issue": "Redirected to a non-existing page while tring to translate to en-US\n## Description\r\n\r\nWhen I try to translate an application to `en-US`, I am redirected (`301`) to a non-existing page:\r\n\r\n\r\n\r\n\u2192 `/en-US/my-project/` is redirected to `/my-project/`\r\n\r\n## Steps to reproduce\r\n\r\n* Add the `en-US` locale to a project\r\n* Browse project's language\r\n* Click on English (en-US)\r\n* \u2192 404\r\n\r\n## More...\r\n\r\nThis seems to be caused by some legacy code in `pontoon/urls.py:26`:\r\n\r\n```python\r\nurlpatterns = [\r\n # Legacy: Locale redirect for compatibility with i18n ready URL scheme\r\n path(\"en-US<path:url>\", RedirectView.as_view(url=\"%(url)s\", permanent=True)),\r\n```\r\n\r\nRemoving this line fixes the problem and does not seem to cause additional issues (I am still testing on a production instance).\r\n\r\n* This issue may be related to \u2192 #2192 \r\n* Another issue to look at about `en-US` locale \u2192 #2260\r\n\n", "before_files": [{"content": "from django.urls import include, path, register_converter\nfrom django.urls.converters import StringConverter\nfrom django.contrib import admin\nfrom django.contrib.auth import logout\nfrom django.views.generic import RedirectView, TemplateView\n\nfrom pontoon.teams.views import team\n\n\nclass LocaleConverter(StringConverter):\n regex = r\"[A-Za-z0-9\\-\\@\\.]+\"\n\n\nregister_converter(LocaleConverter, \"locale\")\n\npontoon_js_view = TemplateView.as_view(\n template_name=\"js/pontoon.js\", content_type=\"text/javascript\"\n)\n\npermission_denied_view = TemplateView.as_view(template_name=\"403.html\")\npage_not_found_view = TemplateView.as_view(template_name=\"404.html\")\nserver_error_view = TemplateView.as_view(template_name=\"500.html\")\n\nurlpatterns = [\n # Legacy: Locale redirect for compatibility with i18n ready URL scheme\n path(\"en-US<path:url>\", RedirectView.as_view(url=\"%(url)s\", permanent=True)),\n # Redirect legacy Aurora projects\n path(\n \"projects/firefox-aurora/<path:url>\",\n RedirectView.as_view(url=\"/projects/firefox/%(url)s\", permanent=True),\n ),\n path(\n \"projects/firefox-for-android-aurora/<path:url>\",\n RedirectView.as_view(\n url=\"/projects/firefox-for-android/%(url)s\", permanent=True\n ),\n ),\n path(\n \"projects/thunderbird-aurora/<path:url>\",\n RedirectView.as_view(url=\"/projects/thunderbird/%(url)s\", permanent=True),\n ),\n path(\n \"projects/lightning-aurora/<path:url>\",\n RedirectView.as_view(url=\"/projects/lightning/%(url)s\", permanent=True),\n ),\n path(\n \"projects/seamonkey-aurora/<path:url>\",\n RedirectView.as_view(url=\"/projects/seamonkey/%(url)s\", permanent=True),\n ),\n path(\n \"<locale:locale>/firefox-aurora/<path:url>\",\n RedirectView.as_view(url=\"/%(locale)s/firefox/%(url)s\", permanent=True),\n ),\n path(\n \"<locale:locale>/firefox-for-android-aurora/<path:url>\",\n RedirectView.as_view(\n url=\"/%(locale)s/firefox-for-android/%(url)s\", permanent=True\n ),\n ),\n path(\n \"<locale:locale>/thunderbird-aurora/<path:url>\",\n 
RedirectView.as_view(url=\"/%(locale)s/thunderbird/%(url)s\", permanent=True),\n ),\n path(\n \"<locale:locale>/lightning-aurora/<path:url>\",\n RedirectView.as_view(url=\"/%(locale)s/lightning/%(url)s\", permanent=True),\n ),\n path(\n \"<locale:locale>/seamonkey-aurora/<path:url>\",\n RedirectView.as_view(url=\"/%(locale)s/seamonkey/%(url)s\", permanent=True),\n ),\n # Accounts\n path(\"accounts/\", include(\"pontoon.allauth_urls\")),\n # Admin\n path(\"admin/\", include(\"pontoon.administration.urls\")),\n # Django admin: Disable the login form\n path(\"a/login/\", permission_denied_view),\n # Django admin\n path(\"a/\", admin.site.urls),\n # Logout\n path(\"signout/\", logout, {\"next_page\": \"/\"}, name=\"signout\"),\n # Error pages\n path(\"403/\", permission_denied_view),\n path(\"404/\", page_not_found_view),\n path(\"500/\", server_error_view),\n # Robots.txt\n path(\n \"robots.txt\",\n TemplateView.as_view(template_name=\"robots.txt\", content_type=\"text/plain\"),\n ),\n # contribute.json\n path(\n \"contribute.json\",\n TemplateView.as_view(\n template_name=\"contribute.json\", content_type=\"text/plain\"\n ),\n ),\n # Favicon\n path(\n \"favicon.ico\",\n RedirectView.as_view(url=\"/static/img/favicon.ico\", permanent=True),\n ),\n # Include script\n path(\"pontoon.js\", pontoon_js_view),\n path(\"static/js/pontoon.js\", pontoon_js_view),\n # Include URL configurations from installed apps\n path(\"terminology/\", include(\"pontoon.terminology.urls\")),\n path(\"translations/\", include(\"pontoon.translations.urls\")),\n path(\"\", include(\"pontoon.teams.urls\")),\n path(\"\", include(\"pontoon.tour.urls\")),\n path(\"\", include(\"pontoon.tags.urls\")),\n path(\"\", include(\"pontoon.sync.urls\")),\n path(\"\", include(\"pontoon.projects.urls\")),\n path(\"\", include(\"pontoon.machinery.urls\")),\n path(\"\", include(\"pontoon.contributors.urls\")),\n path(\"\", include(\"pontoon.localizations.urls\")),\n path(\"\", include(\"pontoon.base.urls\")),\n path(\"\", include(\"pontoon.translate.urls\")),\n path(\"\", include(\"pontoon.batch.urls\")),\n path(\"\", include(\"pontoon.api.urls\")),\n path(\"\", include(\"pontoon.homepage.urls\")),\n path(\"\", include(\"pontoon.in_context.urls\")),\n path(\"\", include(\"pontoon.uxactionlog.urls\")),\n # Team page: Must be at the end\n path(\"<locale:locale>/\", team, name=\"pontoon.teams.team\"),\n]\n", "path": "pontoon/urls.py"}], "after_files": [{"content": "from django.urls import include, path, register_converter\nfrom django.urls.converters import StringConverter\nfrom django.contrib import admin\nfrom django.contrib.auth import logout\nfrom django.views.generic import RedirectView, TemplateView\n\nfrom pontoon.teams.views import team\n\n\nclass LocaleConverter(StringConverter):\n regex = r\"[A-Za-z0-9\\-\\@\\.]+\"\n\n\nregister_converter(LocaleConverter, \"locale\")\n\npontoon_js_view = TemplateView.as_view(\n template_name=\"js/pontoon.js\", content_type=\"text/javascript\"\n)\n\npermission_denied_view = TemplateView.as_view(template_name=\"403.html\")\npage_not_found_view = TemplateView.as_view(template_name=\"404.html\")\nserver_error_view = TemplateView.as_view(template_name=\"500.html\")\n\nurlpatterns = [\n # Redirect legacy Aurora projects\n path(\n \"projects/firefox-aurora/<path:url>\",\n RedirectView.as_view(url=\"/projects/firefox/%(url)s\", permanent=True),\n ),\n path(\n \"projects/firefox-for-android-aurora/<path:url>\",\n RedirectView.as_view(\n url=\"/projects/firefox-for-android/%(url)s\", permanent=True\n ),\n 
),\n path(\n \"projects/thunderbird-aurora/<path:url>\",\n RedirectView.as_view(url=\"/projects/thunderbird/%(url)s\", permanent=True),\n ),\n path(\n \"projects/lightning-aurora/<path:url>\",\n RedirectView.as_view(url=\"/projects/lightning/%(url)s\", permanent=True),\n ),\n path(\n \"projects/seamonkey-aurora/<path:url>\",\n RedirectView.as_view(url=\"/projects/seamonkey/%(url)s\", permanent=True),\n ),\n path(\n \"<locale:locale>/firefox-aurora/<path:url>\",\n RedirectView.as_view(url=\"/%(locale)s/firefox/%(url)s\", permanent=True),\n ),\n path(\n \"<locale:locale>/firefox-for-android-aurora/<path:url>\",\n RedirectView.as_view(\n url=\"/%(locale)s/firefox-for-android/%(url)s\", permanent=True\n ),\n ),\n path(\n \"<locale:locale>/thunderbird-aurora/<path:url>\",\n RedirectView.as_view(url=\"/%(locale)s/thunderbird/%(url)s\", permanent=True),\n ),\n path(\n \"<locale:locale>/lightning-aurora/<path:url>\",\n RedirectView.as_view(url=\"/%(locale)s/lightning/%(url)s\", permanent=True),\n ),\n path(\n \"<locale:locale>/seamonkey-aurora/<path:url>\",\n RedirectView.as_view(url=\"/%(locale)s/seamonkey/%(url)s\", permanent=True),\n ),\n # Accounts\n path(\"accounts/\", include(\"pontoon.allauth_urls\")),\n # Admin\n path(\"admin/\", include(\"pontoon.administration.urls\")),\n # Django admin: Disable the login form\n path(\"a/login/\", permission_denied_view),\n # Django admin\n path(\"a/\", admin.site.urls),\n # Logout\n path(\"signout/\", logout, {\"next_page\": \"/\"}, name=\"signout\"),\n # Error pages\n path(\"403/\", permission_denied_view),\n path(\"404/\", page_not_found_view),\n path(\"500/\", server_error_view),\n # Robots.txt\n path(\n \"robots.txt\",\n TemplateView.as_view(template_name=\"robots.txt\", content_type=\"text/plain\"),\n ),\n # contribute.json\n path(\n \"contribute.json\",\n TemplateView.as_view(\n template_name=\"contribute.json\", content_type=\"text/plain\"\n ),\n ),\n # Favicon\n path(\n \"favicon.ico\",\n RedirectView.as_view(url=\"/static/img/favicon.ico\", permanent=True),\n ),\n # Include script\n path(\"pontoon.js\", pontoon_js_view),\n path(\"static/js/pontoon.js\", pontoon_js_view),\n # Include URL configurations from installed apps\n path(\"terminology/\", include(\"pontoon.terminology.urls\")),\n path(\"translations/\", include(\"pontoon.translations.urls\")),\n path(\"\", include(\"pontoon.teams.urls\")),\n path(\"\", include(\"pontoon.tour.urls\")),\n path(\"\", include(\"pontoon.tags.urls\")),\n path(\"\", include(\"pontoon.sync.urls\")),\n path(\"\", include(\"pontoon.projects.urls\")),\n path(\"\", include(\"pontoon.machinery.urls\")),\n path(\"\", include(\"pontoon.contributors.urls\")),\n path(\"\", include(\"pontoon.localizations.urls\")),\n path(\"\", include(\"pontoon.base.urls\")),\n path(\"\", include(\"pontoon.translate.urls\")),\n path(\"\", include(\"pontoon.batch.urls\")),\n path(\"\", include(\"pontoon.api.urls\")),\n path(\"\", include(\"pontoon.homepage.urls\")),\n path(\"\", include(\"pontoon.in_context.urls\")),\n path(\"\", include(\"pontoon.uxactionlog.urls\")),\n # Team page: Must be at the end\n path(\"<locale:locale>/\", team, name=\"pontoon.teams.team\"),\n]\n", "path": "pontoon/urls.py"}]} | 1,952 | 130 |
gh_patches_debug_1582 | rasdani/github-patches | git_diff | ansible__ansible-modules-core-4645 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix #4412: os_security_group_rule support numbered protocols
##### ISSUE TYPE
- Bugfix Pull Request
##### COMPONENT NAME
cloud/openstack/os_security_group_rule.py
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file =
configured module search path = Default w/o overrides
```
##### SUMMARY
The bug report correctly points out the lack of VRRP support by way of a numbered protocol. This commit adds that functionality, including example usage in the documentation.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cloud/openstack/os_security_group_rule.py`
Content:
```
1 #!/usr/bin/python
2
3 # Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
4 # Copyright (c) 2013, Benno Joy <[email protected]>
5 #
6 # This module is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # This software is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with this software. If not, see <http://www.gnu.org/licenses/>.
18
19 try:
20 import shade
21 HAS_SHADE = True
22 except ImportError:
23 HAS_SHADE = False
24
25
26 DOCUMENTATION = '''
27 ---
28 module: os_security_group_rule
29 short_description: Add/Delete rule from an existing security group
30 extends_documentation_fragment: openstack
31 version_added: "2.0"
32 description:
33 - Add or Remove rule from an existing security group
34 options:
35 security_group:
36 description:
37 - Name or ID of the security group
38 required: true
39 protocol:
40 description:
41 - IP protocol
42 choices: ['tcp', 'udp', 'icmp', 112, None]
43 default: None
44 port_range_min:
45 description:
46 - Starting port
47 required: false
48 default: None
49 port_range_max:
50 description:
51 - Ending port
52 required: false
53 default: None
54 remote_ip_prefix:
55 description:
56 - Source IP address(es) in CIDR notation (exclusive with remote_group)
57 required: false
58 remote_group:
59 description:
60 - Name or ID of the Security group to link (exclusive with
61 remote_ip_prefix)
62 required: false
63 ethertype:
64 description:
65 - Must be IPv4 or IPv6, and addresses represented in CIDR must
66 match the ingress or egress rules. Not all providers support IPv6.
67 choices: ['IPv4', 'IPv6']
68 default: IPv4
69 direction:
70 description:
71 - The direction in which the security group rule is applied. Not
72 all providers support egress.
73 choices: ['egress', 'ingress']
74 default: ingress
75 state:
76 description:
77 - Should the resource be present or absent.
78 choices: [present, absent]
79 default: present
80 requirements: ["shade"]
81 '''
82
83 EXAMPLES = '''
84 # Create a security group rule
85 - os_security_group_rule:
86 cloud: mordred
87 security_group: foo
88 protocol: tcp
89 port_range_min: 80
90 port_range_max: 80
91 remote_ip_prefix: 0.0.0.0/0
92
93 # Create a security group rule for ping
94 - os_security_group_rule:
95 cloud: mordred
96 security_group: foo
97 protocol: icmp
98 remote_ip_prefix: 0.0.0.0/0
99
100 # Another way to create the ping rule
101 - os_security_group_rule:
102 cloud: mordred
103 security_group: foo
104 protocol: icmp
105 port_range_min: -1
106 port_range_max: -1
107 remote_ip_prefix: 0.0.0.0/0
108
109 # Create a TCP rule covering all ports
110 - os_security_group_rule:
111 cloud: mordred
112 security_group: foo
113 protocol: tcp
114 port_range_min: 1
115 port_range_max: 65535
116 remote_ip_prefix: 0.0.0.0/0
117
118 # Another way to create the TCP rule above (defaults to all ports)
119 - os_security_group_rule:
120 cloud: mordred
121 security_group: foo
122 protocol: tcp
123 remote_ip_prefix: 0.0.0.0/0
124
125 # Create a rule for VRRP with numbered protocol 112
126 - os_security_group_rule:
127 security_group: loadbalancer_sg
128 protocol: 112
129 remote_group: loadbalancer-node_sg
130 '''
131
132 RETURN = '''
133 id:
134 description: Unique rule UUID.
135 type: string
136 direction:
137 description: The direction in which the security group rule is applied.
138 type: string
139 sample: 'egress'
140 ethertype:
141 description: One of IPv4 or IPv6.
142 type: string
143 sample: 'IPv4'
144 port_range_min:
145 description: The minimum port number in the range that is matched by
146 the security group rule.
147 type: int
148 sample: 8000
149 port_range_max:
150 description: The maximum port number in the range that is matched by
151 the security group rule.
152 type: int
153 sample: 8000
154 protocol:
155 description: The protocol that is matched by the security group rule.
156 type: string
157 sample: 'tcp'
158 remote_ip_prefix:
159 description: The remote IP prefix to be associated with this security group rule.
160 type: string
161 sample: '0.0.0.0/0'
162 security_group_id:
163 description: The security group ID to associate with this security group rule.
164 type: string
165 '''
166
167
168 def _ports_match(protocol, module_min, module_max, rule_min, rule_max):
169 """
170 Capture the complex port matching logic.
171
172 The port values coming in for the module might be -1 (for ICMP),
173 which will work only for Nova, but this is handled by shade. Likewise,
174 they might be None, which works for Neutron, but not Nova. This too is
175 handled by shade. Since shade will consistently return these port
176 values as None, we need to convert any -1 values input to the module
177 to None here for comparison.
178
179 For TCP and UDP protocols, None values for both min and max are
180 represented as the range 1-65535 for Nova, but remain None for
181 Neutron. Shade returns the full range when Nova is the backend (since
182 that is how Nova stores them), and None values for Neutron. If None
183 values are input to the module for both values, then we need to adjust
184 for comparison.
185 """
186
187 # Check if the user is supplying -1 for ICMP.
188 if protocol == 'icmp':
189 if module_min and int(module_min) == -1:
190 module_min = None
191 if module_max and int(module_max) == -1:
192 module_max = None
193
194 # Check if user is supplying None values for full TCP/UDP port range.
195 if protocol in ['tcp', 'udp'] and module_min is None and module_max is None:
196 if (rule_min and int(rule_min) == 1
197 and rule_max and int(rule_max) == 65535):
198 # (None, None) == (1, 65535)
199 return True
200
201 # Sanity check to make sure we don't have type comparison issues.
202 if module_min:
203 module_min = int(module_min)
204 if module_max:
205 module_max = int(module_max)
206 if rule_min:
207 rule_min = int(rule_min)
208 if rule_max:
209 rule_max = int(rule_max)
210
211 return module_min == rule_min and module_max == rule_max
212
213
214 def _find_matching_rule(module, secgroup, remotegroup):
215 """
216 Find a rule in the group that matches the module parameters.
217 :returns: The matching rule dict, or None if no matches.
218 """
219 protocol = module.params['protocol']
220 remote_ip_prefix = module.params['remote_ip_prefix']
221 ethertype = module.params['ethertype']
222 direction = module.params['direction']
223 remote_group_id = remotegroup['id']
224
225 for rule in secgroup['security_group_rules']:
226 if (protocol == rule['protocol']
227 and remote_ip_prefix == rule['remote_ip_prefix']
228 and ethertype == rule['ethertype']
229 and direction == rule['direction']
230 and remote_group_id == rule['remote_group_id']
231 and _ports_match(protocol,
232 module.params['port_range_min'],
233 module.params['port_range_max'],
234 rule['port_range_min'],
235 rule['port_range_max'])):
236 return rule
237 return None
238
239
240 def _system_state_change(module, secgroup, remotegroup):
241 state = module.params['state']
242 if secgroup:
243 rule_exists = _find_matching_rule(module, secgroup, remotegroup)
244 else:
245 return False
246
247 if state == 'present' and not rule_exists:
248 return True
249 if state == 'absent' and rule_exists:
250 return True
251 return False
252
253
254 def main():
255 argument_spec = openstack_full_argument_spec(
256 security_group = dict(required=True),
257 # NOTE(Shrews): None is an acceptable protocol value for
258 # Neutron, but Nova will balk at this.
259 protocol = dict(default=None,
260 choices=[None, 'tcp', 'udp', 'icmp', 112]),
261 port_range_min = dict(required=False, type='int'),
262 port_range_max = dict(required=False, type='int'),
263 remote_ip_prefix = dict(required=False, default=None),
264 remote_group = dict(required=False, default=None),
265 ethertype = dict(default='IPv4',
266 choices=['IPv4', 'IPv6']),
267 direction = dict(default='ingress',
268 choices=['egress', 'ingress']),
269 state = dict(default='present',
270 choices=['absent', 'present']),
271 )
272
273 module_kwargs = openstack_module_kwargs(
274 mutually_exclusive=[
275 ['remote_ip_prefix', 'remote_group'],
276 ]
277 )
278
279 module = AnsibleModule(argument_spec,
280 supports_check_mode=True,
281 **module_kwargs)
282
283 if not HAS_SHADE:
284 module.fail_json(msg='shade is required for this module')
285
286 state = module.params['state']
287 security_group = module.params['security_group']
288 remote_group = module.params['remote_group']
289 changed = False
290
291 try:
292 cloud = shade.openstack_cloud(**module.params)
293 secgroup = cloud.get_security_group(security_group)
294
295 if remote_group:
296 remotegroup = cloud.get_security_group(remote_group)
297 else:
298 remotegroup = { 'id' : None }
299
300 if module.check_mode:
301 module.exit_json(changed=_system_state_change(module, secgroup, remotegroup))
302
303 if state == 'present':
304 if not secgroup:
305 module.fail_json(msg='Could not find security group %s' %
306 security_group)
307
308 rule = _find_matching_rule(module, secgroup, remotegroup)
309 if not rule:
310 rule = cloud.create_security_group_rule(
311 secgroup['id'],
312 port_range_min=module.params['port_range_min'],
313 port_range_max=module.params['port_range_max'],
314 protocol=module.params['protocol'],
315 remote_ip_prefix=module.params['remote_ip_prefix'],
316 remote_group_id=remotegroup['id'],
317 direction=module.params['direction'],
318 ethertype=module.params['ethertype']
319 )
320 changed = True
321 module.exit_json(changed=changed, rule=rule, id=rule['id'])
322
323 if state == 'absent' and secgroup:
324 rule = _find_matching_rule(module, secgroup, remotegroup)
325 if rule:
326 cloud.delete_security_group_rule(rule['id'])
327 changed = True
328
329 module.exit_json(changed=changed)
330
331 except shade.OpenStackCloudException as e:
332 module.fail_json(msg=str(e))
333
334 # this is magic, see lib/ansible/module_common.py
335 from ansible.module_utils.basic import *
336 from ansible.module_utils.openstack import *
337
338 if __name__ == '__main__':
339 main()
340
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cloud/openstack/os_security_group_rule.py b/cloud/openstack/os_security_group_rule.py
--- a/cloud/openstack/os_security_group_rule.py
+++ b/cloud/openstack/os_security_group_rule.py
@@ -38,7 +38,7 @@
required: true
protocol:
description:
- - IP protocol
+ - IP protocols TCP UDP ICMP 112 (VRRP)
choices: ['tcp', 'udp', 'icmp', 112, None]
default: None
port_range_min:
| {"golden_diff": "diff --git a/cloud/openstack/os_security_group_rule.py b/cloud/openstack/os_security_group_rule.py\n--- a/cloud/openstack/os_security_group_rule.py\n+++ b/cloud/openstack/os_security_group_rule.py\n@@ -38,7 +38,7 @@\n required: true\n protocol:\n description:\n- - IP protocol\n+ - IP protocols TCP UDP ICMP 112 (VRRP)\n choices: ['tcp', 'udp', 'icmp', 112, None]\n default: None\n port_range_min:\n", "issue": "Fix #4412: os_security_group_rule support numbered protocols\n##### ISSUE TYPE\n- Bugfix Pull Request\n##### COMPONENT NAME\n\ncloud/openstack/os_security_group_rule.py\n##### ANSIBLE VERSION\n\n```\nansible 2.1.1.0\n config file = \n configured module search path = Default w/o overrides\n```\n##### SUMMARY\n\nBugfix correctly points out lack of VRRP support by way of numbered protocol. This commit adds this functionality, including example usage in the documentation.\n\n", "before_files": [{"content": "#!/usr/bin/python\n\n# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.\n# Copyright (c) 2013, Benno Joy <[email protected]>\n#\n# This module is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This software is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this software. If not, see <http://www.gnu.org/licenses/>.\n\ntry:\n import shade\n HAS_SHADE = True\nexcept ImportError:\n HAS_SHADE = False\n\n\nDOCUMENTATION = '''\n---\nmodule: os_security_group_rule\nshort_description: Add/Delete rule from an existing security group\nextends_documentation_fragment: openstack\nversion_added: \"2.0\"\ndescription:\n - Add or Remove rule from an existing security group\noptions:\n security_group:\n description:\n - Name or ID of the security group\n required: true\n protocol:\n description:\n - IP protocol\n choices: ['tcp', 'udp', 'icmp', 112, None]\n default: None\n port_range_min:\n description:\n - Starting port\n required: false\n default: None\n port_range_max:\n description:\n - Ending port\n required: false\n default: None\n remote_ip_prefix:\n description:\n - Source IP address(es) in CIDR notation (exclusive with remote_group)\n required: false\n remote_group:\n description:\n - Name or ID of the Security group to link (exclusive with\n remote_ip_prefix)\n required: false\n ethertype:\n description:\n - Must be IPv4 or IPv6, and addresses represented in CIDR must\n match the ingress or egress rules. Not all providers support IPv6.\n choices: ['IPv4', 'IPv6']\n default: IPv4\n direction:\n description:\n - The direction in which the security group rule is applied. 
Not\n all providers support egress.\n choices: ['egress', 'ingress']\n default: ingress\n state:\n description:\n - Should the resource be present or absent.\n choices: [present, absent]\n default: present\nrequirements: [\"shade\"]\n'''\n\nEXAMPLES = '''\n# Create a security group rule\n- os_security_group_rule:\n cloud: mordred\n security_group: foo\n protocol: tcp\n port_range_min: 80\n port_range_max: 80\n remote_ip_prefix: 0.0.0.0/0\n\n# Create a security group rule for ping\n- os_security_group_rule:\n cloud: mordred\n security_group: foo\n protocol: icmp\n remote_ip_prefix: 0.0.0.0/0\n\n# Another way to create the ping rule\n- os_security_group_rule:\n cloud: mordred\n security_group: foo\n protocol: icmp\n port_range_min: -1\n port_range_max: -1\n remote_ip_prefix: 0.0.0.0/0\n\n# Create a TCP rule covering all ports\n- os_security_group_rule:\n cloud: mordred\n security_group: foo\n protocol: tcp\n port_range_min: 1\n port_range_max: 65535\n remote_ip_prefix: 0.0.0.0/0\n\n# Another way to create the TCP rule above (defaults to all ports)\n- os_security_group_rule:\n cloud: mordred\n security_group: foo\n protocol: tcp\n remote_ip_prefix: 0.0.0.0/0\n\n# Create a rule for VRRP with numbered protocol 112\n- os_security_group_rule:\n security_group: loadbalancer_sg\n protocol: 112\n remote_group: loadbalancer-node_sg\n'''\n\nRETURN = '''\nid:\n description: Unique rule UUID.\n type: string\ndirection:\n description: The direction in which the security group rule is applied.\n type: string\n sample: 'egress'\nethertype:\n description: One of IPv4 or IPv6.\n type: string\n sample: 'IPv4'\nport_range_min:\n description: The minimum port number in the range that is matched by\n the security group rule.\n type: int\n sample: 8000\nport_range_max:\n description: The maximum port number in the range that is matched by\n the security group rule.\n type: int\n sample: 8000\nprotocol:\n description: The protocol that is matched by the security group rule.\n type: string\n sample: 'tcp'\nremote_ip_prefix:\n description: The remote IP prefix to be associated with this security group rule.\n type: string\n sample: '0.0.0.0/0'\nsecurity_group_id:\n description: The security group ID to associate with this security group rule.\n type: string\n'''\n\n\ndef _ports_match(protocol, module_min, module_max, rule_min, rule_max):\n \"\"\"\n Capture the complex port matching logic.\n\n The port values coming in for the module might be -1 (for ICMP),\n which will work only for Nova, but this is handled by shade. Likewise,\n they might be None, which works for Neutron, but not Nova. This too is\n handled by shade. Since shade will consistently return these port\n values as None, we need to convert any -1 values input to the module\n to None here for comparison.\n\n For TCP and UDP protocols, None values for both min and max are\n represented as the range 1-65535 for Nova, but remain None for\n Neutron. Shade returns the full range when Nova is the backend (since\n that is how Nova stores them), and None values for Neutron. 
If None\n values are input to the module for both values, then we need to adjust\n for comparison.\n \"\"\"\n\n # Check if the user is supplying -1 for ICMP.\n if protocol == 'icmp':\n if module_min and int(module_min) == -1:\n module_min = None\n if module_max and int(module_max) == -1:\n module_max = None\n\n # Check if user is supplying None values for full TCP/UDP port range.\n if protocol in ['tcp', 'udp'] and module_min is None and module_max is None:\n if (rule_min and int(rule_min) == 1\n and rule_max and int(rule_max) == 65535):\n # (None, None) == (1, 65535)\n return True\n\n # Sanity check to make sure we don't have type comparison issues.\n if module_min:\n module_min = int(module_min)\n if module_max:\n module_max = int(module_max)\n if rule_min:\n rule_min = int(rule_min)\n if rule_max:\n rule_max = int(rule_max)\n\n return module_min == rule_min and module_max == rule_max\n\n\ndef _find_matching_rule(module, secgroup, remotegroup):\n \"\"\"\n Find a rule in the group that matches the module parameters.\n :returns: The matching rule dict, or None if no matches.\n \"\"\"\n protocol = module.params['protocol']\n remote_ip_prefix = module.params['remote_ip_prefix']\n ethertype = module.params['ethertype']\n direction = module.params['direction']\n remote_group_id = remotegroup['id']\n\n for rule in secgroup['security_group_rules']:\n if (protocol == rule['protocol']\n and remote_ip_prefix == rule['remote_ip_prefix']\n and ethertype == rule['ethertype']\n and direction == rule['direction']\n and remote_group_id == rule['remote_group_id']\n and _ports_match(protocol,\n module.params['port_range_min'],\n module.params['port_range_max'],\n rule['port_range_min'],\n rule['port_range_max'])):\n return rule\n return None\n\n\ndef _system_state_change(module, secgroup, remotegroup):\n state = module.params['state']\n if secgroup:\n rule_exists = _find_matching_rule(module, secgroup, remotegroup)\n else:\n return False\n\n if state == 'present' and not rule_exists:\n return True\n if state == 'absent' and rule_exists:\n return True\n return False\n\n\ndef main():\n argument_spec = openstack_full_argument_spec(\n security_group = dict(required=True),\n # NOTE(Shrews): None is an acceptable protocol value for\n # Neutron, but Nova will balk at this.\n protocol = dict(default=None,\n choices=[None, 'tcp', 'udp', 'icmp', 112]),\n port_range_min = dict(required=False, type='int'),\n port_range_max = dict(required=False, type='int'),\n remote_ip_prefix = dict(required=False, default=None),\n remote_group = dict(required=False, default=None),\n ethertype = dict(default='IPv4',\n choices=['IPv4', 'IPv6']),\n direction = dict(default='ingress',\n choices=['egress', 'ingress']),\n state = dict(default='present',\n choices=['absent', 'present']),\n )\n\n module_kwargs = openstack_module_kwargs(\n mutually_exclusive=[\n ['remote_ip_prefix', 'remote_group'],\n ]\n )\n\n module = AnsibleModule(argument_spec,\n supports_check_mode=True,\n **module_kwargs)\n\n if not HAS_SHADE:\n module.fail_json(msg='shade is required for this module')\n\n state = module.params['state']\n security_group = module.params['security_group']\n remote_group = module.params['remote_group']\n changed = False\n\n try:\n cloud = shade.openstack_cloud(**module.params)\n secgroup = cloud.get_security_group(security_group)\n\n if remote_group:\n remotegroup = cloud.get_security_group(remote_group)\n else:\n remotegroup = { 'id' : None }\n\n if module.check_mode:\n module.exit_json(changed=_system_state_change(module, secgroup, 
remotegroup))\n\n if state == 'present':\n if not secgroup:\n module.fail_json(msg='Could not find security group %s' %\n security_group)\n\n rule = _find_matching_rule(module, secgroup, remotegroup)\n if not rule:\n rule = cloud.create_security_group_rule(\n secgroup['id'],\n port_range_min=module.params['port_range_min'],\n port_range_max=module.params['port_range_max'],\n protocol=module.params['protocol'],\n remote_ip_prefix=module.params['remote_ip_prefix'],\n remote_group_id=remotegroup['id'],\n direction=module.params['direction'],\n ethertype=module.params['ethertype']\n )\n changed = True\n module.exit_json(changed=changed, rule=rule, id=rule['id'])\n\n if state == 'absent' and secgroup:\n rule = _find_matching_rule(module, secgroup, remotegroup)\n if rule:\n cloud.delete_security_group_rule(rule['id'])\n changed = True\n\n module.exit_json(changed=changed)\n\n except shade.OpenStackCloudException as e:\n module.fail_json(msg=str(e))\n\n# this is magic, see lib/ansible/module_common.py\nfrom ansible.module_utils.basic import *\nfrom ansible.module_utils.openstack import *\n\nif __name__ == '__main__':\n main()\n", "path": "cloud/openstack/os_security_group_rule.py"}], "after_files": [{"content": "#!/usr/bin/python\n\n# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.\n# Copyright (c) 2013, Benno Joy <[email protected]>\n#\n# This module is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This software is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this software. If not, see <http://www.gnu.org/licenses/>.\n\ntry:\n import shade\n HAS_SHADE = True\nexcept ImportError:\n HAS_SHADE = False\n\n\nDOCUMENTATION = '''\n---\nmodule: os_security_group_rule\nshort_description: Add/Delete rule from an existing security group\nextends_documentation_fragment: openstack\nversion_added: \"2.0\"\ndescription:\n - Add or Remove rule from an existing security group\noptions:\n security_group:\n description:\n - Name or ID of the security group\n required: true\n protocol:\n description:\n - IP protocols TCP UDP ICMP 112 (VRRP)\n choices: ['tcp', 'udp', 'icmp', 112, None]\n default: None\n port_range_min:\n description:\n - Starting port\n required: false\n default: None\n port_range_max:\n description:\n - Ending port\n required: false\n default: None\n remote_ip_prefix:\n description:\n - Source IP address(es) in CIDR notation (exclusive with remote_group)\n required: false\n remote_group:\n description:\n - Name or ID of the Security group to link (exclusive with\n remote_ip_prefix)\n required: false\n ethertype:\n description:\n - Must be IPv4 or IPv6, and addresses represented in CIDR must\n match the ingress or egress rules. Not all providers support IPv6.\n choices: ['IPv4', 'IPv6']\n default: IPv4\n direction:\n description:\n - The direction in which the security group rule is applied. 
Not\n all providers support egress.\n choices: ['egress', 'ingress']\n default: ingress\n state:\n description:\n - Should the resource be present or absent.\n choices: [present, absent]\n default: present\nrequirements: [\"shade\"]\n'''\n\nEXAMPLES = '''\n# Create a security group rule\n- os_security_group_rule:\n cloud: mordred\n security_group: foo\n protocol: tcp\n port_range_min: 80\n port_range_max: 80\n remote_ip_prefix: 0.0.0.0/0\n\n# Create a security group rule for ping\n- os_security_group_rule:\n cloud: mordred\n security_group: foo\n protocol: icmp\n remote_ip_prefix: 0.0.0.0/0\n\n# Another way to create the ping rule\n- os_security_group_rule:\n cloud: mordred\n security_group: foo\n protocol: icmp\n port_range_min: -1\n port_range_max: -1\n remote_ip_prefix: 0.0.0.0/0\n\n# Create a TCP rule covering all ports\n- os_security_group_rule:\n cloud: mordred\n security_group: foo\n protocol: tcp\n port_range_min: 1\n port_range_max: 65535\n remote_ip_prefix: 0.0.0.0/0\n\n# Another way to create the TCP rule above (defaults to all ports)\n- os_security_group_rule:\n cloud: mordred\n security_group: foo\n protocol: tcp\n remote_ip_prefix: 0.0.0.0/0\n\n# Create a rule for VRRP with numbered protocol 112\n- os_security_group_rule:\n security_group: loadbalancer_sg\n protocol: 112\n remote_group: loadbalancer-node_sg\n'''\n\nRETURN = '''\nid:\n description: Unique rule UUID.\n type: string\ndirection:\n description: The direction in which the security group rule is applied.\n type: string\n sample: 'egress'\nethertype:\n description: One of IPv4 or IPv6.\n type: string\n sample: 'IPv4'\nport_range_min:\n description: The minimum port number in the range that is matched by\n the security group rule.\n type: int\n sample: 8000\nport_range_max:\n description: The maximum port number in the range that is matched by\n the security group rule.\n type: int\n sample: 8000\nprotocol:\n description: The protocol that is matched by the security group rule.\n type: string\n sample: 'tcp'\nremote_ip_prefix:\n description: The remote IP prefix to be associated with this security group rule.\n type: string\n sample: '0.0.0.0/0'\nsecurity_group_id:\n description: The security group ID to associate with this security group rule.\n type: string\n'''\n\n\ndef _ports_match(protocol, module_min, module_max, rule_min, rule_max):\n \"\"\"\n Capture the complex port matching logic.\n\n The port values coming in for the module might be -1 (for ICMP),\n which will work only for Nova, but this is handled by shade. Likewise,\n they might be None, which works for Neutron, but not Nova. This too is\n handled by shade. Since shade will consistently return these port\n values as None, we need to convert any -1 values input to the module\n to None here for comparison.\n\n For TCP and UDP protocols, None values for both min and max are\n represented as the range 1-65535 for Nova, but remain None for\n Neutron. Shade returns the full range when Nova is the backend (since\n that is how Nova stores them), and None values for Neutron. 
If None\n values are input to the module for both values, then we need to adjust\n for comparison.\n \"\"\"\n\n # Check if the user is supplying -1 for ICMP.\n if protocol == 'icmp':\n if module_min and int(module_min) == -1:\n module_min = None\n if module_max and int(module_max) == -1:\n module_max = None\n\n # Check if user is supplying None values for full TCP/UDP port range.\n if protocol in ['tcp', 'udp'] and module_min is None and module_max is None:\n if (rule_min and int(rule_min) == 1\n and rule_max and int(rule_max) == 65535):\n # (None, None) == (1, 65535)\n return True\n\n # Sanity check to make sure we don't have type comparison issues.\n if module_min:\n module_min = int(module_min)\n if module_max:\n module_max = int(module_max)\n if rule_min:\n rule_min = int(rule_min)\n if rule_max:\n rule_max = int(rule_max)\n\n return module_min == rule_min and module_max == rule_max\n\n\ndef _find_matching_rule(module, secgroup, remotegroup):\n \"\"\"\n Find a rule in the group that matches the module parameters.\n :returns: The matching rule dict, or None if no matches.\n \"\"\"\n protocol = module.params['protocol']\n remote_ip_prefix = module.params['remote_ip_prefix']\n ethertype = module.params['ethertype']\n direction = module.params['direction']\n remote_group_id = remotegroup['id']\n\n for rule in secgroup['security_group_rules']:\n if (protocol == rule['protocol']\n and remote_ip_prefix == rule['remote_ip_prefix']\n and ethertype == rule['ethertype']\n and direction == rule['direction']\n and remote_group_id == rule['remote_group_id']\n and _ports_match(protocol,\n module.params['port_range_min'],\n module.params['port_range_max'],\n rule['port_range_min'],\n rule['port_range_max'])):\n return rule\n return None\n\n\ndef _system_state_change(module, secgroup, remotegroup):\n state = module.params['state']\n if secgroup:\n rule_exists = _find_matching_rule(module, secgroup, remotegroup)\n else:\n return False\n\n if state == 'present' and not rule_exists:\n return True\n if state == 'absent' and rule_exists:\n return True\n return False\n\n\ndef main():\n argument_spec = openstack_full_argument_spec(\n security_group = dict(required=True),\n # NOTE(Shrews): None is an acceptable protocol value for\n # Neutron, but Nova will balk at this.\n protocol = dict(default=None,\n choices=[None, 'tcp', 'udp', 'icmp', 112]),\n port_range_min = dict(required=False, type='int'),\n port_range_max = dict(required=False, type='int'),\n remote_ip_prefix = dict(required=False, default=None),\n remote_group = dict(required=False, default=None),\n ethertype = dict(default='IPv4',\n choices=['IPv4', 'IPv6']),\n direction = dict(default='ingress',\n choices=['egress', 'ingress']),\n state = dict(default='present',\n choices=['absent', 'present']),\n )\n\n module_kwargs = openstack_module_kwargs(\n mutually_exclusive=[\n ['remote_ip_prefix', 'remote_group'],\n ]\n )\n\n module = AnsibleModule(argument_spec,\n supports_check_mode=True,\n **module_kwargs)\n\n if not HAS_SHADE:\n module.fail_json(msg='shade is required for this module')\n\n state = module.params['state']\n security_group = module.params['security_group']\n remote_group = module.params['remote_group']\n changed = False\n\n try:\n cloud = shade.openstack_cloud(**module.params)\n secgroup = cloud.get_security_group(security_group)\n\n if remote_group:\n remotegroup = cloud.get_security_group(remote_group)\n else:\n remotegroup = { 'id' : None }\n\n if module.check_mode:\n module.exit_json(changed=_system_state_change(module, secgroup, 
remotegroup))\n\n if state == 'present':\n if not secgroup:\n module.fail_json(msg='Could not find security group %s' %\n security_group)\n\n rule = _find_matching_rule(module, secgroup, remotegroup)\n if not rule:\n rule = cloud.create_security_group_rule(\n secgroup['id'],\n port_range_min=module.params['port_range_min'],\n port_range_max=module.params['port_range_max'],\n protocol=module.params['protocol'],\n remote_ip_prefix=module.params['remote_ip_prefix'],\n remote_group_id=remotegroup['id'],\n direction=module.params['direction'],\n ethertype=module.params['ethertype']\n )\n changed = True\n module.exit_json(changed=changed, rule=rule, id=rule['id'])\n\n if state == 'absent' and secgroup:\n rule = _find_matching_rule(module, secgroup, remotegroup)\n if rule:\n cloud.delete_security_group_rule(rule['id'])\n changed = True\n\n module.exit_json(changed=changed)\n\n except shade.OpenStackCloudException as e:\n module.fail_json(msg=str(e))\n\n# this is magic, see lib/ansible/module_common.py\nfrom ansible.module_utils.basic import *\nfrom ansible.module_utils.openstack import *\n\nif __name__ == '__main__':\n main()\n", "path": "cloud/openstack/os_security_group_rule.py"}]} | 3,879 | 119 |
gh_patches_debug_61381 | rasdani/github-patches | git_diff | tensorflow__addons-1213 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Proposal: Upload the nightlies at each commit on the master branch
We already build the wheels anyway. It's just missing the push to PyPI. Yes/No?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """TensorFlow Addons.
16
17 TensorFlow Addons is a repository of contributions that conform to well-
18 established API patterns, but implement new functionality not available
19 in core TensorFlow. TensorFlow natively supports a large number of
20 operators, layers, metrics, losses, and optimizers. However, in a fast
21 moving field like ML, there are many interesting new developments that
22 cannot be integrated into core TensorFlow (because their broad
23 applicability is not yet clear, or it is mostly used by a smaller subset
24 of the community).
25 """
26
27 import os
28 import sys
29
30 from datetime import datetime
31 from setuptools import find_packages
32 from setuptools import setup
33 from setuptools.dist import Distribution
34 from setuptools import Extension
35
36 DOCLINES = __doc__.split("\n")
37
38 TFA_NIGHTLY = "tfa-nightly"
39 TFA_RELEASE = "tensorflow-addons"
40
41 if "--nightly" in sys.argv:
42 project_name = TFA_NIGHTLY
43 nightly_idx = sys.argv.index("--nightly")
44 sys.argv.pop(nightly_idx)
45 else:
46 project_name = TFA_RELEASE
47
48 # Version
49 version = {}
50 base_dir = os.path.dirname(os.path.abspath(__file__))
51 with open(os.path.join(base_dir, "tensorflow_addons", "version.py")) as fp:
52 exec(fp.read(), version)
53
54 if project_name == TFA_NIGHTLY:
55 version["__version__"] += datetime.strftime(datetime.today(), "%Y%m%d")
56
57 with open("requirements.txt") as f:
58 required_pkgs = f.read().splitlines()
59
60 # Manylinux2010 requires a patch for platlib
61 if (
62 sys.platform.startswith("linux")
63 and os.environ.get("TF_ADDONS_NO_BUILD", "0") == "0"
64 ):
65 ext_modules = [Extension("_foo", ["stub.cc"])]
66 else:
67 ext_modules = []
68
69
70 class BinaryDistribution(Distribution):
71 """This class is needed in order to create OS specific wheels."""
72
73 def has_ext_modules(self):
74 return True
75
76
77 setup(
78 name=project_name,
79 version=version["__version__"],
80 description=DOCLINES[0],
81 long_description="\n".join(DOCLINES[2:]),
82 author="Google Inc.",
83 author_email="[email protected]",
84 packages=find_packages(),
85 ext_modules=ext_modules,
86 install_requires=required_pkgs,
87 include_package_data=True,
88 zip_safe=False,
89 distclass=BinaryDistribution,
90 classifiers=[
91 "Development Status :: 4 - Beta",
92 "Intended Audience :: Developers",
93 "Intended Audience :: Education",
94 "Intended Audience :: Science/Research",
95 "License :: OSI Approved :: Apache Software License",
96 "Programming Language :: Python :: 3.5",
97 "Programming Language :: Python :: 3.6",
98 "Programming Language :: Python :: 3.7",
99 "Topic :: Scientific/Engineering :: Mathematics",
100 "Topic :: Software Development :: Libraries :: Python Modules",
101 "Topic :: Software Development :: Libraries",
102 ],
103 license="Apache 2.0",
104 keywords="tensorflow addons machine learning",
105 )
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -52,7 +52,7 @@
exec(fp.read(), version)
if project_name == TFA_NIGHTLY:
- version["__version__"] += datetime.strftime(datetime.today(), "%Y%m%d")
+ version["__version__"] += datetime.now().strftime("%Y%m%d%H%M%S")
with open("requirements.txt") as f:
required_pkgs = f.read().splitlines()
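
One wrinkle behind this one-line change is worth spelling out: PyPI will not accept a second upload of a version string that already exists, so a date-only suffix allows at most one nightly per day, whereas per-commit uploads need a finer-grained suffix. A minimal sketch of the two schemes, assuming a made-up base version string:

```python
from datetime import datetime

base = "0.9.0.dev"  # hypothetical __version__ read from version.py

# Date-only suffix: a second build on the same day produces an identical
# version string, so a per-commit upload to PyPI would be rejected.
daily = base + datetime.strftime(datetime.today(), "%Y%m%d")

# Date-and-time suffix (what the patch switches to): unique per build.
per_commit = base + datetime.now().strftime("%Y%m%d%H%M%S")

print(daily)       # e.g. 0.9.0.dev20240101
print(per_commit)  # e.g. 0.9.0.dev20240101093042
```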
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -52,7 +52,7 @@\n exec(fp.read(), version)\n \n if project_name == TFA_NIGHTLY:\n- version[\"__version__\"] += datetime.strftime(datetime.today(), \"%Y%m%d\")\n+ version[\"__version__\"] += datetime.now().strftime(\"%Y%m%d%H%M%S\")\n \n with open(\"requirements.txt\") as f:\n required_pkgs = f.read().splitlines()\n", "issue": "Proposal: Upload the nightlies at each commit on the master branch\nWe already build the wheels already anyway. It's just missing the push to pypi. Yes/No?\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"TensorFlow Addons.\n\nTensorFlow Addons is a repository of contributions that conform to well-\nestablished API patterns, but implement new functionality not available\nin core TensorFlow. TensorFlow natively supports a large number of\noperators, layers, metrics, losses, and optimizers. However, in a fast\nmoving field like ML, there are many interesting new developments that\ncannot be integrated into core TensorFlow (because their broad\napplicability is not yet clear, or it is mostly used by a smaller subset\nof the community).\n\"\"\"\n\nimport os\nimport sys\n\nfrom datetime import datetime\nfrom setuptools import find_packages\nfrom setuptools import setup\nfrom setuptools.dist import Distribution\nfrom setuptools import Extension\n\nDOCLINES = __doc__.split(\"\\n\")\n\nTFA_NIGHTLY = \"tfa-nightly\"\nTFA_RELEASE = \"tensorflow-addons\"\n\nif \"--nightly\" in sys.argv:\n project_name = TFA_NIGHTLY\n nightly_idx = sys.argv.index(\"--nightly\")\n sys.argv.pop(nightly_idx)\nelse:\n project_name = TFA_RELEASE\n\n# Version\nversion = {}\nbase_dir = os.path.dirname(os.path.abspath(__file__))\nwith open(os.path.join(base_dir, \"tensorflow_addons\", \"version.py\")) as fp:\n exec(fp.read(), version)\n\nif project_name == TFA_NIGHTLY:\n version[\"__version__\"] += datetime.strftime(datetime.today(), \"%Y%m%d\")\n\nwith open(\"requirements.txt\") as f:\n required_pkgs = f.read().splitlines()\n\n# Manylinux2010 requires a patch for platlib\nif (\n sys.platform.startswith(\"linux\")\n and os.environ.get(\"TF_ADDONS_NO_BUILD\", \"0\") == \"0\"\n):\n ext_modules = [Extension(\"_foo\", [\"stub.cc\"])]\nelse:\n ext_modules = []\n\n\nclass BinaryDistribution(Distribution):\n \"\"\"This class is needed in order to create OS specific wheels.\"\"\"\n\n def has_ext_modules(self):\n return True\n\n\nsetup(\n name=project_name,\n version=version[\"__version__\"],\n description=DOCLINES[0],\n long_description=\"\\n\".join(DOCLINES[2:]),\n author=\"Google Inc.\",\n author_email=\"[email protected]\",\n packages=find_packages(),\n ext_modules=ext_modules,\n install_requires=required_pkgs,\n include_package_data=True,\n zip_safe=False,\n distclass=BinaryDistribution,\n classifiers=[\n \"Development Status :: 4 - 
Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n \"Topic :: Software Development :: Libraries\",\n ],\n license=\"Apache 2.0\",\n keywords=\"tensorflow addons machine learning\",\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"TensorFlow Addons.\n\nTensorFlow Addons is a repository of contributions that conform to well-\nestablished API patterns, but implement new functionality not available\nin core TensorFlow. TensorFlow natively supports a large number of\noperators, layers, metrics, losses, and optimizers. However, in a fast\nmoving field like ML, there are many interesting new developments that\ncannot be integrated into core TensorFlow (because their broad\napplicability is not yet clear, or it is mostly used by a smaller subset\nof the community).\n\"\"\"\n\nimport os\nimport sys\n\nfrom datetime import datetime\nfrom setuptools import find_packages\nfrom setuptools import setup\nfrom setuptools.dist import Distribution\nfrom setuptools import Extension\n\nDOCLINES = __doc__.split(\"\\n\")\n\nTFA_NIGHTLY = \"tfa-nightly\"\nTFA_RELEASE = \"tensorflow-addons\"\n\nif \"--nightly\" in sys.argv:\n project_name = TFA_NIGHTLY\n nightly_idx = sys.argv.index(\"--nightly\")\n sys.argv.pop(nightly_idx)\nelse:\n project_name = TFA_RELEASE\n\n# Version\nversion = {}\nbase_dir = os.path.dirname(os.path.abspath(__file__))\nwith open(os.path.join(base_dir, \"tensorflow_addons\", \"version.py\")) as fp:\n exec(fp.read(), version)\n\nif project_name == TFA_NIGHTLY:\n version[\"__version__\"] += datetime.now().strftime(\"%Y%m%d%H%M%S\")\n\nwith open(\"requirements.txt\") as f:\n required_pkgs = f.read().splitlines()\n\n# Manylinux2010 requires a patch for platlib\nif (\n sys.platform.startswith(\"linux\")\n and os.environ.get(\"TF_ADDONS_NO_BUILD\", \"0\") == \"0\"\n):\n ext_modules = [Extension(\"_foo\", [\"stub.cc\"])]\nelse:\n ext_modules = []\n\n\nclass BinaryDistribution(Distribution):\n \"\"\"This class is needed in order to create OS specific wheels.\"\"\"\n\n def has_ext_modules(self):\n return True\n\n\nsetup(\n name=project_name,\n version=version[\"__version__\"],\n description=DOCLINES[0],\n long_description=\"\\n\".join(DOCLINES[2:]),\n author=\"Google Inc.\",\n author_email=\"[email protected]\",\n packages=find_packages(),\n ext_modules=ext_modules,\n install_requires=required_pkgs,\n include_package_data=True,\n zip_safe=False,\n distclass=BinaryDistribution,\n classifiers=[\n \"Development 
Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n \"Topic :: Software Development :: Libraries\",\n ],\n license=\"Apache 2.0\",\n keywords=\"tensorflow addons machine learning\",\n)\n", "path": "setup.py"}]} | 1,293 | 111 |
gh_patches_debug_35842 | rasdani/github-patches | git_diff | dmlc__dgl-5543 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Sparse] Support SparseMatrix element-wise multiplication with different sparsities.
## 🔨Work Item
**IMPORTANT:**
* This template is only for dev team to track project progress. For feature request or bug report, please use the corresponding issue templates.
* DO NOT create a new work item if the purpose is to fix an existing issue or feature request. We will directly use the issue in the project tracker.
Project tracker: https://github.com/orgs/dmlc/projects/2
## Description
<del>
Breakdown into two work items.
1. Support on CPU (#5519).
3. Support on GPU.
</del>
Implemented by concatenation and unique on two COO matrices.
## Depending work items or issues
<!-- what must be done before this -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/dgl/sparse/elementwise_op_sp.py`
Content:
```
1 """DGL elementwise operators for sparse matrix module."""
2 from typing import Union
3
4 import torch
5
6 from .sparse_matrix import diag, SparseMatrix, val_like
7 from .utils import is_scalar, Scalar
8
9
10 def spsp_add(A, B):
11 """Invoke C++ sparse library for addition"""
12 return SparseMatrix(
13 torch.ops.dgl_sparse.spsp_add(A.c_sparse_matrix, B.c_sparse_matrix)
14 )
15
16
17 def sp_add(A: SparseMatrix, B: SparseMatrix) -> SparseMatrix:
18 """Elementwise addition
19
20 Parameters
21 ----------
22 A : SparseMatrix
23 Sparse matrix
24 B : SparseMatrix
25 Sparse matrix
26
27 Returns
28 -------
29 SparseMatrix
30 Sparse matrix
31
32 Examples
33 --------
34
35 >>> indices = torch.tensor([[1, 0, 2], [0, 3, 2]])
36 >>> val = torch.tensor([10, 20, 30])
37 >>> A = dglsp.spmatrix(indices, val, shape=(3, 4))
38 >>> A + A
39 SparseMatrix(indices=tensor([[0, 1, 2],
40 [3, 0, 2]]),
41 values=tensor([40, 20, 60]),
42 shape=(3, 4), nnz=3)
43 """
44 # Python falls back to B.__radd__ then TypeError when NotImplemented is
45 # returned.
46 return spsp_add(A, B) if isinstance(B, SparseMatrix) else NotImplemented
47
48
49 def sp_sub(A: SparseMatrix, B: SparseMatrix) -> SparseMatrix:
50 """Elementwise subtraction
51
52 Parameters
53 ----------
54 A : SparseMatrix
55 Sparse matrix
56 B : SparseMatrix
57 Sparse matrix
58
59 Returns
60 -------
61 SparseMatrix
62 Sparse matrix
63
64 Examples
65 --------
66
67 >>> indices = torch.tensor([[1, 0, 2], [0, 3, 2]])
68 >>> val = torch.tensor([10, 20, 30])
69 >>> val2 = torch.tensor([5, 10, 15])
70 >>> A = dglsp.spmatrix(indices, val, shape=(3, 4))
71 >>> B = dglsp.spmatrix(indices, val2, shape=(3, 4))
72 >>> A - B
73 SparseMatrix(indices=tensor([[0, 1, 2],
74 [3, 0, 2]]),
75 values=tensor([10, 5, 15]),
76 shape=(3, 4), nnz=3)
77 """
78 # Python falls back to B.__rsub__ then TypeError when NotImplemented is
79 # returned.
80 return spsp_add(A, -B) if isinstance(B, SparseMatrix) else NotImplemented
81
82
83 def sp_mul(A: SparseMatrix, B: Union[SparseMatrix, Scalar]) -> SparseMatrix:
84 """Elementwise multiplication
85
86 If :attr:`B` is a sparse matrix, both :attr:`A` and :attr:`B` must be
87 diagonal matrices.
88
89 Parameters
90 ----------
91 A : SparseMatrix
92 First operand
93 B : SparseMatrix or Scalar
94 Second operand
95
96 Returns
97 -------
98 SparseMatrix
99 Result of A * B
100
101 Examples
102 --------
103
104 >>> indices = torch.tensor([[1, 0, 2], [0, 3, 2]])
105 >>> val = torch.tensor([1, 2, 3])
106 >>> A = dglsp.spmatrix(indices, val, shape=(3, 4))
107
108 >>> A * 2
109 SparseMatrix(indices=tensor([[1, 0, 2],
110 [0, 3, 2]]),
111 values=tensor([2, 4, 6]),
112 shape=(3, 4), nnz=3)
113
114 >>> 2 * A
115 SparseMatrix(indices=tensor([[1, 0, 2],
116 [0, 3, 2]]),
117 values=tensor([2, 4, 6]),
118 shape=(3, 4), nnz=3)
119 """
120 if is_scalar(B):
121 return val_like(A, A.val * B)
122 if A.is_diag() and B.is_diag():
123 assert A.shape == B.shape, (
124 f"The shape of diagonal matrix A {A.shape} and B {B.shape} must"
125 f"match for elementwise multiplication."
126 )
127 return diag(A.val * B.val, A.shape)
128 # Python falls back to B.__rmul__(A) then TypeError when NotImplemented is
129 # returned.
130 # So this also handles the case of scalar * SparseMatrix since we set
131 # SparseMatrix.__rmul__ to be the same as SparseMatrix.__mul__.
132 return NotImplemented
133
134
135 def sp_div(A: SparseMatrix, B: Union[SparseMatrix, Scalar]) -> SparseMatrix:
136 """Elementwise division
137
138 If :attr:`B` is a sparse matrix, both :attr:`A` and :attr:`B` must be
139 diagonal matrices.
140
141 Parameters
142 ----------
143 A : SparseMatrix
144 First operand
145 B : SparseMatrix or Scalar
146 Second operand
147
148 Returns
149 -------
150 SparseMatrix
151 Result of A / B
152
153 Examples
154 --------
155 >>> indices = torch.tensor([[1, 0, 2], [0, 3, 2]])
156 >>> val = torch.tensor([1, 2, 3])
157 >>> A = dglsp.spmatrix(indices, val, shape=(3, 4))
158 >>> A / 2
159 SparseMatrix(indices=tensor([[1, 0, 2],
160 [0, 3, 2]]),
161 values=tensor([0.5000, 1.0000, 1.5000]),
162 shape=(3, 4), nnz=3)
163 """
164 if is_scalar(B):
165 return val_like(A, A.val / B)
166 if A.is_diag() and B.is_diag():
167 assert A.shape == B.shape, (
168 f"The shape of diagonal matrix A {A.shape} and B {B.shape} must"
169 f"match for elementwise division."
170 )
171 return diag(A.val / B.val, A.shape)
172 # Python falls back to B.__rtruediv__(A) then TypeError when NotImplemented
173 # is returned.
174 return NotImplemented
175
176
177 def sp_power(A: SparseMatrix, scalar: Scalar) -> SparseMatrix:
178 """Take the power of each nonzero element and return a sparse matrix with
179 the result.
180
181 Parameters
182 ----------
183 A : SparseMatrix
184 Sparse matrix
185 scalar : float or int
186 Exponent
187
188 Returns
189 -------
190 SparseMatrix
191 Sparse matrix
192
193 Examples
194 --------
195 >>> indices = torch.tensor([[1, 0, 2], [0, 3, 2]])
196 >>> val = torch.tensor([10, 20, 30])
197 >>> A = dglsp.spmatrix(indices, val)
198 >>> A ** 2
199 SparseMatrix(indices=tensor([[1, 0, 2],
200 [0, 3, 2]]),
201 values=tensor([100, 400, 900]),
202 shape=(3, 4), nnz=3)
203 """
204 # Python falls back to scalar.__rpow__ then TypeError when NotImplemented
205 # is returned.
206 return val_like(A, A.val**scalar) if is_scalar(scalar) else NotImplemented
207
208
209 SparseMatrix.__add__ = sp_add
210 SparseMatrix.__sub__ = sp_sub
211 SparseMatrix.__mul__ = sp_mul
212 SparseMatrix.__rmul__ = sp_mul
213 SparseMatrix.__truediv__ = sp_div
214 SparseMatrix.__pow__ = sp_power
215
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/dgl/sparse/elementwise_op_sp.py b/python/dgl/sparse/elementwise_op_sp.py
--- a/python/dgl/sparse/elementwise_op_sp.py
+++ b/python/dgl/sparse/elementwise_op_sp.py
@@ -14,6 +14,13 @@
)
+def spsp_mul(A, B):
+ """Invoke C++ sparse library for multiplication"""
+ return SparseMatrix(
+ torch.ops.dgl_sparse.spsp_mul(A.c_sparse_matrix, B.c_sparse_matrix)
+ )
+
+
def sp_add(A: SparseMatrix, B: SparseMatrix) -> SparseMatrix:
"""Elementwise addition
@@ -83,8 +90,8 @@
def sp_mul(A: SparseMatrix, B: Union[SparseMatrix, Scalar]) -> SparseMatrix:
"""Elementwise multiplication
- If :attr:`B` is a sparse matrix, both :attr:`A` and :attr:`B` must be
- diagonal matrices.
+ Note that if both :attr:`A` and :attr:`B` are sparse matrices, both of them
+ need to be diagonal or on CPU.
Parameters
----------
@@ -108,28 +115,27 @@
>>> A * 2
SparseMatrix(indices=tensor([[1, 0, 2],
[0, 3, 2]]),
- values=tensor([2, 4, 6]),
- shape=(3, 4), nnz=3)
+ values=tensor([2, 4, 6]),
+ shape=(3, 4), nnz=3)
>>> 2 * A
SparseMatrix(indices=tensor([[1, 0, 2],
[0, 3, 2]]),
- values=tensor([2, 4, 6]),
- shape=(3, 4), nnz=3)
+ values=tensor([2, 4, 6]),
+ shape=(3, 4), nnz=3)
+
+ >>> indices2 = torch.tensor([[2, 0, 1], [0, 3, 2]])
+ >>> val2 = torch.tensor([3, 2, 1])
+ >>> B = dglsp.spmatrix(indices2, val2, shape=(3, 4))
+ >>> A * B
+ SparseMatrix(indices=tensor([[0],
+ [3]]),
+ values=tensor([4]),
+ shape=(3, 4), nnz=1)
"""
if is_scalar(B):
return val_like(A, A.val * B)
- if A.is_diag() and B.is_diag():
- assert A.shape == B.shape, (
- f"The shape of diagonal matrix A {A.shape} and B {B.shape} must"
- f"match for elementwise multiplication."
- )
- return diag(A.val * B.val, A.shape)
- # Python falls back to B.__rmul__(A) then TypeError when NotImplemented is
- # returned.
- # So this also handles the case of scalar * SparseMatrix since we set
- # SparseMatrix.__rmul__ to be the same as SparseMatrix.__mul__.
- return NotImplemented
+ return spsp_mul(A, B)
def sp_div(A: SparseMatrix, B: Union[SparseMatrix, Scalar]) -> SparseMatrix:
| {"golden_diff": "diff --git a/python/dgl/sparse/elementwise_op_sp.py b/python/dgl/sparse/elementwise_op_sp.py\n--- a/python/dgl/sparse/elementwise_op_sp.py\n+++ b/python/dgl/sparse/elementwise_op_sp.py\n@@ -14,6 +14,13 @@\n )\n \n \n+def spsp_mul(A, B):\n+ \"\"\"Invoke C++ sparse library for multiplication\"\"\"\n+ return SparseMatrix(\n+ torch.ops.dgl_sparse.spsp_mul(A.c_sparse_matrix, B.c_sparse_matrix)\n+ )\n+\n+\n def sp_add(A: SparseMatrix, B: SparseMatrix) -> SparseMatrix:\n \"\"\"Elementwise addition\n \n@@ -83,8 +90,8 @@\n def sp_mul(A: SparseMatrix, B: Union[SparseMatrix, Scalar]) -> SparseMatrix:\n \"\"\"Elementwise multiplication\n \n- If :attr:`B` is a sparse matrix, both :attr:`A` and :attr:`B` must be\n- diagonal matrices.\n+ Note that if both :attr:`A` and :attr:`B` are sparse matrices, both of them\n+ need to be diagonal or on CPU.\n \n Parameters\n ----------\n@@ -108,28 +115,27 @@\n >>> A * 2\n SparseMatrix(indices=tensor([[1, 0, 2],\n [0, 3, 2]]),\n- values=tensor([2, 4, 6]),\n- shape=(3, 4), nnz=3)\n+ values=tensor([2, 4, 6]),\n+ shape=(3, 4), nnz=3)\n \n >>> 2 * A\n SparseMatrix(indices=tensor([[1, 0, 2],\n [0, 3, 2]]),\n- values=tensor([2, 4, 6]),\n- shape=(3, 4), nnz=3)\n+ values=tensor([2, 4, 6]),\n+ shape=(3, 4), nnz=3)\n+\n+ >>> indices2 = torch.tensor([[2, 0, 1], [0, 3, 2]])\n+ >>> val2 = torch.tensor([3, 2, 1])\n+ >>> B = dglsp.spmatrix(indices2, val2, shape=(3, 4))\n+ >>> A * B\n+ SparseMatrix(indices=tensor([[0],\n+ [3]]),\n+ values=tensor([4]),\n+ shape=(3, 4), nnz=1)\n \"\"\"\n if is_scalar(B):\n return val_like(A, A.val * B)\n- if A.is_diag() and B.is_diag():\n- assert A.shape == B.shape, (\n- f\"The shape of diagonal matrix A {A.shape} and B {B.shape} must\"\n- f\"match for elementwise multiplication.\"\n- )\n- return diag(A.val * B.val, A.shape)\n- # Python falls back to B.__rmul__(A) then TypeError when NotImplemented is\n- # returned.\n- # So this also handles the case of scalar * SparseMatrix since we set\n- # SparseMatrix.__rmul__ to be the same as SparseMatrix.__mul__.\n- return NotImplemented\n+ return spsp_mul(A, B)\n \n \n def sp_div(A: SparseMatrix, B: Union[SparseMatrix, Scalar]) -> SparseMatrix:\n", "issue": "[Sparse] Support SparseMatrix element-wise multiplication with different sparsities.\n## \ud83d\udd28Work Item\r\n\r\n**IMPORTANT:**\r\n* This template is only for dev team to track project progress. For feature request or bug report, please use the corresponding issue templates.\r\n* DO NOT create a new work item if the purpose is to fix an existing issue or feature request. We will directly use the issue in the project tracker.\r\n\r\nProject tracker: https://github.com/orgs/dmlc/projects/2\r\n\r\n## Description\r\n\r\n<del>\r\nBreakdown into two work items.\r\n\r\n1. Support on CPU (#5519).\r\n\r\n3. 
Support on GPU.\r\n\r\n</del>\r\n\r\nImplemented by concatenation and unique on two COO matrices.\r\n## Depending work items or issues\r\n\r\n<!-- what must be done before this -->\r\n\n", "before_files": [{"content": "\"\"\"DGL elementwise operators for sparse matrix module.\"\"\"\nfrom typing import Union\n\nimport torch\n\nfrom .sparse_matrix import diag, SparseMatrix, val_like\nfrom .utils import is_scalar, Scalar\n\n\ndef spsp_add(A, B):\n \"\"\"Invoke C++ sparse library for addition\"\"\"\n return SparseMatrix(\n torch.ops.dgl_sparse.spsp_add(A.c_sparse_matrix, B.c_sparse_matrix)\n )\n\n\ndef sp_add(A: SparseMatrix, B: SparseMatrix) -> SparseMatrix:\n \"\"\"Elementwise addition\n\n Parameters\n ----------\n A : SparseMatrix\n Sparse matrix\n B : SparseMatrix\n Sparse matrix\n\n Returns\n -------\n SparseMatrix\n Sparse matrix\n\n Examples\n --------\n\n >>> indices = torch.tensor([[1, 0, 2], [0, 3, 2]])\n >>> val = torch.tensor([10, 20, 30])\n >>> A = dglsp.spmatrix(indices, val, shape=(3, 4))\n >>> A + A\n SparseMatrix(indices=tensor([[0, 1, 2],\n [3, 0, 2]]),\n values=tensor([40, 20, 60]),\n shape=(3, 4), nnz=3)\n \"\"\"\n # Python falls back to B.__radd__ then TypeError when NotImplemented is\n # returned.\n return spsp_add(A, B) if isinstance(B, SparseMatrix) else NotImplemented\n\n\ndef sp_sub(A: SparseMatrix, B: SparseMatrix) -> SparseMatrix:\n \"\"\"Elementwise subtraction\n\n Parameters\n ----------\n A : SparseMatrix\n Sparse matrix\n B : SparseMatrix\n Sparse matrix\n\n Returns\n -------\n SparseMatrix\n Sparse matrix\n\n Examples\n --------\n\n >>> indices = torch.tensor([[1, 0, 2], [0, 3, 2]])\n >>> val = torch.tensor([10, 20, 30])\n >>> val2 = torch.tensor([5, 10, 15])\n >>> A = dglsp.spmatrix(indices, val, shape=(3, 4))\n >>> B = dglsp.spmatrix(indices, val2, shape=(3, 4))\n >>> A - B\n SparseMatrix(indices=tensor([[0, 1, 2],\n [3, 0, 2]]),\n values=tensor([10, 5, 15]),\n shape=(3, 4), nnz=3)\n \"\"\"\n # Python falls back to B.__rsub__ then TypeError when NotImplemented is\n # returned.\n return spsp_add(A, -B) if isinstance(B, SparseMatrix) else NotImplemented\n\n\ndef sp_mul(A: SparseMatrix, B: Union[SparseMatrix, Scalar]) -> SparseMatrix:\n \"\"\"Elementwise multiplication\n\n If :attr:`B` is a sparse matrix, both :attr:`A` and :attr:`B` must be\n diagonal matrices.\n\n Parameters\n ----------\n A : SparseMatrix\n First operand\n B : SparseMatrix or Scalar\n Second operand\n\n Returns\n -------\n SparseMatrix\n Result of A * B\n\n Examples\n --------\n\n >>> indices = torch.tensor([[1, 0, 2], [0, 3, 2]])\n >>> val = torch.tensor([1, 2, 3])\n >>> A = dglsp.spmatrix(indices, val, shape=(3, 4))\n\n >>> A * 2\n SparseMatrix(indices=tensor([[1, 0, 2],\n [0, 3, 2]]),\n values=tensor([2, 4, 6]),\n shape=(3, 4), nnz=3)\n\n >>> 2 * A\n SparseMatrix(indices=tensor([[1, 0, 2],\n [0, 3, 2]]),\n values=tensor([2, 4, 6]),\n shape=(3, 4), nnz=3)\n \"\"\"\n if is_scalar(B):\n return val_like(A, A.val * B)\n if A.is_diag() and B.is_diag():\n assert A.shape == B.shape, (\n f\"The shape of diagonal matrix A {A.shape} and B {B.shape} must\"\n f\"match for elementwise multiplication.\"\n )\n return diag(A.val * B.val, A.shape)\n # Python falls back to B.__rmul__(A) then TypeError when NotImplemented is\n # returned.\n # So this also handles the case of scalar * SparseMatrix since we set\n # SparseMatrix.__rmul__ to be the same as SparseMatrix.__mul__.\n return NotImplemented\n\n\ndef sp_div(A: SparseMatrix, B: Union[SparseMatrix, Scalar]) -> SparseMatrix:\n \"\"\"Elementwise 
division\n\n If :attr:`B` is a sparse matrix, both :attr:`A` and :attr:`B` must be\n diagonal matrices.\n\n Parameters\n ----------\n A : SparseMatrix\n First operand\n B : SparseMatrix or Scalar\n Second operand\n\n Returns\n -------\n SparseMatrix\n Result of A / B\n\n Examples\n --------\n >>> indices = torch.tensor([[1, 0, 2], [0, 3, 2]])\n >>> val = torch.tensor([1, 2, 3])\n >>> A = dglsp.spmatrix(indices, val, shape=(3, 4))\n >>> A / 2\n SparseMatrix(indices=tensor([[1, 0, 2],\n [0, 3, 2]]),\n values=tensor([0.5000, 1.0000, 1.5000]),\n shape=(3, 4), nnz=3)\n \"\"\"\n if is_scalar(B):\n return val_like(A, A.val / B)\n if A.is_diag() and B.is_diag():\n assert A.shape == B.shape, (\n f\"The shape of diagonal matrix A {A.shape} and B {B.shape} must\"\n f\"match for elementwise division.\"\n )\n return diag(A.val / B.val, A.shape)\n # Python falls back to B.__rtruediv__(A) then TypeError when NotImplemented\n # is returned.\n return NotImplemented\n\n\ndef sp_power(A: SparseMatrix, scalar: Scalar) -> SparseMatrix:\n \"\"\"Take the power of each nonzero element and return a sparse matrix with\n the result.\n\n Parameters\n ----------\n A : SparseMatrix\n Sparse matrix\n scalar : float or int\n Exponent\n\n Returns\n -------\n SparseMatrix\n Sparse matrix\n\n Examples\n --------\n >>> indices = torch.tensor([[1, 0, 2], [0, 3, 2]])\n >>> val = torch.tensor([10, 20, 30])\n >>> A = dglsp.spmatrix(indices, val)\n >>> A ** 2\n SparseMatrix(indices=tensor([[1, 0, 2],\n [0, 3, 2]]),\n values=tensor([100, 400, 900]),\n shape=(3, 4), nnz=3)\n \"\"\"\n # Python falls back to scalar.__rpow__ then TypeError when NotImplemented\n # is returned.\n return val_like(A, A.val**scalar) if is_scalar(scalar) else NotImplemented\n\n\nSparseMatrix.__add__ = sp_add\nSparseMatrix.__sub__ = sp_sub\nSparseMatrix.__mul__ = sp_mul\nSparseMatrix.__rmul__ = sp_mul\nSparseMatrix.__truediv__ = sp_div\nSparseMatrix.__pow__ = sp_power\n", "path": "python/dgl/sparse/elementwise_op_sp.py"}], "after_files": [{"content": "\"\"\"DGL elementwise operators for sparse matrix module.\"\"\"\nfrom typing import Union\n\nimport torch\n\nfrom .sparse_matrix import diag, SparseMatrix, val_like\nfrom .utils import is_scalar, Scalar\n\n\ndef spsp_add(A, B):\n \"\"\"Invoke C++ sparse library for addition\"\"\"\n return SparseMatrix(\n torch.ops.dgl_sparse.spsp_add(A.c_sparse_matrix, B.c_sparse_matrix)\n )\n\n\ndef spsp_mul(A, B):\n \"\"\"Invoke C++ sparse library for multiplication\"\"\"\n return SparseMatrix(\n torch.ops.dgl_sparse.spsp_mul(A.c_sparse_matrix, B.c_sparse_matrix)\n )\n\n\ndef sp_add(A: SparseMatrix, B: SparseMatrix) -> SparseMatrix:\n \"\"\"Elementwise addition\n\n Parameters\n ----------\n A : SparseMatrix\n Sparse matrix\n B : SparseMatrix\n Sparse matrix\n\n Returns\n -------\n SparseMatrix\n Sparse matrix\n\n Examples\n --------\n\n >>> indices = torch.tensor([[1, 0, 2], [0, 3, 2]])\n >>> val = torch.tensor([10, 20, 30])\n >>> A = dglsp.spmatrix(indices, val, shape=(3, 4))\n >>> A + A\n SparseMatrix(indices=tensor([[0, 1, 2],\n [3, 0, 2]]),\n values=tensor([40, 20, 60]),\n shape=(3, 4), nnz=3)\n \"\"\"\n # Python falls back to B.__radd__ then TypeError when NotImplemented is\n # returned.\n return spsp_add(A, B) if isinstance(B, SparseMatrix) else NotImplemented\n\n\ndef sp_sub(A: SparseMatrix, B: SparseMatrix) -> SparseMatrix:\n \"\"\"Elementwise subtraction\n\n Parameters\n ----------\n A : SparseMatrix\n Sparse matrix\n B : SparseMatrix\n Sparse matrix\n\n Returns\n -------\n SparseMatrix\n Sparse matrix\n\n Examples\n 
--------\n\n >>> indices = torch.tensor([[1, 0, 2], [0, 3, 2]])\n >>> val = torch.tensor([10, 20, 30])\n >>> val2 = torch.tensor([5, 10, 15])\n >>> A = dglsp.spmatrix(indices, val, shape=(3, 4))\n >>> B = dglsp.spmatrix(indices, val2, shape=(3, 4))\n >>> A - B\n SparseMatrix(indices=tensor([[0, 1, 2],\n [3, 0, 2]]),\n values=tensor([10, 5, 15]),\n shape=(3, 4), nnz=3)\n \"\"\"\n # Python falls back to B.__rsub__ then TypeError when NotImplemented is\n # returned.\n return spsp_add(A, -B) if isinstance(B, SparseMatrix) else NotImplemented\n\n\ndef sp_mul(A: SparseMatrix, B: Union[SparseMatrix, Scalar]) -> SparseMatrix:\n \"\"\"Elementwise multiplication\n\n Note that if both :attr:`A` and :attr:`B` are sparse matrices, both of them\n need to be diagonal or on CPU.\n\n Parameters\n ----------\n A : SparseMatrix\n First operand\n B : SparseMatrix or Scalar\n Second operand\n\n Returns\n -------\n SparseMatrix\n Result of A * B\n\n Examples\n --------\n\n >>> indices = torch.tensor([[1, 0, 2], [0, 3, 2]])\n >>> val = torch.tensor([1, 2, 3])\n >>> A = dglsp.spmatrix(indices, val, shape=(3, 4))\n\n >>> A * 2\n SparseMatrix(indices=tensor([[1, 0, 2],\n [0, 3, 2]]),\n values=tensor([2, 4, 6]),\n shape=(3, 4), nnz=3)\n\n >>> 2 * A\n SparseMatrix(indices=tensor([[1, 0, 2],\n [0, 3, 2]]),\n values=tensor([2, 4, 6]),\n shape=(3, 4), nnz=3)\n\n >>> indices2 = torch.tensor([[2, 0, 1], [0, 3, 2]])\n >>> val2 = torch.tensor([3, 2, 1])\n >>> B = dglsp.spmatrix(indices2, val2, shape=(3, 4))\n >>> A * B\n SparseMatrix(indices=tensor([[0],\n [3]]),\n values=tensor([4]),\n shape=(3, 4), nnz=1)\n \"\"\"\n if is_scalar(B):\n return val_like(A, A.val * B)\n return spsp_mul(A, B)\n\n\ndef sp_div(A: SparseMatrix, B: Union[SparseMatrix, Scalar]) -> SparseMatrix:\n \"\"\"Elementwise division\n\n If :attr:`B` is a sparse matrix, both :attr:`A` and :attr:`B` must be\n diagonal matrices.\n\n Parameters\n ----------\n A : SparseMatrix\n First operand\n B : SparseMatrix or Scalar\n Second operand\n\n Returns\n -------\n SparseMatrix\n Result of A / B\n\n Examples\n --------\n >>> indices = torch.tensor([[1, 0, 2], [0, 3, 2]])\n >>> val = torch.tensor([1, 2, 3])\n >>> A = dglsp.spmatrix(indices, val, shape=(3, 4))\n >>> A / 2\n SparseMatrix(indices=tensor([[1, 0, 2],\n [0, 3, 2]]),\n values=tensor([0.5000, 1.0000, 1.5000]),\n shape=(3, 4), nnz=3)\n \"\"\"\n if is_scalar(B):\n return val_like(A, A.val / B)\n if A.is_diag() and B.is_diag():\n assert A.shape == B.shape, (\n f\"The shape of diagonal matrix A {A.shape} and B {B.shape} must\"\n f\"match for elementwise division.\"\n )\n return diag(A.val / B.val, A.shape)\n # Python falls back to B.__rtruediv__(A) then TypeError when NotImplemented\n # is returned.\n return NotImplemented\n\n\ndef sp_power(A: SparseMatrix, scalar: Scalar) -> SparseMatrix:\n \"\"\"Take the power of each nonzero element and return a sparse matrix with\n the result.\n\n Parameters\n ----------\n A : SparseMatrix\n Sparse matrix\n scalar : float or int\n Exponent\n\n Returns\n -------\n SparseMatrix\n Sparse matrix\n\n Examples\n --------\n >>> indices = torch.tensor([[1, 0, 2], [0, 3, 2]])\n >>> val = torch.tensor([10, 20, 30])\n >>> A = dglsp.spmatrix(indices, val)\n >>> A ** 2\n SparseMatrix(indices=tensor([[1, 0, 2],\n [0, 3, 2]]),\n values=tensor([100, 400, 900]),\n shape=(3, 4), nnz=3)\n \"\"\"\n # Python falls back to scalar.__rpow__ then TypeError when NotImplemented\n # is returned.\n return val_like(A, A.val**scalar) if is_scalar(scalar) else NotImplemented\n\n\nSparseMatrix.__add__ = 
sp_add\nSparseMatrix.__sub__ = sp_sub\nSparseMatrix.__mul__ = sp_mul\nSparseMatrix.__rmul__ = sp_mul\nSparseMatrix.__truediv__ = sp_div\nSparseMatrix.__pow__ = sp_power\n", "path": "python/dgl/sparse/elementwise_op_sp.py"}]} | 2,674 | 756 |
gh_patches_debug_34411 | rasdani/github-patches | git_diff | ESMCI__cime-3079 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Branch a single instance case from a multi-instance case
When a multi-instance CAM forecast fails, I want to use one of the failed instances
as an exact restart in a single instance case, to speed debugging.
Building it as hybrid or startup won't work because for CAM
those are not exact restarts; they use the CAM initial file
I first tried building a single instance branch run with the multi-instance
as the refcase. That branched the entire multi-instance run.
Then I tried copying all of the restart files for 1 instance into a new directory,
"Restarts", with instance numbers removed from the file names.
I built the single instance case with Restarts as the RUN_REFDIR.
It built, but when I tried to run it, it complained about a mozart file
already existing in the run directory. I believe that an earlier stage
of the submit process created that file, so I'm stumped about how
to fix this problem. I've played with GET_REFCASE and CONTINUE_RUN
and other things, but have not found the magic combination.
Am I missing something, or is this a new capability that hasn't been implemented?
$CASEROOT = /gpfs/fs1/work/raeder/Exp/Debug_lwdn4
$RUNDIR = /gpfs/fs1/scratch/raeder/Debug_lwdn4/run
$CESMROOT = /glade/work/raeder/Models/cesm2_2_maint-5.6
Thanks
Kevin
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/lib/CIME/Servers/wget.py`
Content:
```
1 """
2 WGET Server class. Interact with a server using WGET protocol
3 """
4 # pylint: disable=super-init-not-called
5 from CIME.XML.standard_module_setup import *
6 from CIME.Servers.generic_server import GenericServer
7 logger = logging.getLogger(__name__)
8
9 class WGET(GenericServer):
10 def __init__(self, address, user='', passwd=''):
11 self._args = ''
12 if user:
13 self._args += "--user {} ".format(user)
14 if passwd:
15 self._args += "--password {} ".format(passwd)
16 self._server_loc = address
17
18 err = run_cmd("wget {} --spider {}".format(self._args, address))[0]
19 expect(err == 0,"Could not connect to repo '{0}'\nThis is most likely either a proxy, or network issue .")
20
21
22 def fileexists(self, rel_path):
23 full_url = os.path.join(self._server_loc, rel_path)
24 stat, out, err = run_cmd("wget {} --spider {}".format(self._args, full_url))
25 if (stat != 0):
26 logging.warning("FAIL: Repo '{}' does not have file '{}'\nReason:{}\n{}\n".format(self._server_loc, full_url, out.encode('utf-8'), err.encode('utf-8')))
27 return False
28 return True
29
30 def getfile(self, rel_path, full_path):
31 full_url = os.path.join(self._server_loc, rel_path)
32 stat, output, errput = \
33 run_cmd("wget {} {} -nc --output-document {}".format(self._args, full_url, full_path))
34 if (stat != 0):
35 logging.warning("wget failed with output: {} and errput {}\n".format(output, errput))
36 # wget puts an empty file if it fails.
37 try:
38 os.remove(full_path)
39 except OSError:
40 pass
41 return False
42 else:
43 logging.info("SUCCESS\n")
44 return True
45
46 def getdirectory(self, rel_path, full_path):
47 full_url = os.path.join(self._server_loc, rel_path)
48 stat, output, errput = \
49 run_cmd("wget {} {} -r -N --no-directories ".format(self._args, full_url+os.sep), from_dir=full_path)
50 logger.debug(output)
51 logger.debug(errput)
52 if (stat != 0):
53 logging.warning("wget failed with output: {} and errput {}\n".format(output, errput))
54 # wget puts an empty file if it fails.
55 try:
56 os.remove(full_path)
57 except OSError:
58 pass
59 return False
60 else:
61 logging.info("SUCCESS\n")
62 return True
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/lib/CIME/Servers/wget.py b/scripts/lib/CIME/Servers/wget.py
--- a/scripts/lib/CIME/Servers/wget.py
+++ b/scripts/lib/CIME/Servers/wget.py
@@ -15,9 +15,9 @@
self._args += "--password {} ".format(passwd)
self._server_loc = address
- err = run_cmd("wget {} --spider {}".format(self._args, address))[0]
- expect(err == 0,"Could not connect to repo '{0}'\nThis is most likely either a proxy, or network issue .")
-
+ cmd = "wget {} --no-check-certificate --spider {}".format(self._args, address)
+ err, output, _ = run_cmd(cmd, combine_output=True)
+ expect(err == 0,"Could not connect to repo via '{}'\nThis is most likely either a proxy, or network issue.\nOutput:\n{}".format(cmd, output.encode('utf-8')))
def fileexists(self, rel_path):
full_url = os.path.join(self._server_loc, rel_path)
@@ -30,9 +30,9 @@
def getfile(self, rel_path, full_path):
full_url = os.path.join(self._server_loc, rel_path)
stat, output, errput = \
- run_cmd("wget {} {} -nc --output-document {}".format(self._args, full_url, full_path))
+ run_cmd("wget {} {} -nc --no-check-certificate --output-document {}".format(self._args, full_url, full_path))
if (stat != 0):
- logging.warning("wget failed with output: {} and errput {}\n".format(output, errput))
+ logging.warning("wget failed with output: {} and errput {}\n".format(output.encode('utf-8'), errput.encode('utf-8')))
# wget puts an empty file if it fails.
try:
os.remove(full_path)
@@ -46,7 +46,7 @@
def getdirectory(self, rel_path, full_path):
full_url = os.path.join(self._server_loc, rel_path)
stat, output, errput = \
- run_cmd("wget {} {} -r -N --no-directories ".format(self._args, full_url+os.sep), from_dir=full_path)
+ run_cmd("wget {} {} -r -N --no-check-certificate --no-directories ".format(self._args, full_url+os.sep), from_dir=full_path)
logger.debug(output)
logger.debug(errput)
if (stat != 0):
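
Since the same `--no-check-certificate` flag has to be threaded through three separate wget invocations, one way to keep the call sites consistent is to compose the command in a single helper. A rough sketch of that idea — pure string composition, with the helper name and example URLs invented for illustration:

```python
def wget_cmd(common_args, extra):
    """Compose a wget command line with certificate checking disabled,
    mirroring the flag the patch threads through every call site."""
    parts = ["wget", common_args, "--no-check-certificate", extra]
    return " ".join(p for p in parts if p)

# The three call sites in WGET would then look roughly like:
print(wget_cmd("--user someone", "--spider https://example.org/inputdata/"))
print(wget_cmd("", "https://example.org/inputdata/f.nc -nc --output-document ./f.nc"))
print(wget_cmd("", "https://example.org/inputdata/dir/ -r -N --no-directories"))
```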
| {"golden_diff": "diff --git a/scripts/lib/CIME/Servers/wget.py b/scripts/lib/CIME/Servers/wget.py\n--- a/scripts/lib/CIME/Servers/wget.py\n+++ b/scripts/lib/CIME/Servers/wget.py\n@@ -15,9 +15,9 @@\n self._args += \"--password {} \".format(passwd)\n self._server_loc = address\n \n- err = run_cmd(\"wget {} --spider {}\".format(self._args, address))[0]\n- expect(err == 0,\"Could not connect to repo '{0}'\\nThis is most likely either a proxy, or network issue .\")\n-\n+ cmd = \"wget {} --no-check-certificate --spider {}\".format(self._args, address)\n+ err, output, _ = run_cmd(cmd, combine_output=True)\n+ expect(err == 0,\"Could not connect to repo via '{}'\\nThis is most likely either a proxy, or network issue.\\nOutput:\\n{}\".format(cmd, output.encode('utf-8')))\n \n def fileexists(self, rel_path):\n full_url = os.path.join(self._server_loc, rel_path)\n@@ -30,9 +30,9 @@\n def getfile(self, rel_path, full_path):\n full_url = os.path.join(self._server_loc, rel_path)\n stat, output, errput = \\\n- run_cmd(\"wget {} {} -nc --output-document {}\".format(self._args, full_url, full_path))\n+ run_cmd(\"wget {} {} -nc --no-check-certificate --output-document {}\".format(self._args, full_url, full_path))\n if (stat != 0):\n- logging.warning(\"wget failed with output: {} and errput {}\\n\".format(output, errput))\n+ logging.warning(\"wget failed with output: {} and errput {}\\n\".format(output.encode('utf-8'), errput.encode('utf-8')))\n # wget puts an empty file if it fails.\n try:\n os.remove(full_path)\n@@ -46,7 +46,7 @@\n def getdirectory(self, rel_path, full_path):\n full_url = os.path.join(self._server_loc, rel_path)\n stat, output, errput = \\\n- run_cmd(\"wget {} {} -r -N --no-directories \".format(self._args, full_url+os.sep), from_dir=full_path)\n+ run_cmd(\"wget {} {} -r -N --no-check-certificate --no-directories \".format(self._args, full_url+os.sep), from_dir=full_path)\n logger.debug(output)\n logger.debug(errput)\n if (stat != 0):\n", "issue": "Branch a single instance case from a multi-instance case\nWhen a multi-instance CAM forecast fails, I want to use one of the failed instances\r\nas an exact restart in a single instance case, to speed debugging.\r\nBuilding it as hybrid or startup won't work because for CAM \r\nthose are not exact restarts; they use the CAM initial file\r\nI first tried building a single instance branch run with the multi-instance\r\nas the refcase. That branched the entire multi-instance run.\r\n\r\nThen I tried copying all of the restart files for 1 instance into a new directory, \r\n\"Restarts\", with instance numbers removed from the file names.\r\nI built the single instance case with Restarts as the RUN_REFDIR.\r\nIt built, but when I tried to run it it complained about a mozart file\r\nalready existing in the run directory. I believe that an earlier stage\r\nof the submit process created that file, so I'm stumped about how\r\nto fix this problem. I've played with GET_REFCASE and CONTINUE_RUN\r\nand other things, but have not found the magic combination.\r\nAm I missing something, or is this a new capability that hasn't been implemented?\r\n\r\n$CASEROOT = /gpfs/fs1/work/raeder/Exp/Debug_lwdn4\r\n$RUNDIR = /gpfs/fs1/scratch/raeder/Debug_lwdn4/run\r\n$CESMROOT = /glade/work/raeder/Models/cesm2_2_maint-5.6\r\n\r\nThanks\r\nKevin\r\n\n", "before_files": [{"content": "\"\"\"\nWGET Server class. 
Interact with a server using WGET protocol\n\"\"\"\n# pylint: disable=super-init-not-called\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.Servers.generic_server import GenericServer\nlogger = logging.getLogger(__name__)\n\nclass WGET(GenericServer):\n def __init__(self, address, user='', passwd=''):\n self._args = ''\n if user:\n self._args += \"--user {} \".format(user)\n if passwd:\n self._args += \"--password {} \".format(passwd)\n self._server_loc = address\n\n err = run_cmd(\"wget {} --spider {}\".format(self._args, address))[0]\n expect(err == 0,\"Could not connect to repo '{0}'\\nThis is most likely either a proxy, or network issue .\")\n\n\n def fileexists(self, rel_path):\n full_url = os.path.join(self._server_loc, rel_path)\n stat, out, err = run_cmd(\"wget {} --spider {}\".format(self._args, full_url))\n if (stat != 0):\n logging.warning(\"FAIL: Repo '{}' does not have file '{}'\\nReason:{}\\n{}\\n\".format(self._server_loc, full_url, out.encode('utf-8'), err.encode('utf-8')))\n return False\n return True\n\n def getfile(self, rel_path, full_path):\n full_url = os.path.join(self._server_loc, rel_path)\n stat, output, errput = \\\n run_cmd(\"wget {} {} -nc --output-document {}\".format(self._args, full_url, full_path))\n if (stat != 0):\n logging.warning(\"wget failed with output: {} and errput {}\\n\".format(output, errput))\n # wget puts an empty file if it fails.\n try:\n os.remove(full_path)\n except OSError:\n pass\n return False\n else:\n logging.info(\"SUCCESS\\n\")\n return True\n\n def getdirectory(self, rel_path, full_path):\n full_url = os.path.join(self._server_loc, rel_path)\n stat, output, errput = \\\n run_cmd(\"wget {} {} -r -N --no-directories \".format(self._args, full_url+os.sep), from_dir=full_path)\n logger.debug(output)\n logger.debug(errput)\n if (stat != 0):\n logging.warning(\"wget failed with output: {} and errput {}\\n\".format(output, errput))\n # wget puts an empty file if it fails.\n try:\n os.remove(full_path)\n except OSError:\n pass\n return False\n else:\n logging.info(\"SUCCESS\\n\")\n return True\n", "path": "scripts/lib/CIME/Servers/wget.py"}], "after_files": [{"content": "\"\"\"\nWGET Server class. 
Interact with a server using WGET protocol\n\"\"\"\n# pylint: disable=super-init-not-called\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.Servers.generic_server import GenericServer\nlogger = logging.getLogger(__name__)\n\nclass WGET(GenericServer):\n def __init__(self, address, user='', passwd=''):\n self._args = ''\n if user:\n self._args += \"--user {} \".format(user)\n if passwd:\n self._args += \"--password {} \".format(passwd)\n self._server_loc = address\n\n cmd = \"wget {} --no-check-certificate --spider {}\".format(self._args, address)\n err, output, _ = run_cmd(cmd, combine_output=True)\n expect(err == 0,\"Could not connect to repo via '{}'\\nThis is most likely either a proxy, or network issue.\\nOutput:\\n{}\".format(cmd, output.encode('utf-8')))\n\n def fileexists(self, rel_path):\n full_url = os.path.join(self._server_loc, rel_path)\n stat, out, err = run_cmd(\"wget {} --spider {}\".format(self._args, full_url))\n if (stat != 0):\n logging.warning(\"FAIL: Repo '{}' does not have file '{}'\\nReason:{}\\n{}\\n\".format(self._server_loc, full_url, out.encode('utf-8'), err.encode('utf-8')))\n return False\n return True\n\n def getfile(self, rel_path, full_path):\n full_url = os.path.join(self._server_loc, rel_path)\n stat, output, errput = \\\n run_cmd(\"wget {} {} -nc --no-check-certificate --output-document {}\".format(self._args, full_url, full_path))\n if (stat != 0):\n logging.warning(\"wget failed with output: {} and errput {}\\n\".format(output.encode('utf-8'), errput.encode('utf-8')))\n # wget puts an empty file if it fails.\n try:\n os.remove(full_path)\n except OSError:\n pass\n return False\n else:\n logging.info(\"SUCCESS\\n\")\n return True\n\n def getdirectory(self, rel_path, full_path):\n full_url = os.path.join(self._server_loc, rel_path)\n stat, output, errput = \\\n run_cmd(\"wget {} {} -r -N --no-check-certificate --no-directories \".format(self._args, full_url+os.sep), from_dir=full_path)\n logger.debug(output)\n logger.debug(errput)\n if (stat != 0):\n logging.warning(\"wget failed with output: {} and errput {}\\n\".format(output, errput))\n # wget puts an empty file if it fails.\n try:\n os.remove(full_path)\n except OSError:\n pass\n return False\n else:\n logging.info(\"SUCCESS\\n\")\n return True\n", "path": "scripts/lib/CIME/Servers/wget.py"}]} | 1,280 | 572 |
gh_patches_debug_14006 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3341 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider northern_california_breweries is broken
During the global build at 2021-06-23-14-42-18, spider **northern_california_breweries** failed with **0 features** and **1 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/logs/northern_california_breweries.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/output/northern_california_breweries.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/output/northern_california_breweries.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/northern_california_breweries.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import scrapy
3 from locations.items import GeojsonPointItem
4 import json
5 import re
6
7 class NorthernCaliforniaBreweriesSpider(scrapy.Spider):
8 name = "northern_california_breweries"
9 allowed_domains = ["projects.sfchronicle.com"]
10 start_urls = (
11 'http://projects.sfchronicle.com/2017/brewery-map/',
12 )
13
14 def parse(self, response):
15 beerData = response.xpath("//*[text()[contains(.,'beerData')]]").extract_first()
16 matches = re.search(r"var beerData = (\[(.*)\])", beerData)
17 jsonData = matches.group(0).replace("var beerData = ","")
18 breweryList = json.loads(jsonData)
19
20 for item in breweryList:
21 yield GeojsonPointItem(
22 ref=item.get('Brewery'),
23 lat=float(item.get('Latitude')),
24 lon=float(item.get('Longitude')),
25 addr_full=item.get('Address'),
26 city=item.get('City'),
27 state="CA",
28 website=item.get('Website'),
29 )
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/northern_california_breweries.py b/locations/spiders/northern_california_breweries.py
--- a/locations/spiders/northern_california_breweries.py
+++ b/locations/spiders/northern_california_breweries.py
@@ -18,10 +18,19 @@
breweryList = json.loads(jsonData)
for item in breweryList:
+ latitude = None
+ longitude = None
+
+ if item.get('Latitude') is not None:
+ latitude = float(item.get('Latitude'))
+
+ if item.get('Longitude') is not None:
+ longitude = float(item.get('Longitude'))
+
yield GeojsonPointItem(
ref=item.get('Brewery'),
- lat=float(item.get('Latitude')),
- lon=float(item.get('Longitude')),
+ lat=latitude,
+ lon=longitude,
addr_full=item.get('Address'),
city=item.get('City'),
state="CA",
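
The 0-features/1-error signature is consistent with `float(None)` raising a `TypeError` as soon as one brewery record lacks coordinates, which aborts the whole crawl. A standalone illustration of the guarded conversion the patch introduces, using made-up sample records:

```python
def to_float(value):
    """Convert to float, passing None through instead of raising TypeError."""
    return float(value) if value is not None else None

records = [
    {"Brewery": "Example Ale Works", "Latitude": "38.58", "Longitude": "-121.49"},
    {"Brewery": "No-Coords Brewing", "Latitude": None, "Longitude": None},
]

for item in records:
    lat = to_float(item.get("Latitude"))
    lon = to_float(item.get("Longitude"))
    # The second record now yields None/None instead of killing the spider.
    print(item["Brewery"], lat, lon)
```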
| {"golden_diff": "diff --git a/locations/spiders/northern_california_breweries.py b/locations/spiders/northern_california_breweries.py\n--- a/locations/spiders/northern_california_breweries.py\n+++ b/locations/spiders/northern_california_breweries.py\n@@ -18,10 +18,19 @@\n breweryList = json.loads(jsonData)\n \n for item in breweryList:\n+ latitude = None\n+ longitude = None\n+\n+ if item.get('Latitude') is not None:\n+ latitude = float(item.get('Latitude'))\n+\n+ if item.get('Longitude') is not None:\n+ longitude = float(item.get('Longitude'))\n+ \n yield GeojsonPointItem(\n ref=item.get('Brewery'),\n- lat=float(item.get('Latitude')),\n- lon=float(item.get('Longitude')),\n+ lat=latitude,\n+ lon=longitude,\n addr_full=item.get('Address'),\n city=item.get('City'),\n state=\"CA\",\n", "issue": "Spider northern_california_breweries is broken\nDuring the global build at 2021-06-23-14-42-18, spider **northern_california_breweries** failed with **0 features** and **1 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/logs/northern_california_breweries.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/output/northern_california_breweries.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-23-14-42-18/output/northern_california_breweries.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nfrom locations.items import GeojsonPointItem\nimport json\nimport re\n\nclass NorthernCaliforniaBreweriesSpider(scrapy.Spider):\n name = \"northern_california_breweries\"\n allowed_domains = [\"projects.sfchronicle.com\"]\n start_urls = (\n 'http://projects.sfchronicle.com/2017/brewery-map/',\n )\n\n def parse(self, response):\n beerData = response.xpath(\"//*[text()[contains(.,'beerData')]]\").extract_first()\n matches = re.search(r\"var beerData = (\\[(.*)\\])\", beerData)\n jsonData = matches.group(0).replace(\"var beerData = \",\"\")\n breweryList = json.loads(jsonData)\n\n for item in breweryList:\n yield GeojsonPointItem(\n ref=item.get('Brewery'),\n lat=float(item.get('Latitude')),\n lon=float(item.get('Longitude')),\n addr_full=item.get('Address'),\n city=item.get('City'),\n state=\"CA\",\n website=item.get('Website'),\n )\n", "path": "locations/spiders/northern_california_breweries.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nfrom locations.items import GeojsonPointItem\nimport json\nimport re\n\nclass NorthernCaliforniaBreweriesSpider(scrapy.Spider):\n name = \"northern_california_breweries\"\n allowed_domains = [\"projects.sfchronicle.com\"]\n start_urls = (\n 'http://projects.sfchronicle.com/2017/brewery-map/',\n )\n\n def parse(self, response):\n beerData = response.xpath(\"//*[text()[contains(.,'beerData')]]\").extract_first()\n matches = re.search(r\"var beerData = (\\[(.*)\\])\", beerData)\n jsonData = matches.group(0).replace(\"var beerData = \",\"\")\n breweryList = json.loads(jsonData)\n\n for item in breweryList:\n latitude = None\n longitude = None\n\n if item.get('Latitude') is not None:\n latitude = float(item.get('Latitude'))\n\n if item.get('Longitude') is not None:\n longitude = float(item.get('Longitude'))\n \n yield GeojsonPointItem(\n ref=item.get('Brewery'),\n lat=latitude,\n lon=longitude,\n addr_full=item.get('Address'),\n city=item.get('City'),\n state=\"CA\",\n website=item.get('Website'),\n )\n", "path": "locations/spiders/northern_california_breweries.py"}]} | 756 | 221 |
gh_patches_debug_59836 | rasdani/github-patches | git_diff | angr__angr-4105 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Duplicate member docs on subclasses
### Description
e.g. the documentation on SimCC's members is also present on SimCCUsercall. This is a huge problem considering that the API docs page is already fucking gigantic; this is just making it multiplicatively bigger.
### Steps to reproduce the bug
_No response_
### Environment
_No response_
### Additional context
_No response_
--- END ISSUE ---
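
For context before the files: Sphinx autodoc has an `inherited-members` option, and enabling it globally in `autodoc_default_options` makes every subclass page re-document every attribute it inherits — which is exactly why SimCCUsercall repeats all of SimCC's members. A hedged sketch of a conf.py options dict without that flag; any class that genuinely wants inherited members documented can still ask for them in its own `.. autoclass::` directive:

```python
# Sphinx conf.py fragment (sketch): omit the global "inherited-members"
# flag so each subclass only documents the members it defines itself.
autodoc_default_options = {
    "members": True,
    "member-order": "bysource",
    "show-inheritance": True,       # keep the "Bases: ..." line on each class
    "special-members": "__init__",
    "undoc-members": True,
}
```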
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # For the full list of built-in configuration values, see the documentation:
4 # https://www.sphinx-doc.org/en/master/usage/configuration.html
5
6 import datetime
7
8 # -- Project information -----------------------------------------------------
9 # https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
10
11 project = "angr"
12 project_copyright = f"{datetime.datetime.now().year}, The angr Project contributors"
13 author = "The angr Project"
14
15 # -- General configuration ---------------------------------------------------
16 # https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
17
18 extensions = [
19 "sphinx.ext.autodoc",
20 "sphinx.ext.autosectionlabel",
21 "sphinx.ext.autosummary",
22 "sphinx.ext.coverage",
23 "sphinx.ext.intersphinx",
24 "sphinx.ext.napoleon",
25 "sphinx.ext.todo",
26 "sphinx.ext.viewcode",
27 "sphinx_autodoc_typehints",
28 "myst_parser",
29 ]
30
31 templates_path = ["_templates"]
32 exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
33
34 # -- Options for autodoc -----------------------------------------------------
35 # https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html#configuration
36 autoclass_content = "class"
37 autodoc_default_options = {
38 "members": True,
39 "member-order": "bysource",
40 "inherited-members": True,
41 "show-inheritance": True,
42 "special-members": "__init__",
43 "undoc-members": True,
44 }
45 autodoc_inherit_docstrings = True
46 autodoc_typehints = "both"
47
48 # -- Options for coverage ----------------------------------------------------
49 # https://www.sphinx-doc.org/en/master/usage/extensions/coverage.html
50 coverage_write_headline = False
51
52 coverage_ignore_pyobjects = [
53 "angr.analyses.decompiler.structured_codegen.c.StructuredCodeGenerator", # Alias to CStructuredCodeGenerator
54 "angr.sim_type.SimTypeFixedSizeArray", # Alias to SimTypeArray
55 ]
56
57 # -- Options for intersphinx -------------------------------------------------
58 # https://www.sphinx-doc.org/en/master/usage/extensions/intersphinx.html
59 intersphinx_mapping = {
60 "python": ("https://docs.python.org/3", None),
61 "ailment": ("https://docs.angr.io/projects/ailment/en/latest/", None),
62 "archinfo": ("https://docs.angr.io/projects/archinfo/en/latest/", None),
63 "claripy": ("https://docs.angr.io/projects/claripy/en/latest/", None),
64 "cle": ("https://docs.angr.io/projects/cle/en/latest/", None),
65 "pypcode": ("https://docs.angr.io/projects/pypcode/en/latest/", None),
66 "pyvex": ("https://docs.angr.io/projects/pyvex/en/latest/", None),
67 }
68
69 # -- Options for todos -------------------------------------------------------
70 # https://www.sphinx-doc.org/en/master/usage/extensions/todo.html
71 todo_include_todos = True
72
73 # -- Options for HTML output -------------------------------------------------
74 # https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
75
76 html_theme = "furo"
77 html_static_path = ["_static"]
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -37,7 +37,6 @@
autodoc_default_options = {
"members": True,
"member-order": "bysource",
- "inherited-members": True,
"show-inheritance": True,
"special-members": "__init__",
"undoc-members": True,
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -37,7 +37,6 @@\n autodoc_default_options = {\n \"members\": True,\n \"member-order\": \"bysource\",\n- \"inherited-members\": True,\n \"show-inheritance\": True,\n \"special-members\": \"__init__\",\n \"undoc-members\": True,\n", "issue": "Duplicate member docs on subclasses\n### Description\n\ne.g. the documentation on SimCC's members is also present on SimCCUsercall. This is a huge problem considering that the api docs page is already fucking gigantic, this is just making it multiplicatively bigger.\n\n### Steps to reproduce the bug\n\n_No response_\n\n### Environment\n\n_No response_\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# For the full list of built-in configuration values, see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\nimport datetime\n\n# -- Project information -----------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information\n\nproject = \"angr\"\nproject_copyright = f\"{datetime.datetime.now().year}, The angr Project contributors\"\nauthor = \"The angr Project\"\n\n# -- General configuration ---------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration\n\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosectionlabel\",\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.coverage\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.viewcode\",\n \"sphinx_autodoc_typehints\",\n \"myst_parser\",\n]\n\ntemplates_path = [\"_templates\"]\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# -- Options for autodoc -----------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html#configuration\nautoclass_content = \"class\"\nautodoc_default_options = {\n \"members\": True,\n \"member-order\": \"bysource\",\n \"inherited-members\": True,\n \"show-inheritance\": True,\n \"special-members\": \"__init__\",\n \"undoc-members\": True,\n}\nautodoc_inherit_docstrings = True\nautodoc_typehints = \"both\"\n\n# -- Options for coverage ----------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/extensions/coverage.html\ncoverage_write_headline = False\n\ncoverage_ignore_pyobjects = [\n \"angr.analyses.decompiler.structured_codegen.c.StructuredCodeGenerator\", # Alias to CStructuredCodeGenerator\n \"angr.sim_type.SimTypeFixedSizeArray\", # Alias to SimTypeArray\n]\n\n# -- Options for intersphinx -------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/extensions/intersphinx.html\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3\", None),\n \"ailment\": (\"https://docs.angr.io/projects/ailment/en/latest/\", None),\n \"archinfo\": (\"https://docs.angr.io/projects/archinfo/en/latest/\", None),\n \"claripy\": (\"https://docs.angr.io/projects/claripy/en/latest/\", None),\n \"cle\": (\"https://docs.angr.io/projects/cle/en/latest/\", None),\n \"pypcode\": (\"https://docs.angr.io/projects/pypcode/en/latest/\", None),\n \"pyvex\": (\"https://docs.angr.io/projects/pyvex/en/latest/\", None),\n}\n\n# -- Options for todos -------------------------------------------------------\n# 
https://www.sphinx-doc.org/en/master/usage/extensions/todo.html\ntodo_include_todos = True\n\n# -- Options for HTML output -------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output\n\nhtml_theme = \"furo\"\nhtml_static_path = [\"_static\"]\n", "path": "docs/conf.py"}], "after_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# For the full list of built-in configuration values, see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\nimport datetime\n\n# -- Project information -----------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information\n\nproject = \"angr\"\nproject_copyright = f\"{datetime.datetime.now().year}, The angr Project contributors\"\nauthor = \"The angr Project\"\n\n# -- General configuration ---------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration\n\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosectionlabel\",\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.coverage\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.viewcode\",\n \"sphinx_autodoc_typehints\",\n \"myst_parser\",\n]\n\ntemplates_path = [\"_templates\"]\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# -- Options for autodoc -----------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html#configuration\nautoclass_content = \"class\"\nautodoc_default_options = {\n \"members\": True,\n \"member-order\": \"bysource\",\n \"show-inheritance\": True,\n \"special-members\": \"__init__\",\n \"undoc-members\": True,\n}\nautodoc_inherit_docstrings = True\nautodoc_typehints = \"both\"\n\n# -- Options for coverage ----------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/extensions/coverage.html\ncoverage_write_headline = False\n\ncoverage_ignore_pyobjects = [\n \"angr.analyses.decompiler.structured_codegen.c.StructuredCodeGenerator\", # Alias to CStructuredCodeGenerator\n \"angr.sim_type.SimTypeFixedSizeArray\", # Alias to SimTypeArray\n]\n\n# -- Options for intersphinx -------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/extensions/intersphinx.html\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3\", None),\n \"ailment\": (\"https://docs.angr.io/projects/ailment/en/latest/\", None),\n \"archinfo\": (\"https://docs.angr.io/projects/archinfo/en/latest/\", None),\n \"claripy\": (\"https://docs.angr.io/projects/claripy/en/latest/\", None),\n \"cle\": (\"https://docs.angr.io/projects/cle/en/latest/\", None),\n \"pypcode\": (\"https://docs.angr.io/projects/pypcode/en/latest/\", None),\n \"pyvex\": (\"https://docs.angr.io/projects/pyvex/en/latest/\", None),\n}\n\n# -- Options for todos -------------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/extensions/todo.html\ntodo_include_todos = True\n\n# -- Options for HTML output -------------------------------------------------\n# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output\n\nhtml_theme = \"furo\"\nhtml_static_path = [\"_static\"]\n", "path": "docs/conf.py"}]} | 1,179 | 93 |
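The fix above only removes `"inherited-members": True` from the global autodoc defaults, so each member is documented on the class that defines it rather than on every subclass. A minimal sketch of the resulting options block (the rest of `docs/conf.py` is unchanged); a project that still wants inherited members for one specific class can usually opt back in with Sphinx's per-directive `:inherited-members:` option instead of the global default.

```python
# docs/conf.py (sketch): autodoc defaults after dropping "inherited-members",
# so e.g. SimCCUsercall no longer repeats every member already documented on SimCC.
autodoc_default_options = {
    "members": True,
    "member-order": "bysource",
    "show-inheritance": True,
    "special-members": "__init__",
    "undoc-members": True,
}
```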
gh_patches_debug_40226 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-3113 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `applications/ChatGPT/chatgpt/models/gpt/gpt_critic.py`
Content:
```
1 from typing import Optional
2
3 import torch.nn as nn
4 from transformers.models.gpt2.configuration_gpt2 import GPT2Config
5 from transformers.models.gpt2.modeling_gpt2 import GPT2Model
6
7 from ..base import Critic
8
9
10 class GPTCritic(Critic):
11 """
12 GPT Critic model.
13
14 Args:
15 pretrained (str): Pretrained model name or path.
16 config (GPT2Config): Model config.
17 checkpoint (bool): Enable gradient checkpointing.
18 """
19
20 def __init__(self,
21 pretrained: Optional[str] = None,
22 config: Optional[GPT2Config] = None,
23 checkpoint: bool = False,
24 **kwargs) -> None:
25 if pretrained is not None:
26 model = GPT2Model.from_pretrained(pretrained)
27 elif config is not None:
28 model = GPT2Model(config)
29 else:
30 model = GPT2Model(GPT2Config())
31 if checkpoint:
32 model.gradient_checkpointing_enable()
33 value_head = nn.Linear(model.config.n_embd, 1)
34 super().__init__(model, value_head, **kwargs)
35
```
Path: `applications/ChatGPT/chatgpt/models/opt/opt_critic.py`
Content:
```
1 from typing import Optional
2
3 import torch.nn as nn
4 from transformers.models.opt.configuration_opt import OPTConfig
5 from transformers.models.opt.modeling_opt import OPTModel
6
7 from ..base import Critic
8
9
10 class OPTCritic(Critic):
11 """
12 OPT Critic model.
13
14 Args:
15 pretrained (str): Pretrained model name or path.
16 config (OPTConfig): Model config.
17 checkpoint (bool): Enable gradient checkpointing.
18 lora_rank (int): Rank of the low-rank approximation.
19 lora_train_bias (str): LoRA bias training mode.
20 """
21
22 def __init__(self,
23 pretrained: Optional[str] = None,
24 config: Optional[OPTConfig] = None,
25 checkpoint: bool = False,
26 lora_rank: int = 0,
27 lora_train_bias: str = 'none',
28 **kwargs) -> None:
29 if pretrained is not None:
30 model = OPTModel.from_pretrained(pretrained)
31 elif config is not None:
32 model = OPTModel(config)
33 else:
34 model = OPTModel(OPTConfig())
35 if checkpoint:
36 model.gradient_checkpointing_enable()
37 value_head = nn.Linear(model.config.hidden_size, 1)
38 super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)
39
```
Path: `applications/ChatGPT/chatgpt/models/gpt/gpt_actor.py`
Content:
```
1 from typing import Optional
2
3 from transformers.models.gpt2.configuration_gpt2 import GPT2Config
4 from transformers.models.gpt2.modeling_gpt2 import GPT2LMHeadModel
5
6 from ..base import Actor
7
8
9 class GPTActor(Actor):
10 """
11 GPT Actor model.
12
13 Args:
14 pretrained (str): Pretrained model name or path.
15 config (GPT2Config): Model config.
16 checkpoint (bool): Enable gradient checkpointing.
17 """
18
19 def __init__(self,
20 pretrained: Optional[str] = None,
21 config: Optional[GPT2Config] = None,
22 checkpoint: bool = False) -> None:
23 if pretrained is not None:
24 model = GPT2LMHeadModel.from_pretrained(pretrained)
25 elif config is not None:
26 model = GPT2LMHeadModel(config)
27 else:
28 model = GPT2LMHeadModel(GPT2Config())
29 if checkpoint:
30 model.gradient_checkpointing_enable()
31 super().__init__(model)
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/applications/ChatGPT/chatgpt/models/gpt/gpt_actor.py b/applications/ChatGPT/chatgpt/models/gpt/gpt_actor.py
--- a/applications/ChatGPT/chatgpt/models/gpt/gpt_actor.py
+++ b/applications/ChatGPT/chatgpt/models/gpt/gpt_actor.py
@@ -14,12 +14,16 @@
pretrained (str): Pretrained model name or path.
config (GPT2Config): Model config.
checkpoint (bool): Enable gradient checkpointing.
+ lora_rank (int): Rank of the LoRa layer.
+ lora_train_bias (str): Bias training strategy for the LoRa layer.
"""
def __init__(self,
pretrained: Optional[str] = None,
config: Optional[GPT2Config] = None,
- checkpoint: bool = False) -> None:
+ checkpoint: bool = False,
+ lora_rank: int = 0,
+ lora_train_bias: str = 'none') -> None:
if pretrained is not None:
model = GPT2LMHeadModel.from_pretrained(pretrained)
elif config is not None:
@@ -28,4 +32,4 @@
model = GPT2LMHeadModel(GPT2Config())
if checkpoint:
model.gradient_checkpointing_enable()
- super().__init__(model)
+ super().__init__(model, lora_rank, lora_train_bias)
diff --git a/applications/ChatGPT/chatgpt/models/gpt/gpt_critic.py b/applications/ChatGPT/chatgpt/models/gpt/gpt_critic.py
--- a/applications/ChatGPT/chatgpt/models/gpt/gpt_critic.py
+++ b/applications/ChatGPT/chatgpt/models/gpt/gpt_critic.py
@@ -15,13 +15,16 @@
pretrained (str): Pretrained model name or path.
config (GPT2Config): Model config.
checkpoint (bool): Enable gradient checkpointing.
+ lora_rank (int): Rank of the LO-RA decomposition.
+ lora_train_bias (str): LoRA bias training mode.
"""
def __init__(self,
pretrained: Optional[str] = None,
config: Optional[GPT2Config] = None,
checkpoint: bool = False,
- **kwargs) -> None:
+ lora_rank: int = 0,
+ lora_train_bias: str = 'none') -> None:
if pretrained is not None:
model = GPT2Model.from_pretrained(pretrained)
elif config is not None:
@@ -31,4 +34,4 @@
if checkpoint:
model.gradient_checkpointing_enable()
value_head = nn.Linear(model.config.n_embd, 1)
- super().__init__(model, value_head, **kwargs)
+ super().__init__(model, value_head, lora_rank, lora_train_bias)
diff --git a/applications/ChatGPT/chatgpt/models/opt/opt_critic.py b/applications/ChatGPT/chatgpt/models/opt/opt_critic.py
--- a/applications/ChatGPT/chatgpt/models/opt/opt_critic.py
+++ b/applications/ChatGPT/chatgpt/models/opt/opt_critic.py
@@ -34,5 +34,5 @@
model = OPTModel(OPTConfig())
if checkpoint:
model.gradient_checkpointing_enable()
- value_head = nn.Linear(model.config.hidden_size, 1)
+ value_head = nn.Linear(model.config.word_embed_proj_dim, 1)
super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)
| {"golden_diff": "diff --git a/applications/ChatGPT/chatgpt/models/gpt/gpt_actor.py b/applications/ChatGPT/chatgpt/models/gpt/gpt_actor.py\n--- a/applications/ChatGPT/chatgpt/models/gpt/gpt_actor.py\n+++ b/applications/ChatGPT/chatgpt/models/gpt/gpt_actor.py\n@@ -14,12 +14,16 @@\n pretrained (str): Pretrained model name or path.\n config (GPT2Config): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n+ lora_rank (int): Rank of the LoRa layer.\n+ lora_train_bias (str): Bias training strategy for the LoRa layer.\n \"\"\"\n \n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[GPT2Config] = None,\n- checkpoint: bool = False) -> None:\n+ checkpoint: bool = False,\n+ lora_rank: int = 0,\n+ lora_train_bias: str = 'none') -> None:\n if pretrained is not None:\n model = GPT2LMHeadModel.from_pretrained(pretrained)\n elif config is not None:\n@@ -28,4 +32,4 @@\n model = GPT2LMHeadModel(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n- super().__init__(model)\n+ super().__init__(model, lora_rank, lora_train_bias)\ndiff --git a/applications/ChatGPT/chatgpt/models/gpt/gpt_critic.py b/applications/ChatGPT/chatgpt/models/gpt/gpt_critic.py\n--- a/applications/ChatGPT/chatgpt/models/gpt/gpt_critic.py\n+++ b/applications/ChatGPT/chatgpt/models/gpt/gpt_critic.py\n@@ -15,13 +15,16 @@\n pretrained (str): Pretrained model name or path.\n config (GPT2Config): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n+ lora_rank (int): Rank of the LO-RA decomposition.\n+ lora_train_bias (str): LoRA bias training mode.\n \"\"\"\n \n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n- **kwargs) -> None:\n+ lora_rank: int = 0,\n+ lora_train_bias: str = 'none') -> None:\n if pretrained is not None:\n model = GPT2Model.from_pretrained(pretrained)\n elif config is not None:\n@@ -31,4 +34,4 @@\n if checkpoint:\n model.gradient_checkpointing_enable()\n value_head = nn.Linear(model.config.n_embd, 1)\n- super().__init__(model, value_head, **kwargs)\n+ super().__init__(model, value_head, lora_rank, lora_train_bias)\ndiff --git a/applications/ChatGPT/chatgpt/models/opt/opt_critic.py b/applications/ChatGPT/chatgpt/models/opt/opt_critic.py\n--- a/applications/ChatGPT/chatgpt/models/opt/opt_critic.py\n+++ b/applications/ChatGPT/chatgpt/models/opt/opt_critic.py\n@@ -34,5 +34,5 @@\n model = OPTModel(OPTConfig())\n if checkpoint:\n model.gradient_checkpointing_enable()\n- value_head = nn.Linear(model.config.hidden_size, 1)\n+ value_head = nn.Linear(model.config.word_embed_proj_dim, 1)\n super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "from typing import Optional\n\nimport torch.nn as nn\nfrom transformers.models.gpt2.configuration_gpt2 import GPT2Config\nfrom transformers.models.gpt2.modeling_gpt2 import GPT2Model\n\nfrom ..base import Critic\n\n\nclass GPTCritic(Critic):\n \"\"\"\n GPT Critic model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (GPT2Config): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n **kwargs) -> None:\n if pretrained is not None:\n model = GPT2Model.from_pretrained(pretrained)\n elif config is not None:\n model = 
GPT2Model(config)\n else:\n model = GPT2Model(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n value_head = nn.Linear(model.config.n_embd, 1)\n super().__init__(model, value_head, **kwargs)\n", "path": "applications/ChatGPT/chatgpt/models/gpt/gpt_critic.py"}, {"content": "from typing import Optional\n\nimport torch.nn as nn\nfrom transformers.models.opt.configuration_opt import OPTConfig\nfrom transformers.models.opt.modeling_opt import OPTModel\n\nfrom ..base import Critic\n\n\nclass OPTCritic(Critic):\n \"\"\"\n OPT Critic model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (OPTConfig): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n lora_rank (int): Rank of the low-rank approximation.\n lora_train_bias (str): LoRA bias training mode.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[OPTConfig] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n lora_train_bias: str = 'none',\n **kwargs) -> None:\n if pretrained is not None:\n model = OPTModel.from_pretrained(pretrained)\n elif config is not None:\n model = OPTModel(config)\n else:\n model = OPTModel(OPTConfig())\n if checkpoint:\n model.gradient_checkpointing_enable()\n value_head = nn.Linear(model.config.hidden_size, 1)\n super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)\n", "path": "applications/ChatGPT/chatgpt/models/opt/opt_critic.py"}, {"content": "from typing import Optional\n\nfrom transformers.models.gpt2.configuration_gpt2 import GPT2Config\nfrom transformers.models.gpt2.modeling_gpt2 import GPT2LMHeadModel\n\nfrom ..base import Actor\n\n\nclass GPTActor(Actor):\n \"\"\"\n GPT Actor model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (GPT2Config): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False) -> None:\n if pretrained is not None:\n model = GPT2LMHeadModel.from_pretrained(pretrained)\n elif config is not None:\n model = GPT2LMHeadModel(config)\n else:\n model = GPT2LMHeadModel(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n super().__init__(model)\n", "path": "applications/ChatGPT/chatgpt/models/gpt/gpt_actor.py"}], "after_files": [{"content": "from typing import Optional\n\nimport torch.nn as nn\nfrom transformers.models.gpt2.configuration_gpt2 import GPT2Config\nfrom transformers.models.gpt2.modeling_gpt2 import GPT2Model\n\nfrom ..base import Critic\n\n\nclass GPTCritic(Critic):\n \"\"\"\n GPT Critic model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (GPT2Config): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n lora_rank (int): Rank of the LO-RA decomposition.\n lora_train_bias (str): LoRA bias training mode.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n lora_train_bias: str = 'none') -> None:\n if pretrained is not None:\n model = GPT2Model.from_pretrained(pretrained)\n elif config is not None:\n model = GPT2Model(config)\n else:\n model = GPT2Model(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n value_head = nn.Linear(model.config.n_embd, 1)\n super().__init__(model, value_head, lora_rank, lora_train_bias)\n", "path": "applications/ChatGPT/chatgpt/models/gpt/gpt_critic.py"}, {"content": "from typing import 
Optional\n\nimport torch.nn as nn\nfrom transformers.models.opt.configuration_opt import OPTConfig\nfrom transformers.models.opt.modeling_opt import OPTModel\n\nfrom ..base import Critic\n\n\nclass OPTCritic(Critic):\n \"\"\"\n OPT Critic model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (OPTConfig): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n lora_rank (int): Rank of the low-rank approximation.\n lora_train_bias (str): LoRA bias training mode.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[OPTConfig] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n lora_train_bias: str = 'none',\n **kwargs) -> None:\n if pretrained is not None:\n model = OPTModel.from_pretrained(pretrained)\n elif config is not None:\n model = OPTModel(config)\n else:\n model = OPTModel(OPTConfig())\n if checkpoint:\n model.gradient_checkpointing_enable()\n value_head = nn.Linear(model.config.word_embed_proj_dim, 1)\n super().__init__(model, value_head, lora_rank, lora_train_bias, **kwargs)\n", "path": "applications/ChatGPT/chatgpt/models/opt/opt_critic.py"}, {"content": "from typing import Optional\n\nfrom transformers.models.gpt2.configuration_gpt2 import GPT2Config\nfrom transformers.models.gpt2.modeling_gpt2 import GPT2LMHeadModel\n\nfrom ..base import Actor\n\n\nclass GPTActor(Actor):\n \"\"\"\n GPT Actor model.\n\n Args:\n pretrained (str): Pretrained model name or path.\n config (GPT2Config): Model config.\n checkpoint (bool): Enable gradient checkpointing.\n lora_rank (int): Rank of the LoRa layer.\n lora_train_bias (str): Bias training strategy for the LoRa layer.\n \"\"\"\n\n def __init__(self,\n pretrained: Optional[str] = None,\n config: Optional[GPT2Config] = None,\n checkpoint: bool = False,\n lora_rank: int = 0,\n lora_train_bias: str = 'none') -> None:\n if pretrained is not None:\n model = GPT2LMHeadModel.from_pretrained(pretrained)\n elif config is not None:\n model = GPT2LMHeadModel(config)\n else:\n model = GPT2LMHeadModel(GPT2Config())\n if checkpoint:\n model.gradient_checkpointing_enable()\n super().__init__(model, lora_rank, lora_train_bias)\n", "path": "applications/ChatGPT/chatgpt/models/gpt/gpt_actor.py"}]} | 1,270 | 810 |
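Besides threading `lora_rank`/`lora_train_bias` through the GPT actor and critic, the patch above sizes the OPT critic's value head by `word_embed_proj_dim` instead of `hidden_size`. The sketch below (plain `transformers`/`torch`; the tiny config values are illustrative, not taken from the repository) shows why: when the two dimensions differ, as in facebook/opt-350m with hidden_size=1024 and word_embed_proj_dim=512, `OPTModel` projects its output back to `word_embed_proj_dim`, so a value head sized by `hidden_size` would not match the encoder output.

```python
# Sketch: the OPT encoder returns features of width word_embed_proj_dim rather than
# hidden_size whenever the two differ, so the critic's value head must match it.
import torch
import torch.nn as nn
from transformers import OPTConfig, OPTModel

config = OPTConfig(
    vocab_size=128,
    hidden_size=64,
    word_embed_proj_dim=32,      # deliberately different from hidden_size
    ffn_dim=128,
    num_hidden_layers=2,
    num_attention_heads=4,
    max_position_embeddings=64,
)
model = OPTModel(config)
hidden = model(torch.randint(0, 128, (1, 8))).last_hidden_state
print(hidden.shape)              # torch.Size([1, 8, 32]), i.e. word_embed_proj_dim

value_head = nn.Linear(model.config.word_embed_proj_dim, 1)
print(value_head(hidden).shape)  # torch.Size([1, 8, 1])
```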
gh_patches_debug_10701 | rasdani/github-patches | git_diff | scikit-image__scikit-image-1360 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
skimage.novice not handling alpha values (or maybe something worse)
We'd like to use `skimage.novice` to teach image manipulation in Software Carpentry, but it looks like `skimage.novice` isn't handling alpha values correctly (though the problem may be deeper).
The test image is a shrunken survey map of Antarctica:

Step 1: load and display using ipython 2.2.0 via conda 3.7.3 on Mac OS X 10.10.1:
```
In [1]: from skimage import novice
In [2]: p1 = novice.open('ant.jpg')
In [3]: p1.show()
```

That's not right...
Step 2: paint the lower left green:
```
In [4]: p1.size
Out[4]: (120, 121)
In [5]: p1[0:60, 0:60] = (0, 255, 0)
In [6]: p1.show()
```

Looks like alpha blending is going on.
Step 3: create a new blank white canvas:
```
In [7]: p2 = novice.Picture.from_size((200, 200), (255, 255, 255))
In [8]: p2.show()
```

That looks OK, but now color part of it green:
```
In [9]: p2[0:60, 0:60] = (0, 255, 0)
In [10]: p2.show()
```

and the green doesn't show up at all.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/io/_plugins/pil_plugin.py`
Content:
```
1 __all__ = ['imread', 'imsave']
2
3 import numpy as np
4 from six import string_types
5 from PIL import Image
6
7 from ...util import img_as_ubyte, img_as_uint
8 from ...external.tifffile import imread as tif_imread, imsave as tif_imsave
9
10
11 def imread(fname, dtype=None, img_num=None, **kwargs):
12 """Load an image from file.
13
14 Parameters
15 ----------
16 fname : str
17 File name.
18 dtype : numpy dtype object or string specifier
19 Specifies data type of array elements.
20 img_num : int, optional
21 Specifies which image to read in a file with multiple images
22 (zero-indexed).
23 kwargs : keyword pairs, optional
24 Addition keyword arguments to pass through (only applicable to Tiff
25 files for now, see `tifffile`'s `imread` function).
26
27 Notes
28 -----
29 Tiff files are handled by Christophe Golhke's tifffile.py [1]_, and support many
30 advanced image types including multi-page and floating point.
31
32 All other files are read using the Python Imaging Libary.
33 See PIL docs [2]_ for a list of supported formats.
34
35 References
36 ----------
37 .. [1] http://www.lfd.uci.edu/~gohlke/code/tifffile.py.html
38 .. [2] http://pillow.readthedocs.org/en/latest/handbook/image-file-formats.html
39
40 """
41 if hasattr(fname, 'lower') and dtype is None:
42 kwargs.setdefault('key', img_num)
43 if fname.lower().endswith(('.tiff', '.tif')):
44 return tif_imread(fname, **kwargs)
45
46 im = Image.open(fname)
47 try:
48 # this will raise an IOError if the file is not readable
49 im.getdata()[0]
50 except IOError:
51 site = "http://pillow.readthedocs.org/en/latest/installation.html#external-libraries"
52 raise ValueError('Could not load "%s"\nPlease see documentation at: %s' % (fname, site))
53 else:
54 return pil_to_ndarray(im, dtype=dtype, img_num=img_num)
55
56
57 def pil_to_ndarray(im, dtype=None, img_num=None):
58 """Import a PIL Image object to an ndarray, in memory.
59
60 Parameters
61 ----------
62 Refer to ``imread``.
63
64 """
65 frames = []
66 grayscale = None
67 i = 0
68 while 1:
69 try:
70 im.seek(i)
71 except EOFError:
72 break
73
74 frame = im
75
76 if not img_num is None and img_num != i:
77 im.getdata()[0]
78 i += 1
79 continue
80
81 if im.mode == 'P':
82 if grayscale is None:
83 grayscale = _palette_is_grayscale(im)
84
85 if grayscale:
86 frame = im.convert('L')
87 else:
88 frame = im.convert('RGB')
89
90 elif im.mode == '1':
91 frame = im.convert('L')
92
93 elif 'A' in im.mode:
94 frame = im.convert('RGBA')
95
96
97 if im.mode.startswith('I;16'):
98 shape = im.size
99 dtype = '>u2' if im.mode.endswith('B') else '<u2'
100 if 'S' in im.mode:
101 dtype = dtype.replace('u', 'i')
102 frame = np.fromstring(frame.tobytes(), dtype)
103 frame.shape = shape[::-1]
104
105 else:
106 frame = np.array(frame, dtype=dtype)
107
108 frames.append(frame)
109 i += 1
110
111 if hasattr(im, 'fp') and im.fp:
112 im.fp.close()
113
114 if img_num is None and len(frames) > 1:
115 return np.array(frames)
116 elif frames:
117 return frames[0]
118 elif img_num:
119 raise IndexError('Could not find image #%s' % img_num)
120
121
122 def _palette_is_grayscale(pil_image):
123 """Return True if PIL image in palette mode is grayscale.
124
125 Parameters
126 ----------
127 pil_image : PIL image
128 PIL Image that is in Palette mode.
129
130 Returns
131 -------
132 is_grayscale : bool
133 True if all colors in image palette are gray.
134 """
135 assert pil_image.mode == 'P'
136 # get palette as an array with R, G, B columns
137 palette = np.asarray(pil_image.getpalette()).reshape((256, 3))
138 # Not all palette colors are used; unused colors have junk values.
139 start, stop = pil_image.getextrema()
140 valid_palette = palette[start:stop]
141 # Image is grayscale if channel differences (R - G and G - B)
142 # are all zero.
143 return np.allclose(np.diff(valid_palette), 0)
144
145
146 def ndarray_to_pil(arr, format_str=None):
147 """Export an ndarray to a PIL object.
148
149 Parameters
150 ----------
151 Refer to ``imsave``.
152
153 """
154 if arr.ndim == 3:
155 arr = img_as_ubyte(arr)
156 mode = {3: 'RGB', 4: 'RGBA'}[arr.shape[2]]
157
158 elif format_str in ['png', 'PNG']:
159 mode = 'I;16'
160 mode_base = 'I'
161
162 if arr.dtype.kind == 'f':
163 arr = img_as_uint(arr)
164
165 elif arr.max() < 256 and arr.min() >= 0:
166 arr = arr.astype(np.uint8)
167 mode = mode_base = 'L'
168
169 else:
170 arr = img_as_uint(arr)
171
172 else:
173 arr = img_as_ubyte(arr)
174 mode = 'L'
175 mode_base = 'L'
176
177 if arr.ndim == 2:
178 im = Image.new(mode_base, arr.T.shape)
179 try:
180 im.frombytes(arr.tobytes(), 'raw', mode)
181 except AttributeError:
182 im.frombytes(arr.tostring(), 'raw', mode)
183
184 else:
185 try:
186 im = Image.frombytes(mode, (arr.shape[1], arr.shape[0]),
187 arr.tobytes())
188 except AttributeError:
189 im = Image.frombytes(mode, (arr.shape[1], arr.shape[0]),
190 arr.tostring())
191 return im
192
193
194 def imsave(fname, arr, format_str=None):
195 """Save an image to disk.
196
197 Parameters
198 ----------
199 fname : str or file-like object
200 Name of destination file.
201 arr : ndarray of uint8 or float
202 Array (image) to save. Arrays of data-type uint8 should have
203 values in [0, 255], whereas floating-point arrays must be
204 in [0, 1].
205 format_str: str
206 Format to save as, this is defaulted to PNG if using a file-like
207 object; this will be derived from the extension if fname is a string
208
209 Notes
210 -----
211 Tiff files are handled by Christophe Golhke's tifffile.py [1]_,
212 and support many advanced image types including multi-page and
213 floating point.
214
215 All other image formats use the Python Imaging Libary.
216 See PIL docs [2]_ for a list of other supported formats.
217 All images besides single channel PNGs are converted using `img_as_uint8`.
218 Single Channel PNGs have the following behavior:
219 - Integer values in [0, 255] and Boolean types -> img_as_uint8
220 - Floating point and other integers -> img_as_uint16
221
222 References
223 ----------
224 .. [1] http://www.lfd.uci.edu/~gohlke/code/tifffile.py.html
225 .. [2] http://pillow.readthedocs.org/en/latest/handbook/image-file-formats.html
226 """
227 # default to PNG if file-like object
228 if not isinstance(fname, string_types) and format_str is None:
229 format_str = "PNG"
230 # Check for png in filename
231 if (isinstance(fname, string_types)
232 and fname.lower().endswith(".png")):
233 format_str = "PNG"
234
235 arr = np.asanyarray(arr).squeeze()
236
237 if arr.dtype.kind == 'b':
238 arr = arr.astype(np.uint8)
239
240 use_tif = False
241 if hasattr(fname, 'lower'):
242 if fname.lower().endswith(('.tiff', '.tif')):
243 use_tif = True
244 if not format_str is None:
245 if format_str.lower() in ['tiff', 'tif']:
246 use_tif = True
247
248 if use_tif:
249 tif_imsave(fname, arr)
250 return
251
252 if arr.ndim not in (2, 3):
253 raise ValueError("Invalid shape for image array: %s" % arr.shape)
254
255 if arr.ndim == 3:
256 if arr.shape[2] not in (3, 4):
257 raise ValueError("Invalid number of channels in image array.")
258
259 img = ndarray_to_pil(arr, format_str=format_str)
260 img.save(fname, format=format_str)
261
262
263 def imshow(arr):
264 """Display an image, using PIL's default display command.
265
266 Parameters
267 ----------
268 arr : ndarray
269 Image to display. Images of dtype float are assumed to be in
270 [0, 1]. Images of dtype uint8 are in [0, 255].
271
272 """
273 Image.fromarray(img_as_ubyte(arr)).show()
274
275
276 def _app_show():
277 pass
278
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/skimage/io/_plugins/pil_plugin.py b/skimage/io/_plugins/pil_plugin.py
--- a/skimage/io/_plugins/pil_plugin.py
+++ b/skimage/io/_plugins/pil_plugin.py
@@ -73,7 +73,7 @@
frame = im
- if not img_num is None and img_num != i:
+ if img_num is not None and img_num != i:
im.getdata()[0]
i += 1
continue
@@ -93,6 +93,8 @@
elif 'A' in im.mode:
frame = im.convert('RGBA')
+ elif im.mode == 'CMYK':
+ frame = im.convert('RGB')
if im.mode.startswith('I;16'):
shape = im.size
| {"golden_diff": "diff --git a/skimage/io/_plugins/pil_plugin.py b/skimage/io/_plugins/pil_plugin.py\n--- a/skimage/io/_plugins/pil_plugin.py\n+++ b/skimage/io/_plugins/pil_plugin.py\n@@ -73,7 +73,7 @@\n \n frame = im\n \n- if not img_num is None and img_num != i:\n+ if img_num is not None and img_num != i:\n im.getdata()[0]\n i += 1\n continue\n@@ -93,6 +93,8 @@\n elif 'A' in im.mode:\n frame = im.convert('RGBA')\n \n+ elif im.mode == 'CMYK':\n+ frame = im.convert('RGB')\n \n if im.mode.startswith('I;16'):\n shape = im.size\n", "issue": "skimage.novice not handling alpha values (or maybe something worse)\nWe'd like to use `skimage.novice` to teach image manipulation in Software Carpentry, but it looks like `skimage.novice` isn't handling alpha values correctly (though the problem may be deeper).\n\nTest image is a shrunken survey map of Antarctic:\n\n\n\nStep 1: load and display using ipython 2.2.0 via conda 3.7.3 on Mac OS X 10.10.1:\n\n```\nIn [1]: from skimage import novice\nIn [2]: p1 = novice.open('ant.jpg')\nIn [3]: p1.show()\n```\n\n\n\nThat's not right...\n\nStep 2: paint the lower left green:\n\n```\nIn [4]: p1.size\nOut[4]: (120, 121)\nIn [5]: p1[0:60, 0:60] = (0, 255, 0)\nIn [6]: p1.show()\n```\n\n\n\nLooks like alpha blending is going on.\n\nStep 3: create a new blank white canvas:\n\n```\nIn [7]: p2 = novice.Picture.from_size((200, 200), (255, 255, 255))\nIn [8]: p2.show()\n```\n\n\n\nThat looks OK, but now color part of it green:\n\n```\nIn [9]: p2[0:60, 0:60] = (0, 255, 0)\nIn [10]: p2.show()\n```\n\n\n\nand the green doesn't show up at all.\n\n", "before_files": [{"content": "__all__ = ['imread', 'imsave']\n\nimport numpy as np\nfrom six import string_types\nfrom PIL import Image\n\nfrom ...util import img_as_ubyte, img_as_uint\nfrom ...external.tifffile import imread as tif_imread, imsave as tif_imsave\n\n\ndef imread(fname, dtype=None, img_num=None, **kwargs):\n \"\"\"Load an image from file.\n\n Parameters\n ----------\n fname : str\n File name.\n dtype : numpy dtype object or string specifier\n Specifies data type of array elements.\n img_num : int, optional\n Specifies which image to read in a file with multiple images\n (zero-indexed).\n kwargs : keyword pairs, optional\n Addition keyword arguments to pass through (only applicable to Tiff\n files for now, see `tifffile`'s `imread` function).\n\n Notes\n -----\n Tiff files are handled by Christophe Golhke's tifffile.py [1]_, and support many\n advanced image types including multi-page and floating point.\n\n All other files are read using the Python Imaging Libary.\n See PIL docs [2]_ for a list of supported formats.\n\n References\n ----------\n .. [1] http://www.lfd.uci.edu/~gohlke/code/tifffile.py.html\n .. 
[2] http://pillow.readthedocs.org/en/latest/handbook/image-file-formats.html\n\n \"\"\"\n if hasattr(fname, 'lower') and dtype is None:\n kwargs.setdefault('key', img_num)\n if fname.lower().endswith(('.tiff', '.tif')):\n return tif_imread(fname, **kwargs)\n\n im = Image.open(fname)\n try:\n # this will raise an IOError if the file is not readable\n im.getdata()[0]\n except IOError:\n site = \"http://pillow.readthedocs.org/en/latest/installation.html#external-libraries\"\n raise ValueError('Could not load \"%s\"\\nPlease see documentation at: %s' % (fname, site))\n else:\n return pil_to_ndarray(im, dtype=dtype, img_num=img_num)\n\n\ndef pil_to_ndarray(im, dtype=None, img_num=None):\n \"\"\"Import a PIL Image object to an ndarray, in memory.\n\n Parameters\n ----------\n Refer to ``imread``.\n\n \"\"\"\n frames = []\n grayscale = None\n i = 0\n while 1:\n try:\n im.seek(i)\n except EOFError:\n break\n\n frame = im\n\n if not img_num is None and img_num != i:\n im.getdata()[0]\n i += 1\n continue\n\n if im.mode == 'P':\n if grayscale is None:\n grayscale = _palette_is_grayscale(im)\n\n if grayscale:\n frame = im.convert('L')\n else:\n frame = im.convert('RGB')\n\n elif im.mode == '1':\n frame = im.convert('L')\n\n elif 'A' in im.mode:\n frame = im.convert('RGBA')\n\n\n if im.mode.startswith('I;16'):\n shape = im.size\n dtype = '>u2' if im.mode.endswith('B') else '<u2'\n if 'S' in im.mode:\n dtype = dtype.replace('u', 'i')\n frame = np.fromstring(frame.tobytes(), dtype)\n frame.shape = shape[::-1]\n\n else:\n frame = np.array(frame, dtype=dtype)\n\n frames.append(frame)\n i += 1\n\n if hasattr(im, 'fp') and im.fp:\n im.fp.close()\n\n if img_num is None and len(frames) > 1:\n return np.array(frames)\n elif frames:\n return frames[0]\n elif img_num:\n raise IndexError('Could not find image #%s' % img_num)\n\n\ndef _palette_is_grayscale(pil_image):\n \"\"\"Return True if PIL image in palette mode is grayscale.\n\n Parameters\n ----------\n pil_image : PIL image\n PIL Image that is in Palette mode.\n\n Returns\n -------\n is_grayscale : bool\n True if all colors in image palette are gray.\n \"\"\"\n assert pil_image.mode == 'P'\n # get palette as an array with R, G, B columns\n palette = np.asarray(pil_image.getpalette()).reshape((256, 3))\n # Not all palette colors are used; unused colors have junk values.\n start, stop = pil_image.getextrema()\n valid_palette = palette[start:stop]\n # Image is grayscale if channel differences (R - G and G - B)\n # are all zero.\n return np.allclose(np.diff(valid_palette), 0)\n\n\ndef ndarray_to_pil(arr, format_str=None):\n \"\"\"Export an ndarray to a PIL object.\n\n Parameters\n ----------\n Refer to ``imsave``.\n\n \"\"\"\n if arr.ndim == 3:\n arr = img_as_ubyte(arr)\n mode = {3: 'RGB', 4: 'RGBA'}[arr.shape[2]]\n\n elif format_str in ['png', 'PNG']:\n mode = 'I;16'\n mode_base = 'I'\n\n if arr.dtype.kind == 'f':\n arr = img_as_uint(arr)\n\n elif arr.max() < 256 and arr.min() >= 0:\n arr = arr.astype(np.uint8)\n mode = mode_base = 'L'\n\n else:\n arr = img_as_uint(arr)\n\n else:\n arr = img_as_ubyte(arr)\n mode = 'L'\n mode_base = 'L'\n\n if arr.ndim == 2:\n im = Image.new(mode_base, arr.T.shape)\n try:\n im.frombytes(arr.tobytes(), 'raw', mode)\n except AttributeError:\n im.frombytes(arr.tostring(), 'raw', mode)\n\n else:\n try:\n im = Image.frombytes(mode, (arr.shape[1], arr.shape[0]),\n arr.tobytes())\n except AttributeError:\n im = Image.frombytes(mode, (arr.shape[1], arr.shape[0]),\n arr.tostring())\n return im\n\n\ndef imsave(fname, arr, 
format_str=None):\n \"\"\"Save an image to disk.\n\n Parameters\n ----------\n fname : str or file-like object\n Name of destination file.\n arr : ndarray of uint8 or float\n Array (image) to save. Arrays of data-type uint8 should have\n values in [0, 255], whereas floating-point arrays must be\n in [0, 1].\n format_str: str\n Format to save as, this is defaulted to PNG if using a file-like\n object; this will be derived from the extension if fname is a string\n\n Notes\n -----\n Tiff files are handled by Christophe Golhke's tifffile.py [1]_,\n and support many advanced image types including multi-page and\n floating point.\n\n All other image formats use the Python Imaging Libary.\n See PIL docs [2]_ for a list of other supported formats.\n All images besides single channel PNGs are converted using `img_as_uint8`.\n Single Channel PNGs have the following behavior:\n - Integer values in [0, 255] and Boolean types -> img_as_uint8\n - Floating point and other integers -> img_as_uint16\n\n References\n ----------\n .. [1] http://www.lfd.uci.edu/~gohlke/code/tifffile.py.html\n .. [2] http://pillow.readthedocs.org/en/latest/handbook/image-file-formats.html\n \"\"\"\n # default to PNG if file-like object\n if not isinstance(fname, string_types) and format_str is None:\n format_str = \"PNG\"\n # Check for png in filename\n if (isinstance(fname, string_types)\n and fname.lower().endswith(\".png\")):\n format_str = \"PNG\"\n\n arr = np.asanyarray(arr).squeeze()\n\n if arr.dtype.kind == 'b':\n arr = arr.astype(np.uint8)\n\n use_tif = False\n if hasattr(fname, 'lower'):\n if fname.lower().endswith(('.tiff', '.tif')):\n use_tif = True\n if not format_str is None:\n if format_str.lower() in ['tiff', 'tif']:\n use_tif = True\n\n if use_tif:\n tif_imsave(fname, arr)\n return\n\n if arr.ndim not in (2, 3):\n raise ValueError(\"Invalid shape for image array: %s\" % arr.shape)\n\n if arr.ndim == 3:\n if arr.shape[2] not in (3, 4):\n raise ValueError(\"Invalid number of channels in image array.\")\n\n img = ndarray_to_pil(arr, format_str=format_str)\n img.save(fname, format=format_str)\n\n\ndef imshow(arr):\n \"\"\"Display an image, using PIL's default display command.\n\n Parameters\n ----------\n arr : ndarray\n Image to display. Images of dtype float are assumed to be in\n [0, 1]. 
Images of dtype uint8 are in [0, 255].\n\n \"\"\"\n Image.fromarray(img_as_ubyte(arr)).show()\n\n\ndef _app_show():\n pass\n", "path": "skimage/io/_plugins/pil_plugin.py"}], "after_files": [{"content": "__all__ = ['imread', 'imsave']\n\nimport numpy as np\nfrom six import string_types\nfrom PIL import Image\n\nfrom ...util import img_as_ubyte, img_as_uint\nfrom ...external.tifffile import imread as tif_imread, imsave as tif_imsave\n\n\ndef imread(fname, dtype=None, img_num=None, **kwargs):\n \"\"\"Load an image from file.\n\n Parameters\n ----------\n fname : str\n File name.\n dtype : numpy dtype object or string specifier\n Specifies data type of array elements.\n img_num : int, optional\n Specifies which image to read in a file with multiple images\n (zero-indexed).\n kwargs : keyword pairs, optional\n Addition keyword arguments to pass through (only applicable to Tiff\n files for now, see `tifffile`'s `imread` function).\n\n Notes\n -----\n Tiff files are handled by Christophe Golhke's tifffile.py [1]_, and support many\n advanced image types including multi-page and floating point.\n\n All other files are read using the Python Imaging Libary.\n See PIL docs [2]_ for a list of supported formats.\n\n References\n ----------\n .. [1] http://www.lfd.uci.edu/~gohlke/code/tifffile.py.html\n .. [2] http://pillow.readthedocs.org/en/latest/handbook/image-file-formats.html\n\n \"\"\"\n if hasattr(fname, 'lower') and dtype is None:\n kwargs.setdefault('key', img_num)\n if fname.lower().endswith(('.tiff', '.tif')):\n return tif_imread(fname, **kwargs)\n\n im = Image.open(fname)\n try:\n # this will raise an IOError if the file is not readable\n im.getdata()[0]\n except IOError:\n site = \"http://pillow.readthedocs.org/en/latest/installation.html#external-libraries\"\n raise ValueError('Could not load \"%s\"\\nPlease see documentation at: %s' % (fname, site))\n else:\n return pil_to_ndarray(im, dtype=dtype, img_num=img_num)\n\n\ndef pil_to_ndarray(im, dtype=None, img_num=None):\n \"\"\"Import a PIL Image object to an ndarray, in memory.\n\n Parameters\n ----------\n Refer to ``imread``.\n\n \"\"\"\n frames = []\n grayscale = None\n i = 0\n while 1:\n try:\n im.seek(i)\n except EOFError:\n break\n\n frame = im\n\n if img_num is not None and img_num != i:\n im.getdata()[0]\n i += 1\n continue\n\n if im.mode == 'P':\n if grayscale is None:\n grayscale = _palette_is_grayscale(im)\n\n if grayscale:\n frame = im.convert('L')\n else:\n frame = im.convert('RGB')\n\n elif im.mode == '1':\n frame = im.convert('L')\n\n elif 'A' in im.mode:\n frame = im.convert('RGBA')\n\n elif im.mode == 'CMYK':\n frame = im.convert('RGB')\n\n if im.mode.startswith('I;16'):\n shape = im.size\n dtype = '>u2' if im.mode.endswith('B') else '<u2'\n if 'S' in im.mode:\n dtype = dtype.replace('u', 'i')\n frame = np.fromstring(frame.tobytes(), dtype)\n frame.shape = shape[::-1]\n\n else:\n frame = np.array(frame, dtype=dtype)\n\n frames.append(frame)\n i += 1\n\n if hasattr(im, 'fp') and im.fp:\n im.fp.close()\n\n if img_num is None and len(frames) > 1:\n return np.array(frames)\n elif frames:\n return frames[0]\n elif img_num:\n raise IndexError('Could not find image #%s' % img_num)\n\n\ndef _palette_is_grayscale(pil_image):\n \"\"\"Return True if PIL image in palette mode is grayscale.\n\n Parameters\n ----------\n pil_image : PIL image\n PIL Image that is in Palette mode.\n\n Returns\n -------\n is_grayscale : bool\n True if all colors in image palette are gray.\n \"\"\"\n assert pil_image.mode == 'P'\n # get palette as an array 
with R, G, B columns\n palette = np.asarray(pil_image.getpalette()).reshape((256, 3))\n # Not all palette colors are used; unused colors have junk values.\n start, stop = pil_image.getextrema()\n valid_palette = palette[start:stop]\n # Image is grayscale if channel differences (R - G and G - B)\n # are all zero.\n return np.allclose(np.diff(valid_palette), 0)\n\n\ndef ndarray_to_pil(arr, format_str=None):\n \"\"\"Export an ndarray to a PIL object.\n\n Parameters\n ----------\n Refer to ``imsave``.\n\n \"\"\"\n if arr.ndim == 3:\n arr = img_as_ubyte(arr)\n mode = {3: 'RGB', 4: 'RGBA'}[arr.shape[2]]\n\n elif format_str in ['png', 'PNG']:\n mode = 'I;16'\n mode_base = 'I'\n\n if arr.dtype.kind == 'f':\n arr = img_as_uint(arr)\n\n elif arr.max() < 256 and arr.min() >= 0:\n arr = arr.astype(np.uint8)\n mode = mode_base = 'L'\n\n else:\n arr = img_as_uint(arr)\n\n else:\n arr = img_as_ubyte(arr)\n mode = 'L'\n mode_base = 'L'\n\n if arr.ndim == 2:\n im = Image.new(mode_base, arr.T.shape)\n try:\n im.frombytes(arr.tobytes(), 'raw', mode)\n except AttributeError:\n im.frombytes(arr.tostring(), 'raw', mode)\n\n else:\n try:\n im = Image.frombytes(mode, (arr.shape[1], arr.shape[0]),\n arr.tobytes())\n except AttributeError:\n im = Image.frombytes(mode, (arr.shape[1], arr.shape[0]),\n arr.tostring())\n return im\n\n\ndef imsave(fname, arr, format_str=None):\n \"\"\"Save an image to disk.\n\n Parameters\n ----------\n fname : str or file-like object\n Name of destination file.\n arr : ndarray of uint8 or float\n Array (image) to save. Arrays of data-type uint8 should have\n values in [0, 255], whereas floating-point arrays must be\n in [0, 1].\n format_str: str\n Format to save as, this is defaulted to PNG if using a file-like\n object; this will be derived from the extension if fname is a string\n\n Notes\n -----\n Tiff files are handled by Christophe Golhke's tifffile.py [1]_,\n and support many advanced image types including multi-page and\n floating point.\n\n All other image formats use the Python Imaging Libary.\n See PIL docs [2]_ for a list of other supported formats.\n All images besides single channel PNGs are converted using `img_as_uint8`.\n Single Channel PNGs have the following behavior:\n - Integer values in [0, 255] and Boolean types -> img_as_uint8\n - Floating point and other integers -> img_as_uint16\n\n References\n ----------\n .. [1] http://www.lfd.uci.edu/~gohlke/code/tifffile.py.html\n .. 
[2] http://pillow.readthedocs.org/en/latest/handbook/image-file-formats.html\n \"\"\"\n # default to PNG if file-like object\n if not isinstance(fname, string_types) and format_str is None:\n format_str = \"PNG\"\n # Check for png in filename\n if (isinstance(fname, string_types)\n and fname.lower().endswith(\".png\")):\n format_str = \"PNG\"\n\n arr = np.asanyarray(arr).squeeze()\n\n if arr.dtype.kind == 'b':\n arr = arr.astype(np.uint8)\n\n use_tif = False\n if hasattr(fname, 'lower'):\n if fname.lower().endswith(('.tiff', '.tif')):\n use_tif = True\n if not format_str is None:\n if format_str.lower() in ['tiff', 'tif']:\n use_tif = True\n\n if use_tif:\n tif_imsave(fname, arr)\n return\n\n if arr.ndim not in (2, 3):\n raise ValueError(\"Invalid shape for image array: %s\" % arr.shape)\n\n if arr.ndim == 3:\n if arr.shape[2] not in (3, 4):\n raise ValueError(\"Invalid number of channels in image array.\")\n\n img = ndarray_to_pil(arr, format_str=format_str)\n img.save(fname, format=format_str)\n\n\ndef imshow(arr):\n \"\"\"Display an image, using PIL's default display command.\n\n Parameters\n ----------\n arr : ndarray\n Image to display. Images of dtype float are assumed to be in\n [0, 1]. Images of dtype uint8 are in [0, 255].\n\n \"\"\"\n Image.fromarray(img_as_ubyte(arr)).show()\n\n\ndef _app_show():\n pass\n", "path": "skimage/io/_plugins/pil_plugin.py"}]} | 3,775 | 182 |
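The wrong colours reported in step 1 are consistent with a CMYK-encoded JPEG being turned into an array without conversion, which is exactly the case the added `im.mode == 'CMYK'` branch covers. A small sketch of the failure mode and the conversion (assumes Pillow and numpy are installed; `'ant.jpg'` is a placeholder for any CMYK-encoded JPEG):

```python
# Sketch: a CMYK JPEG decodes to four ink channels; converting to RGB first yields
# the three-channel array that skimage.novice and the rest of skimage expect.
import numpy as np
from PIL import Image

im = Image.open("ant.jpg")        # placeholder path for a CMYK-encoded JPEG
print(im.mode)                    # 'CMYK' for such files

raw = np.array(im)
print(raw.shape)                  # (rows, cols, 4): not RGB, so colours come out wrong

rgb = np.array(im.convert("RGB"))
print(rgb.shape)                  # (rows, cols, 3)
```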
gh_patches_debug_31808 | rasdani/github-patches | git_diff | GPflow__GPflow-1843 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Anisotropic lengthscales
Currently, by default, all stationary kernels impose a positive transform on the lengthscales Parameter. For AnisotropicStationary kernels this reduces the viable parameter space by a factor of 2^(D-1), where D is the number of input dimensions. That is acceptable for a 1D problem, but in higher dimensions whole regions of the parameter space become unreachable. For example, for the Cosine kernel there is a significant difference between lengthscales of [+1, +1] and [+1, -1]: they correspond to waves propagating in perpendicular directions.
--- END ISSUE ---
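A quick numerical illustration of the claim above, in plain numpy rather than GPflow (the kernel form σ² cos(2π Σᵢ dᵢ/ℓᵢ) is the usual anisotropic cosine shape up to constant factors, not necessarily GPflow's exact implementation). Flipping the sign of one lengthscale changes the direction in which the wave propagates, so a positivity constraint removes those orientations from the reachable parameter space.

```python
# Sketch: an anisotropic cosine kernel evaluated on one difference vector d = x - x'.
# Lengthscales (+1, +1) and (+1, -1) give perpendicular wave fronts, hence different
# kernel values; a positive-only parameterisation can never reach the second case.
import numpy as np

def cosine_k(d, lengthscales, variance=1.0):
    return variance * np.cos(2.0 * np.pi * np.sum(d / lengthscales, axis=-1))

d = np.array([0.2, 0.1])
print(cosine_k(d, np.array([1.0, 1.0])))    # cos(2*pi*0.3) ~ -0.309
print(cosine_k(d, np.array([1.0, -1.0])))   # cos(2*pi*0.1) ~  0.809
```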
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gpflow/kernels/stationaries.py`
Content:
```
1 # Copyright 2017-2020 The GPflow Contributors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Any, Optional
16
17 import numpy as np
18 import tensorflow as tf
19
20 from ..base import Parameter, TensorType
21 from ..utilities import positive
22 from ..utilities.ops import difference_matrix, square_distance
23 from .base import ActiveDims, Kernel
24
25
26 class Stationary(Kernel):
27 """
28 Base class for kernels that are stationary, that is, they only depend on
29
30 d = x - x'
31
32 This class handles 'ard' behaviour, which stands for 'Automatic Relevance
33 Determination'. This means that the kernel has one lengthscale per
34 dimension, otherwise the kernel is isotropic (has a single lengthscale).
35 """
36
37 def __init__(
38 self, variance: TensorType = 1.0, lengthscales: TensorType = 1.0, **kwargs: Any
39 ) -> None:
40 """
41 :param variance: the (initial) value for the variance parameter.
42 :param lengthscales: the (initial) value for the lengthscale
43 parameter(s), to induce ARD behaviour this must be initialised as
44 an array the same length as the the number of active dimensions
45 e.g. [1., 1., 1.]. If only a single value is passed, this value
46 is used as the lengthscale of each dimension.
47 :param kwargs: accepts `name` and `active_dims`, which is a list or
48 slice of indices which controls which columns of X are used (by
49 default, all columns are used).
50 """
51 for kwarg in kwargs:
52 if kwarg not in {"name", "active_dims"}:
53 raise TypeError(f"Unknown keyword argument: {kwarg}")
54
55 super().__init__(**kwargs)
56 self.variance = Parameter(variance, transform=positive())
57 self.lengthscales = Parameter(lengthscales, transform=positive())
58 self._validate_ard_active_dims(self.lengthscales)
59
60 @property
61 def ard(self) -> bool:
62 """
63 Whether ARD behaviour is active.
64 """
65 ndims: int = self.lengthscales.shape.ndims
66 return ndims > 0
67
68 def scale(self, X: TensorType) -> TensorType:
69 X_scaled = X / self.lengthscales if X is not None else X
70 return X_scaled
71
72 def K_diag(self, X: TensorType) -> tf.Tensor:
73 return tf.fill(tf.shape(X)[:-1], tf.squeeze(self.variance))
74
75
76 class IsotropicStationary(Stationary):
77 """
78 Base class for isotropic stationary kernels, i.e. kernels that only
79 depend on
80
81 r = ‖x - x'‖
82
83 Derived classes should implement one of:
84
85 K_r2(self, r2): Returns the kernel evaluated on r² (r2), which is the
86 squared scaled Euclidean distance Should operate element-wise on r2.
87
88 K_r(self, r): Returns the kernel evaluated on r, which is the scaled
89 Euclidean distance. Should operate element-wise on r.
90 """
91
92 def K(self, X: TensorType, X2: Optional[TensorType] = None) -> tf.Tensor:
93 r2 = self.scaled_squared_euclid_dist(X, X2)
94 return self.K_r2(r2)
95
96 def K_r2(self, r2: TensorType) -> tf.Tensor:
97 if hasattr(self, "K_r"):
98 # Clipping around the (single) float precision which is ~1e-45.
99 r = tf.sqrt(tf.maximum(r2, 1e-36))
100 return self.K_r(r) # pylint: disable=no-member
101 raise NotImplementedError
102
103 def scaled_squared_euclid_dist(
104 self, X: TensorType, X2: Optional[TensorType] = None
105 ) -> tf.Tensor:
106 """
107 Returns ‖(X - X2ᵀ) / ℓ‖², i.e. the squared L₂-norm.
108 """
109 return square_distance(self.scale(X), self.scale(X2))
110
111
112 class AnisotropicStationary(Stationary):
113 """
114 Base class for anisotropic stationary kernels, i.e. kernels that only
115 depend on
116
117 d = x - x'
118
119 Derived classes should implement K_d(self, d): Returns the kernel evaluated
120 on d, which is the pairwise difference matrix, scaled by the lengthscale
121 parameter ℓ (i.e. [(X - X2ᵀ) / ℓ]). The last axis corresponds to the
122 input dimension.
123 """
124
125 def K(self, X: TensorType, X2: Optional[TensorType] = None) -> tf.Tensor:
126 return self.K_d(self.scaled_difference_matrix(X, X2))
127
128 def scaled_difference_matrix(self, X: TensorType, X2: Optional[TensorType] = None) -> tf.Tensor:
129 """
130 Returns [(X - X2ᵀ) / ℓ]. If X has shape [..., N, D] and
131 X2 has shape [..., M, D], the output will have shape [..., N, M, D].
132 """
133 return difference_matrix(self.scale(X), self.scale(X2))
134
135 def K_d(self, d: TensorType) -> tf.Tensor:
136 raise NotImplementedError
137
138
139 class SquaredExponential(IsotropicStationary):
140 """
141 The radial basis function (RBF) or squared exponential kernel. The kernel equation is
142
143 k(r) = σ² exp{-½ r²}
144
145 where:
146 r is the Euclidean distance between the input points, scaled by the lengthscales parameter ℓ.
147 σ² is the variance parameter
148
149 Functions drawn from a GP with this kernel are infinitely differentiable!
150 """
151
152 def K_r2(self, r2: TensorType) -> tf.Tensor:
153 return self.variance * tf.exp(-0.5 * r2)
154
155
156 class RationalQuadratic(IsotropicStationary):
157 """
158 Rational Quadratic kernel,
159
160 k(r) = σ² (1 + r² / 2αℓ²)^(-α)
161
162 σ² : variance
163 ℓ : lengthscales
164 α : alpha, determines relative weighting of small-scale and large-scale fluctuations
165
166 For α → ∞, the RQ kernel becomes equivalent to the squared exponential.
167 """
168
169 def __init__(
170 self,
171 variance: TensorType = 1.0,
172 lengthscales: TensorType = 1.0,
173 alpha: TensorType = 1.0,
174 active_dims: Optional[ActiveDims] = None,
175 ) -> None:
176 super().__init__(variance=variance, lengthscales=lengthscales, active_dims=active_dims)
177 self.alpha = Parameter(alpha, transform=positive())
178
179 def K_r2(self, r2: TensorType) -> tf.Tensor:
180 return self.variance * (1 + 0.5 * r2 / self.alpha) ** (-self.alpha)
181
182
183 class Exponential(IsotropicStationary):
184 """
185 The Exponential kernel. It is equivalent to a Matern12 kernel with doubled lengthscales.
186 """
187
188 def K_r(self, r: TensorType) -> tf.Tensor:
189 return self.variance * tf.exp(-0.5 * r)
190
191
192 class Matern12(IsotropicStationary):
193 """
194 The Matern 1/2 kernel. Functions drawn from a GP with this kernel are not
195 differentiable anywhere. The kernel equation is
196
197 k(r) = σ² exp{-r}
198
199 where:
200 r is the Euclidean distance between the input points, scaled by the lengthscales parameter ℓ.
201 σ² is the variance parameter
202 """
203
204 def K_r(self, r: TensorType) -> tf.Tensor:
205 return self.variance * tf.exp(-r)
206
207
208 class Matern32(IsotropicStationary):
209 """
210 The Matern 3/2 kernel. Functions drawn from a GP with this kernel are once
211 differentiable. The kernel equation is
212
213 k(r) = σ² (1 + √3r) exp{-√3 r}
214
215 where:
216 r is the Euclidean distance between the input points, scaled by the lengthscales parameter ℓ,
217 σ² is the variance parameter.
218 """
219
220 def K_r(self, r: TensorType) -> tf.Tensor:
221 sqrt3 = np.sqrt(3.0)
222 return self.variance * (1.0 + sqrt3 * r) * tf.exp(-sqrt3 * r)
223
224
225 class Matern52(IsotropicStationary):
226 """
227 The Matern 5/2 kernel. Functions drawn from a GP with this kernel are twice
228 differentiable. The kernel equation is
229
230 k(r) = σ² (1 + √5r + 5/3r²) exp{-√5 r}
231
232 where:
233 r is the Euclidean distance between the input points, scaled by the lengthscales parameter ℓ,
234 σ² is the variance parameter.
235 """
236
237 def K_r(self, r: TensorType) -> tf.Tensor:
238 sqrt5 = np.sqrt(5.0)
239 return self.variance * (1.0 + sqrt5 * r + 5.0 / 3.0 * tf.square(r)) * tf.exp(-sqrt5 * r)
240
241
242 class Cosine(AnisotropicStationary):
243 """
244 The Cosine kernel. Functions drawn from a GP with this kernel are sinusoids
245 (with a random phase). The kernel equation is
246
247 k(r) = σ² cos{2πd}
248
249 where:
250 d is the sum of the per-dimension differences between the input points, scaled by the
251 lengthscale parameter ℓ (i.e. Σᵢ [(X - X2ᵀ) / ℓ]ᵢ),
252 σ² is the variance parameter.
253 """
254
255 def K_d(self, d: TensorType) -> tf.Tensor:
256 d = tf.reduce_sum(d, axis=-1)
257 return self.variance * tf.cos(2 * np.pi * d)
258
```
--- END FILES ---
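For orientation, a short sketch of the lengthscale handling defined in the file above (assuming the GPflow package that ships these kernel classes is installed; illustrative only, not part of the original repository):

```python
import gpflow

# A scalar lengthscale keeps the kernel isotropic; one value per input
# dimension activates ARD behaviour (see the `ard` property above).
iso = gpflow.kernels.SquaredExponential(lengthscales=1.0)
ard = gpflow.kernels.SquaredExponential(lengthscales=[1.0, 1.0, 1.0])
print(iso.ard, ard.ard)  # False True

# `Stationary.__init__` wraps the lengthscales in a Parameter with a positive()
# transform, so stationary kernels (including anisotropic ones such as Cosine)
# are constrained to strictly positive lengthscales by default.
cos = gpflow.kernels.Cosine(lengthscales=[1.0, 1.0])
print(cos.lengthscales.numpy())
```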
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gpflow/kernels/stationaries.py b/gpflow/kernels/stationaries.py
--- a/gpflow/kernels/stationaries.py
+++ b/gpflow/kernels/stationaries.py
@@ -41,7 +41,7 @@
:param variance: the (initial) value for the variance parameter.
:param lengthscales: the (initial) value for the lengthscale
parameter(s), to induce ARD behaviour this must be initialised as
- an array the same length as the the number of active dimensions
+ an array the same length as the number of active dimensions
e.g. [1., 1., 1.]. If only a single value is passed, this value
is used as the lengthscale of each dimension.
:param kwargs: accepts `name` and `active_dims`, which is a list or
@@ -122,6 +122,26 @@
input dimension.
"""
+ def __init__(
+ self, variance: TensorType = 1.0, lengthscales: TensorType = 1.0, **kwargs: Any
+ ) -> None:
+ """
+ :param variance: the (initial) value for the variance parameter.
+ :param lengthscales: the (initial) value for the lengthscale
+ parameter(s), to induce ARD behaviour this must be initialised as
+ an array the same length as the number of active dimensions
+ e.g. [1., 1., 1.]. Note that anisotropic kernels can possess
+ negative lengthscales. If only a single value is passed, this
+ value is used as the lengthscale of each dimension.
+ :param kwargs: accepts `name` and `active_dims`, which is a list or
+ slice of indices which controls which columns of X are used (by
+ default, all columns are used).
+ """
+ super().__init__(variance, lengthscales, **kwargs)
+
+ if self.ard:
+ self.lengthscales = Parameter(self.lengthscales.numpy())
+
def K(self, X: TensorType, X2: Optional[TensorType] = None) -> tf.Tensor:
return self.K_d(self.scaled_difference_matrix(X, X2))
| {"golden_diff": "diff --git a/gpflow/kernels/stationaries.py b/gpflow/kernels/stationaries.py\n--- a/gpflow/kernels/stationaries.py\n+++ b/gpflow/kernels/stationaries.py\n@@ -41,7 +41,7 @@\n :param variance: the (initial) value for the variance parameter.\n :param lengthscales: the (initial) value for the lengthscale\n parameter(s), to induce ARD behaviour this must be initialised as\n- an array the same length as the the number of active dimensions\n+ an array the same length as the number of active dimensions\n e.g. [1., 1., 1.]. If only a single value is passed, this value\n is used as the lengthscale of each dimension.\n :param kwargs: accepts `name` and `active_dims`, which is a list or\n@@ -122,6 +122,26 @@\n input dimension.\n \"\"\"\n \n+ def __init__(\n+ self, variance: TensorType = 1.0, lengthscales: TensorType = 1.0, **kwargs: Any\n+ ) -> None:\n+ \"\"\"\n+ :param variance: the (initial) value for the variance parameter.\n+ :param lengthscales: the (initial) value for the lengthscale\n+ parameter(s), to induce ARD behaviour this must be initialised as\n+ an array the same length as the number of active dimensions\n+ e.g. [1., 1., 1.]. Note that anisotropic kernels can possess\n+ negative lengthscales. If only a single value is passed, this\n+ value is used as the lengthscale of each dimension.\n+ :param kwargs: accepts `name` and `active_dims`, which is a list or\n+ slice of indices which controls which columns of X are used (by\n+ default, all columns are used).\n+ \"\"\"\n+ super().__init__(variance, lengthscales, **kwargs)\n+\n+ if self.ard:\n+ self.lengthscales = Parameter(self.lengthscales.numpy())\n+\n def K(self, X: TensorType, X2: Optional[TensorType] = None) -> tf.Tensor:\n return self.K_d(self.scaled_difference_matrix(X, X2))\n", "issue": "Anisotropic lengthscales\nCurrently, by default, all stationary kernels impose a positive transform on the lengthscales Parameter. For the case of AnisotropicStationary kernels, this reduces the viable parameter space by a factor of 2^(D-1), where D is the number of input dimensions. So it is acceptable in the case of a 1D problem, but in higher dimensions we would miss out. For example, for the Cosine kernel there is a significant difference between lengthscales of [+1, +1], and [+1, -1], they correspond to waves propagating in perpendicular directions. \r\n\n", "before_files": [{"content": "# Copyright 2017-2020 The GPflow Contributors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Any, Optional\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom ..base import Parameter, TensorType\nfrom ..utilities import positive\nfrom ..utilities.ops import difference_matrix, square_distance\nfrom .base import ActiveDims, Kernel\n\n\nclass Stationary(Kernel):\n \"\"\"\n Base class for kernels that are stationary, that is, they only depend on\n\n d = x - x'\n\n This class handles 'ard' behaviour, which stands for 'Automatic Relevance\n Determination'. 
This means that the kernel has one lengthscale per\n dimension, otherwise the kernel is isotropic (has a single lengthscale).\n \"\"\"\n\n def __init__(\n self, variance: TensorType = 1.0, lengthscales: TensorType = 1.0, **kwargs: Any\n ) -> None:\n \"\"\"\n :param variance: the (initial) value for the variance parameter.\n :param lengthscales: the (initial) value for the lengthscale\n parameter(s), to induce ARD behaviour this must be initialised as\n an array the same length as the the number of active dimensions\n e.g. [1., 1., 1.]. If only a single value is passed, this value\n is used as the lengthscale of each dimension.\n :param kwargs: accepts `name` and `active_dims`, which is a list or\n slice of indices which controls which columns of X are used (by\n default, all columns are used).\n \"\"\"\n for kwarg in kwargs:\n if kwarg not in {\"name\", \"active_dims\"}:\n raise TypeError(f\"Unknown keyword argument: {kwarg}\")\n\n super().__init__(**kwargs)\n self.variance = Parameter(variance, transform=positive())\n self.lengthscales = Parameter(lengthscales, transform=positive())\n self._validate_ard_active_dims(self.lengthscales)\n\n @property\n def ard(self) -> bool:\n \"\"\"\n Whether ARD behaviour is active.\n \"\"\"\n ndims: int = self.lengthscales.shape.ndims\n return ndims > 0\n\n def scale(self, X: TensorType) -> TensorType:\n X_scaled = X / self.lengthscales if X is not None else X\n return X_scaled\n\n def K_diag(self, X: TensorType) -> tf.Tensor:\n return tf.fill(tf.shape(X)[:-1], tf.squeeze(self.variance))\n\n\nclass IsotropicStationary(Stationary):\n \"\"\"\n Base class for isotropic stationary kernels, i.e. kernels that only\n depend on\n\n r = \u2016x - x'\u2016\n\n Derived classes should implement one of:\n\n K_r2(self, r2): Returns the kernel evaluated on r\u00b2 (r2), which is the\n squared scaled Euclidean distance Should operate element-wise on r2.\n\n K_r(self, r): Returns the kernel evaluated on r, which is the scaled\n Euclidean distance. Should operate element-wise on r.\n \"\"\"\n\n def K(self, X: TensorType, X2: Optional[TensorType] = None) -> tf.Tensor:\n r2 = self.scaled_squared_euclid_dist(X, X2)\n return self.K_r2(r2)\n\n def K_r2(self, r2: TensorType) -> tf.Tensor:\n if hasattr(self, \"K_r\"):\n # Clipping around the (single) float precision which is ~1e-45.\n r = tf.sqrt(tf.maximum(r2, 1e-36))\n return self.K_r(r) # pylint: disable=no-member\n raise NotImplementedError\n\n def scaled_squared_euclid_dist(\n self, X: TensorType, X2: Optional[TensorType] = None\n ) -> tf.Tensor:\n \"\"\"\n Returns \u2016(X - X2\u1d40) / \u2113\u2016\u00b2, i.e. the squared L\u2082-norm.\n \"\"\"\n return square_distance(self.scale(X), self.scale(X2))\n\n\nclass AnisotropicStationary(Stationary):\n \"\"\"\n Base class for anisotropic stationary kernels, i.e. kernels that only\n depend on\n\n d = x - x'\n\n Derived classes should implement K_d(self, d): Returns the kernel evaluated\n on d, which is the pairwise difference matrix, scaled by the lengthscale\n parameter \u2113 (i.e. [(X - X2\u1d40) / \u2113]). The last axis corresponds to the\n input dimension.\n \"\"\"\n\n def K(self, X: TensorType, X2: Optional[TensorType] = None) -> tf.Tensor:\n return self.K_d(self.scaled_difference_matrix(X, X2))\n\n def scaled_difference_matrix(self, X: TensorType, X2: Optional[TensorType] = None) -> tf.Tensor:\n \"\"\"\n Returns [(X - X2\u1d40) / \u2113]. 
If X has shape [..., N, D] and\n X2 has shape [..., M, D], the output will have shape [..., N, M, D].\n \"\"\"\n return difference_matrix(self.scale(X), self.scale(X2))\n\n def K_d(self, d: TensorType) -> tf.Tensor:\n raise NotImplementedError\n\n\nclass SquaredExponential(IsotropicStationary):\n \"\"\"\n The radial basis function (RBF) or squared exponential kernel. The kernel equation is\n\n k(r) = \u03c3\u00b2 exp{-\u00bd r\u00b2}\n\n where:\n r is the Euclidean distance between the input points, scaled by the lengthscales parameter \u2113.\n \u03c3\u00b2 is the variance parameter\n\n Functions drawn from a GP with this kernel are infinitely differentiable!\n \"\"\"\n\n def K_r2(self, r2: TensorType) -> tf.Tensor:\n return self.variance * tf.exp(-0.5 * r2)\n\n\nclass RationalQuadratic(IsotropicStationary):\n \"\"\"\n Rational Quadratic kernel,\n\n k(r) = \u03c3\u00b2 (1 + r\u00b2 / 2\u03b1\u2113\u00b2)^(-\u03b1)\n\n \u03c3\u00b2 : variance\n \u2113 : lengthscales\n \u03b1 : alpha, determines relative weighting of small-scale and large-scale fluctuations\n\n For \u03b1 \u2192 \u221e, the RQ kernel becomes equivalent to the squared exponential.\n \"\"\"\n\n def __init__(\n self,\n variance: TensorType = 1.0,\n lengthscales: TensorType = 1.0,\n alpha: TensorType = 1.0,\n active_dims: Optional[ActiveDims] = None,\n ) -> None:\n super().__init__(variance=variance, lengthscales=lengthscales, active_dims=active_dims)\n self.alpha = Parameter(alpha, transform=positive())\n\n def K_r2(self, r2: TensorType) -> tf.Tensor:\n return self.variance * (1 + 0.5 * r2 / self.alpha) ** (-self.alpha)\n\n\nclass Exponential(IsotropicStationary):\n \"\"\"\n The Exponential kernel. It is equivalent to a Matern12 kernel with doubled lengthscales.\n \"\"\"\n\n def K_r(self, r: TensorType) -> tf.Tensor:\n return self.variance * tf.exp(-0.5 * r)\n\n\nclass Matern12(IsotropicStationary):\n \"\"\"\n The Matern 1/2 kernel. Functions drawn from a GP with this kernel are not\n differentiable anywhere. The kernel equation is\n\n k(r) = \u03c3\u00b2 exp{-r}\n\n where:\n r is the Euclidean distance between the input points, scaled by the lengthscales parameter \u2113.\n \u03c3\u00b2 is the variance parameter\n \"\"\"\n\n def K_r(self, r: TensorType) -> tf.Tensor:\n return self.variance * tf.exp(-r)\n\n\nclass Matern32(IsotropicStationary):\n \"\"\"\n The Matern 3/2 kernel. Functions drawn from a GP with this kernel are once\n differentiable. The kernel equation is\n\n k(r) = \u03c3\u00b2 (1 + \u221a3r) exp{-\u221a3 r}\n\n where:\n r is the Euclidean distance between the input points, scaled by the lengthscales parameter \u2113,\n \u03c3\u00b2 is the variance parameter.\n \"\"\"\n\n def K_r(self, r: TensorType) -> tf.Tensor:\n sqrt3 = np.sqrt(3.0)\n return self.variance * (1.0 + sqrt3 * r) * tf.exp(-sqrt3 * r)\n\n\nclass Matern52(IsotropicStationary):\n \"\"\"\n The Matern 5/2 kernel. Functions drawn from a GP with this kernel are twice\n differentiable. The kernel equation is\n\n k(r) = \u03c3\u00b2 (1 + \u221a5r + 5/3r\u00b2) exp{-\u221a5 r}\n\n where:\n r is the Euclidean distance between the input points, scaled by the lengthscales parameter \u2113,\n \u03c3\u00b2 is the variance parameter.\n \"\"\"\n\n def K_r(self, r: TensorType) -> tf.Tensor:\n sqrt5 = np.sqrt(5.0)\n return self.variance * (1.0 + sqrt5 * r + 5.0 / 3.0 * tf.square(r)) * tf.exp(-sqrt5 * r)\n\n\nclass Cosine(AnisotropicStationary):\n \"\"\"\n The Cosine kernel. Functions drawn from a GP with this kernel are sinusoids\n (with a random phase). 
The kernel equation is\n\n k(r) = \u03c3\u00b2 cos{2\u03c0d}\n\n where:\n d is the sum of the per-dimension differences between the input points, scaled by the\n lengthscale parameter \u2113 (i.e. \u03a3\u1d62 [(X - X2\u1d40) / \u2113]\u1d62),\n \u03c3\u00b2 is the variance parameter.\n \"\"\"\n\n def K_d(self, d: TensorType) -> tf.Tensor:\n d = tf.reduce_sum(d, axis=-1)\n return self.variance * tf.cos(2 * np.pi * d)\n", "path": "gpflow/kernels/stationaries.py"}], "after_files": [{"content": "# Copyright 2017-2020 The GPflow Contributors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Any, Optional\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom ..base import Parameter, TensorType\nfrom ..utilities import positive\nfrom ..utilities.ops import difference_matrix, square_distance\nfrom .base import ActiveDims, Kernel\n\n\nclass Stationary(Kernel):\n \"\"\"\n Base class for kernels that are stationary, that is, they only depend on\n\n d = x - x'\n\n This class handles 'ard' behaviour, which stands for 'Automatic Relevance\n Determination'. This means that the kernel has one lengthscale per\n dimension, otherwise the kernel is isotropic (has a single lengthscale).\n \"\"\"\n\n def __init__(\n self, variance: TensorType = 1.0, lengthscales: TensorType = 1.0, **kwargs: Any\n ) -> None:\n \"\"\"\n :param variance: the (initial) value for the variance parameter.\n :param lengthscales: the (initial) value for the lengthscale\n parameter(s), to induce ARD behaviour this must be initialised as\n an array the same length as the number of active dimensions\n e.g. [1., 1., 1.]. If only a single value is passed, this value\n is used as the lengthscale of each dimension.\n :param kwargs: accepts `name` and `active_dims`, which is a list or\n slice of indices which controls which columns of X are used (by\n default, all columns are used).\n \"\"\"\n for kwarg in kwargs:\n if kwarg not in {\"name\", \"active_dims\"}:\n raise TypeError(f\"Unknown keyword argument: {kwarg}\")\n\n super().__init__(**kwargs)\n self.variance = Parameter(variance, transform=positive())\n self.lengthscales = Parameter(lengthscales, transform=positive())\n self._validate_ard_active_dims(self.lengthscales)\n\n @property\n def ard(self) -> bool:\n \"\"\"\n Whether ARD behaviour is active.\n \"\"\"\n ndims: int = self.lengthscales.shape.ndims\n return ndims > 0\n\n def scale(self, X: TensorType) -> TensorType:\n X_scaled = X / self.lengthscales if X is not None else X\n return X_scaled\n\n def K_diag(self, X: TensorType) -> tf.Tensor:\n return tf.fill(tf.shape(X)[:-1], tf.squeeze(self.variance))\n\n\nclass IsotropicStationary(Stationary):\n \"\"\"\n Base class for isotropic stationary kernels, i.e. 
kernels that only\n depend on\n\n r = \u2016x - x'\u2016\n\n Derived classes should implement one of:\n\n K_r2(self, r2): Returns the kernel evaluated on r\u00b2 (r2), which is the\n squared scaled Euclidean distance Should operate element-wise on r2.\n\n K_r(self, r): Returns the kernel evaluated on r, which is the scaled\n Euclidean distance. Should operate element-wise on r.\n \"\"\"\n\n def K(self, X: TensorType, X2: Optional[TensorType] = None) -> tf.Tensor:\n r2 = self.scaled_squared_euclid_dist(X, X2)\n return self.K_r2(r2)\n\n def K_r2(self, r2: TensorType) -> tf.Tensor:\n if hasattr(self, \"K_r\"):\n # Clipping around the (single) float precision which is ~1e-45.\n r = tf.sqrt(tf.maximum(r2, 1e-36))\n return self.K_r(r) # pylint: disable=no-member\n raise NotImplementedError\n\n def scaled_squared_euclid_dist(\n self, X: TensorType, X2: Optional[TensorType] = None\n ) -> tf.Tensor:\n \"\"\"\n Returns \u2016(X - X2\u1d40) / \u2113\u2016\u00b2, i.e. the squared L\u2082-norm.\n \"\"\"\n return square_distance(self.scale(X), self.scale(X2))\n\n\nclass AnisotropicStationary(Stationary):\n \"\"\"\n Base class for anisotropic stationary kernels, i.e. kernels that only\n depend on\n\n d = x - x'\n\n Derived classes should implement K_d(self, d): Returns the kernel evaluated\n on d, which is the pairwise difference matrix, scaled by the lengthscale\n parameter \u2113 (i.e. [(X - X2\u1d40) / \u2113]). The last axis corresponds to the\n input dimension.\n \"\"\"\n\n def __init__(\n self, variance: TensorType = 1.0, lengthscales: TensorType = 1.0, **kwargs: Any\n ) -> None:\n \"\"\"\n :param variance: the (initial) value for the variance parameter.\n :param lengthscales: the (initial) value for the lengthscale\n parameter(s), to induce ARD behaviour this must be initialised as\n an array the same length as the number of active dimensions\n e.g. [1., 1., 1.]. Note that anisotropic kernels can possess\n negative lengthscales. If only a single value is passed, this\n value is used as the lengthscale of each dimension.\n :param kwargs: accepts `name` and `active_dims`, which is a list or\n slice of indices which controls which columns of X are used (by\n default, all columns are used).\n \"\"\"\n super().__init__(variance, lengthscales, **kwargs)\n\n if self.ard:\n self.lengthscales = Parameter(self.lengthscales.numpy())\n\n def K(self, X: TensorType, X2: Optional[TensorType] = None) -> tf.Tensor:\n return self.K_d(self.scaled_difference_matrix(X, X2))\n\n def scaled_difference_matrix(self, X: TensorType, X2: Optional[TensorType] = None) -> tf.Tensor:\n \"\"\"\n Returns [(X - X2\u1d40) / \u2113]. If X has shape [..., N, D] and\n X2 has shape [..., M, D], the output will have shape [..., N, M, D].\n \"\"\"\n return difference_matrix(self.scale(X), self.scale(X2))\n\n def K_d(self, d: TensorType) -> tf.Tensor:\n raise NotImplementedError\n\n\nclass SquaredExponential(IsotropicStationary):\n \"\"\"\n The radial basis function (RBF) or squared exponential kernel. 
The kernel equation is\n\n k(r) = \u03c3\u00b2 exp{-\u00bd r\u00b2}\n\n where:\n r is the Euclidean distance between the input points, scaled by the lengthscales parameter \u2113.\n \u03c3\u00b2 is the variance parameter\n\n Functions drawn from a GP with this kernel are infinitely differentiable!\n \"\"\"\n\n def K_r2(self, r2: TensorType) -> tf.Tensor:\n return self.variance * tf.exp(-0.5 * r2)\n\n\nclass RationalQuadratic(IsotropicStationary):\n \"\"\"\n Rational Quadratic kernel,\n\n k(r) = \u03c3\u00b2 (1 + r\u00b2 / 2\u03b1\u2113\u00b2)^(-\u03b1)\n\n \u03c3\u00b2 : variance\n \u2113 : lengthscales\n \u03b1 : alpha, determines relative weighting of small-scale and large-scale fluctuations\n\n For \u03b1 \u2192 \u221e, the RQ kernel becomes equivalent to the squared exponential.\n \"\"\"\n\n def __init__(\n self,\n variance: TensorType = 1.0,\n lengthscales: TensorType = 1.0,\n alpha: TensorType = 1.0,\n active_dims: Optional[ActiveDims] = None,\n ) -> None:\n super().__init__(variance=variance, lengthscales=lengthscales, active_dims=active_dims)\n self.alpha = Parameter(alpha, transform=positive())\n\n def K_r2(self, r2: TensorType) -> tf.Tensor:\n return self.variance * (1 + 0.5 * r2 / self.alpha) ** (-self.alpha)\n\n\nclass Exponential(IsotropicStationary):\n \"\"\"\n The Exponential kernel. It is equivalent to a Matern12 kernel with doubled lengthscales.\n \"\"\"\n\n def K_r(self, r: TensorType) -> tf.Tensor:\n return self.variance * tf.exp(-0.5 * r)\n\n\nclass Matern12(IsotropicStationary):\n \"\"\"\n The Matern 1/2 kernel. Functions drawn from a GP with this kernel are not\n differentiable anywhere. The kernel equation is\n\n k(r) = \u03c3\u00b2 exp{-r}\n\n where:\n r is the Euclidean distance between the input points, scaled by the lengthscales parameter \u2113.\n \u03c3\u00b2 is the variance parameter\n \"\"\"\n\n def K_r(self, r: TensorType) -> tf.Tensor:\n return self.variance * tf.exp(-r)\n\n\nclass Matern32(IsotropicStationary):\n \"\"\"\n The Matern 3/2 kernel. Functions drawn from a GP with this kernel are once\n differentiable. The kernel equation is\n\n k(r) = \u03c3\u00b2 (1 + \u221a3r) exp{-\u221a3 r}\n\n where:\n r is the Euclidean distance between the input points, scaled by the lengthscales parameter \u2113,\n \u03c3\u00b2 is the variance parameter.\n \"\"\"\n\n def K_r(self, r: TensorType) -> tf.Tensor:\n sqrt3 = np.sqrt(3.0)\n return self.variance * (1.0 + sqrt3 * r) * tf.exp(-sqrt3 * r)\n\n\nclass Matern52(IsotropicStationary):\n \"\"\"\n The Matern 5/2 kernel. Functions drawn from a GP with this kernel are twice\n differentiable. The kernel equation is\n\n k(r) = \u03c3\u00b2 (1 + \u221a5r + 5/3r\u00b2) exp{-\u221a5 r}\n\n where:\n r is the Euclidean distance between the input points, scaled by the lengthscales parameter \u2113,\n \u03c3\u00b2 is the variance parameter.\n \"\"\"\n\n def K_r(self, r: TensorType) -> tf.Tensor:\n sqrt5 = np.sqrt(5.0)\n return self.variance * (1.0 + sqrt5 * r + 5.0 / 3.0 * tf.square(r)) * tf.exp(-sqrt5 * r)\n\n\nclass Cosine(AnisotropicStationary):\n \"\"\"\n The Cosine kernel. Functions drawn from a GP with this kernel are sinusoids\n (with a random phase). The kernel equation is\n\n k(r) = \u03c3\u00b2 cos{2\u03c0d}\n\n where:\n d is the sum of the per-dimension differences between the input points, scaled by the\n lengthscale parameter \u2113 (i.e. 
\u03a3\u1d62 [(X - X2\u1d40) / \u2113]\u1d62),\n \u03c3\u00b2 is the variance parameter.\n \"\"\"\n\n def K_d(self, d: TensorType) -> tf.Tensor:\n d = tf.reduce_sum(d, axis=-1)\n return self.variance * tf.cos(2 * np.pi * d)\n", "path": "gpflow/kernels/stationaries.py"}]} | 3,372 | 501 |
gh_patches_debug_35660 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-1709 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mismatch for docs in `perplexity`
## 📚 Documentation
I am working on the NLP models and find that the implementation of `perplexity` in https://github.com/Lightning-AI/torchmetrics/blob/e4df07109586d1f605e06aed85fbc6794e8ed833/src/torchmetrics/functional/text/perplexity.py#L114-L126 is probably wrong.
Since `_perplexity_update` in https://github.com/Lightning-AI/torchmetrics/blob/e4df07109586d1f605e06aed85fbc6794e8ed833/src/torchmetrics/functional/text/perplexity.py#L84-L88 executes the `softmax`, it indicates that `preds` **should be a logit or an unnormalized score** and definitely NOT a log probability.
I wonder if this is an intentional move or just a simple typo.
I also found a similar fix in issue #1366, which made the same mistake. Should I make a PR for this?
--- END ISSUE ---
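As a quick check of the behaviour described in the issue, a minimal sketch (assuming `torchmetrics` with the module shown below is installed) reproduces the internal softmax by hand and compares it against the functional metric:

```python
import torch
from torchmetrics.functional.text.perplexity import perplexity

logits = torch.randn(2, 8, 5)   # unnormalized scores from a language model
target = torch.randint(5, (2, 8))

# Manual perplexity treating `preds` as logits: softmax is applied exactly once.
log_probs = logits.log_softmax(dim=-1).reshape(-1, 5)
nll = -log_probs[torch.arange(target.numel()), target.reshape(-1)]
manual = nll.mean().exp()

# Agrees with the metric (up to floating-point error), confirming that `preds`
# is consumed as logits / unnormalized scores rather than (log) probabilities.
print(perplexity(logits, target), manual)
```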
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/torchmetrics/functional/text/perplexity.py`
Content:
```
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Optional, Tuple
16
17 import torch
18 from torch import Tensor
19 from torch.nn import functional as F # noqa: N812
20
21 _TORCH_FLOAT_OR_DOUBLE = (torch.float32, torch.float64)
22
23
24 def _check_shape_and_type_consistency(preds: Tensor, target: Tensor) -> None:
25 """Check shape and type consistency of input vectors.
26
27 Args:
28 preds:
29 Probabilities assigned to each token in a sequence with shape [batch_size, seq_len, vocab_size].
30 target:
31 Ground truth values with a shape [batch_size, seq_len].
32
33 Raises:
34 ValueError:
35 If ``preds`` tensor has no 3 dimensions.
36 ValueError:
37 If ``target`` tensor has no 2 dimensions.
38 ValueError:
39 If the first two dimensions of ``preds`` and ``target`` do not equal.
40 TypeError:
41 If ``preds`` dtype is not one of ``(torch.float16, torch.float32, torch.float64)``
42 TypeError:
43 If ``target`` is not of a type LongTensor (torch.int64)
44 """
45 if len(preds.shape) != 3:
46 raise ValueError(
47 "Input tensor `preds` is expected to have 3 dimensions, [batch_size, seq_len, vocab_size],"
48 f" but got {len(preds.shape)}."
49 )
50 if len(target.shape) != 2:
51 raise ValueError(
52 "Input tensor `target` is expected to have 2 dimensions, [batch_size, seq_len],"
53 f" but got {len(target.shape)}."
54 )
55 if preds.shape[:2] != target.shape:
56 raise ValueError(
57 "Input tensors `preds` and `target` are expected to have equaling first two dimensions,"
58 f" [batch_size, seq_len], but got {preds.shape[:2]} and {target.shape}."
59 )
60 if preds.dtype not in _TORCH_FLOAT_OR_DOUBLE:
61 raise TypeError(
62 f"Input tensor `preds` is expected to be of a type one of {_TORCH_FLOAT_OR_DOUBLE} but got {preds.dtype}."
63 )
64 if target.dtype != torch.int64:
65 raise TypeError(f"Input tensor `target` is expected to be of a type {torch.int64} but got {target.dtype}.")
66
67
68 def _perplexity_update(preds: Tensor, target: Tensor, ignore_index: Optional[int] = None) -> Tuple[Tensor, Tensor]:
69 """Compute intermediate statistics for Perplexity.
70
71 Args:
72 preds:
73 Probabilities assigned to each token in a sequence with shape [batch_size, seq_len, vocab_size].
74 target:
75 Ground truth values with a shape [batch_size, seq_len].
76 ignore_index:
77 Integer specifying a target class to ignore. If given, this class index does not contribute
78 to the returned score.
79
80 Returns:
81 Log probabilities, summed over all samples
82 Number of samples
83 """
84 _check_shape_and_type_consistency(preds, target)
85
86 probs = F.softmax(preds.reshape(-1, preds.shape[-1]), dim=1)
87 target = target.reshape(-1)
88
89 if ignore_index is not None:
90 mask = target.ne(ignore_index)
91 target = target.where(target != ignore_index, torch.tensor(0, device=target.device))
92 else:
93 mask = torch.ones_like(target, dtype=torch.bool)
94
95 probs = probs[:, target].diagonal()[mask]
96 total_log_probs = -probs.log().sum()
97 count = mask.sum()
98
99 return total_log_probs, count
100
101
102 def _perplexity_compute(total: Tensor, count: Tensor) -> Tensor:
103 """Compute the Perplexity.
104
105 Args:
106 total: Log probabilities, summed over all samples
107 count: Number of samples
108 Returns:
109 Perplexity
110 """
111 return torch.exp(total / count)
112
113
114 def perplexity(preds: Tensor, target: Tensor, ignore_index: Optional[int] = None) -> Tensor:
115 """Perplexity measures how well a language model predicts a text sample.
116
117 This metric is calculated as the average number of bits per word a model needs to represent the sample.
118
119 Args:
120 preds:
121 Log probabilities assigned to each token in a sequence with shape [batch_size, seq_len, vocab_size].
122 target:
123 Ground truth values with a shape [batch_size, seq_len].
124 ignore_index:
125 Integer specifying a target class to ignore. If given, this class index does not contribute
126 to the returned score.
127
128 Returns:
129 Perplexity value
130
131 Examples:
132 >>> import torch
133 >>> preds = torch.rand(2, 8, 5, generator=torch.manual_seed(22))
134 >>> target = torch.randint(5, (2, 8), generator=torch.manual_seed(22))
135 >>> target[0, 6:] = -100
136 >>> perplexity(preds, target, ignore_index=-100)
137 tensor(5.2545)
138 """
139 total, count = _perplexity_update(preds, target, ignore_index)
140 return _perplexity_compute(total, count)
141
```
Path: `src/torchmetrics/text/perplexity.py`
Content:
```
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Any, Dict, Optional, Sequence, Union
16
17 from torch import Tensor, tensor
18
19 from torchmetrics.functional.text.perplexity import _perplexity_compute, _perplexity_update
20 from torchmetrics.metric import Metric
21 from torchmetrics.utilities.imports import _MATPLOTLIB_AVAILABLE
22 from torchmetrics.utilities.plot import _AX_TYPE, _PLOT_OUT_TYPE
23
24 if not _MATPLOTLIB_AVAILABLE:
25 __doctest_skip__ = ["Perplexity.plot"]
26
27
28 class Perplexity(Metric):
29 r"""Perplexity measures how well a language model predicts a text sample.
30
31 It's calculated as the average number of bits per word a model needs to represent the sample.
32
33 As input to ``forward`` and ``update`` the metric accepts the following input:
34
35 - ``preds`` (:class:`~torch.Tensor`): Probabilities assigned to each token in a sequence with shape
36 [batch_size, seq_len, vocab_size]
37 - ``target`` (:class:`~torch.Tensor`): Ground truth values with a shape [batch_size, seq_len]
38
39 As output of ``forward`` and ``compute`` the metric returns the following output:
40
41 - ``perp`` (:class:`~torch.Tensor`): A tensor with the perplexity score
42
43 Args:
44 ignore_index: Integer specifying a target class to ignore.
45 If given, this class index does not contribute to the returned score.
46 kwargs:
47 Additional keyword arguments, see :ref:`Metric kwargs` for more info.
48
49 Examples:
50 >>> from torchmetrics.text import Perplexity
51 >>> import torch
52 >>> preds = torch.rand(2, 8, 5, generator=torch.manual_seed(22))
53 >>> target = torch.randint(5, (2, 8), generator=torch.manual_seed(22))
54 >>> target[0, 6:] = -100
55 >>> perp = Perplexity(ignore_index=-100)
56 >>> perp(preds, target)
57 tensor(5.2545)
58 """
59 is_differentiable = True
60 higher_is_better = False
61 full_state_update = False
62 total_log_probs: Tensor
63 count: Tensor
64
65 def __init__(
66 self,
67 ignore_index: Optional[int] = None,
68 **kwargs: Dict[str, Any],
69 ) -> None:
70 super().__init__(**kwargs)
71 if ignore_index is not None and not isinstance(ignore_index, int):
72 raise ValueError(f"Argument `ignore_index` expected to either be `None` or an `int` but got {ignore_index}")
73 self.ignore_index = ignore_index
74 self.add_state("total_log_probs", default=tensor(0.0), dist_reduce_fx="sum")
75 self.add_state("count", default=tensor(0.0), dist_reduce_fx="sum")
76
77 def update(self, preds: Tensor, target: Tensor) -> None:
78 """Update state with predictions and targets."""
79 total_log_probs, count = _perplexity_update(preds, target, self.ignore_index)
80 self.total_log_probs += total_log_probs
81 self.count += count
82
83 def compute(self) -> Tensor:
84 """Compute the Perplexity."""
85 return _perplexity_compute(self.total_log_probs, self.count)
86
87 def plot(
88 self, val: Optional[Union[Tensor, Sequence[Tensor]]] = None, ax: Optional[_AX_TYPE] = None
89 ) -> _PLOT_OUT_TYPE:
90 """Plot a single or multiple values from the metric.
91
92 Args:
93 val: Either a single result from calling `metric.forward` or `metric.compute` or a list of these results.
94 If no value is provided, will automatically call `metric.compute` and plot that result.
95 ax: An matplotlib axis object. If provided will add plot to that axis
96
97 Returns:
98 Figure and Axes object
99
100 Raises:
101 ModuleNotFoundError:
102 If `matplotlib` is not installed
103
104 .. plot::
105 :scale: 75
106
107 >>> # Example plotting a single value
108 >>> import torch
109 >>> from torchmetrics.text import Perplexity
110 >>> metric = Perplexity()
111 >>> metric.update(torch.rand(2, 8, 5), torch.randint(5, (2, 8)))
112 >>> fig_, ax_ = metric.plot()
113
114 .. plot::
115 :scale: 75
116
117 >>> # Example plotting multiple values
118 >>> import torch
119 >>> from torchmetrics.text import Perplexity
120 >>> metric = Perplexity()
121 >>> values = [ ]
122 >>> for _ in range(10):
123 ... values.append(metric(torch.rand(2, 8, 5), torch.randint(5, (2, 8))))
124 >>> fig_, ax_ = metric.plot(values)
125 """
126 return self._plot(val, ax)
127
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/torchmetrics/functional/text/perplexity.py b/src/torchmetrics/functional/text/perplexity.py
--- a/src/torchmetrics/functional/text/perplexity.py
+++ b/src/torchmetrics/functional/text/perplexity.py
@@ -26,7 +26,8 @@
Args:
preds:
- Probabilities assigned to each token in a sequence with shape [batch_size, seq_len, vocab_size].
+ Logits or a unnormalized score assigned to each token in a sequence with shape [batch_size, seq_len,
+ vocab_size]. Scores will be normalized internally using softmax.
target:
Ground truth values with a shape [batch_size, seq_len].
@@ -70,7 +71,8 @@
Args:
preds:
- Probabilities assigned to each token in a sequence with shape [batch_size, seq_len, vocab_size].
+ Logits or a unnormalized score assigned to each token in a sequence with shape [batch_size, seq_len,
+ vocab_size]. Scores will be normalized internally using softmax.
target:
Ground truth values with a shape [batch_size, seq_len].
ignore_index:
@@ -118,7 +120,8 @@
Args:
preds:
- Log probabilities assigned to each token in a sequence with shape [batch_size, seq_len, vocab_size].
+ Logits or a unnormalized score assigned to each token in a sequence with shape [batch_size, seq_len,
+ vocab_size], which is the output of a language model. Scores will be normalized internally using softmax.
target:
Ground truth values with a shape [batch_size, seq_len].
ignore_index:
diff --git a/src/torchmetrics/text/perplexity.py b/src/torchmetrics/text/perplexity.py
--- a/src/torchmetrics/text/perplexity.py
+++ b/src/torchmetrics/text/perplexity.py
@@ -32,8 +32,9 @@
As input to ``forward`` and ``update`` the metric accepts the following input:
- - ``preds`` (:class:`~torch.Tensor`): Probabilities assigned to each token in a sequence with shape
- [batch_size, seq_len, vocab_size]
+ - ``preds`` (:class:`~torch.Tensor`): Logits or a unnormalized score assigned to each token in a sequence with shape
+ [batch_size, seq_len, vocab_size], which is the output of a language model. Scores will be normalized internally
+ using softmax.
- ``target`` (:class:`~torch.Tensor`): Ground truth values with a shape [batch_size, seq_len]
As output of ``forward`` and ``compute`` the metric returns the following output:
| {"golden_diff": "diff --git a/src/torchmetrics/functional/text/perplexity.py b/src/torchmetrics/functional/text/perplexity.py\n--- a/src/torchmetrics/functional/text/perplexity.py\n+++ b/src/torchmetrics/functional/text/perplexity.py\n@@ -26,7 +26,8 @@\n \n Args:\n preds:\n- Probabilities assigned to each token in a sequence with shape [batch_size, seq_len, vocab_size].\n+ Logits or a unnormalized score assigned to each token in a sequence with shape [batch_size, seq_len,\n+ vocab_size]. Scores will be normalized internally using softmax.\n target:\n Ground truth values with a shape [batch_size, seq_len].\n \n@@ -70,7 +71,8 @@\n \n Args:\n preds:\n- Probabilities assigned to each token in a sequence with shape [batch_size, seq_len, vocab_size].\n+ Logits or a unnormalized score assigned to each token in a sequence with shape [batch_size, seq_len,\n+ vocab_size]. Scores will be normalized internally using softmax.\n target:\n Ground truth values with a shape [batch_size, seq_len].\n ignore_index:\n@@ -118,7 +120,8 @@\n \n Args:\n preds:\n- Log probabilities assigned to each token in a sequence with shape [batch_size, seq_len, vocab_size].\n+ Logits or a unnormalized score assigned to each token in a sequence with shape [batch_size, seq_len,\n+ vocab_size], which is the output of a language model. Scores will be normalized internally using softmax.\n target:\n Ground truth values with a shape [batch_size, seq_len].\n ignore_index:\ndiff --git a/src/torchmetrics/text/perplexity.py b/src/torchmetrics/text/perplexity.py\n--- a/src/torchmetrics/text/perplexity.py\n+++ b/src/torchmetrics/text/perplexity.py\n@@ -32,8 +32,9 @@\n \n As input to ``forward`` and ``update`` the metric accepts the following input:\n \n- - ``preds`` (:class:`~torch.Tensor`): Probabilities assigned to each token in a sequence with shape\n- [batch_size, seq_len, vocab_size]\n+ - ``preds`` (:class:`~torch.Tensor`): Logits or a unnormalized score assigned to each token in a sequence with shape\n+ [batch_size, seq_len, vocab_size], which is the output of a language model. Scores will be normalized internally\n+ using softmax.\n - ``target`` (:class:`~torch.Tensor`): Ground truth values with a shape [batch_size, seq_len]\n \n As output of ``forward`` and ``compute`` the metric returns the following output:\n", "issue": "mismatch for docs in `perplexity`\n## \ud83d\udcda Documentation\r\n\r\nI am working on the NLP models and find that the implementation of `perplexity` in https://github.com/Lightning-AI/torchmetrics/blob/e4df07109586d1f605e06aed85fbc6794e8ed833/src/torchmetrics/functional/text/perplexity.py#L114-L126 is probably wrong.\r\n\r\nSince `_perplexity_update` in https://github.com/Lightning-AI/torchmetrics/blob/e4df07109586d1f605e06aed85fbc6794e8ed833/src/torchmetrics/functional/text/perplexity.py#L84-L88 execute the `softmax`, it indicates that the `preds` **should be a logit or a unnormalized score** and definitely NOT a log probability.\r\n\r\nI wonder if this is intentioned move or is just a simple typo. \r\n\r\nI also find the similar fix in issue #1366, which made the mistake. 
Should I make a PR to this?\r\n\r\n\n", "before_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Optional, Tuple\n\nimport torch\nfrom torch import Tensor\nfrom torch.nn import functional as F # noqa: N812\n\n_TORCH_FLOAT_OR_DOUBLE = (torch.float32, torch.float64)\n\n\ndef _check_shape_and_type_consistency(preds: Tensor, target: Tensor) -> None:\n \"\"\"Check shape and type consistency of input vectors.\n\n Args:\n preds:\n Probabilities assigned to each token in a sequence with shape [batch_size, seq_len, vocab_size].\n target:\n Ground truth values with a shape [batch_size, seq_len].\n\n Raises:\n ValueError:\n If ``preds`` tensor has no 3 dimensions.\n ValueError:\n If ``target`` tensor has no 2 dimensions.\n ValueError:\n If the first two dimensions of ``preds`` and ``target`` do not equal.\n TypeError:\n If ``preds`` dtype is not one of ``(torch.float16, torch.float32, torch.float64)``\n TypeError:\n If ``target`` is not of a type LongTensor (torch.int64)\n \"\"\"\n if len(preds.shape) != 3:\n raise ValueError(\n \"Input tensor `preds` is expected to have 3 dimensions, [batch_size, seq_len, vocab_size],\"\n f\" but got {len(preds.shape)}.\"\n )\n if len(target.shape) != 2:\n raise ValueError(\n \"Input tensor `target` is expected to have 2 dimensions, [batch_size, seq_len],\"\n f\" but got {len(target.shape)}.\"\n )\n if preds.shape[:2] != target.shape:\n raise ValueError(\n \"Input tensors `preds` and `target` are expected to have equaling first two dimensions,\"\n f\" [batch_size, seq_len], but got {preds.shape[:2]} and {target.shape}.\"\n )\n if preds.dtype not in _TORCH_FLOAT_OR_DOUBLE:\n raise TypeError(\n f\"Input tensor `preds` is expected to be of a type one of {_TORCH_FLOAT_OR_DOUBLE} but got {preds.dtype}.\"\n )\n if target.dtype != torch.int64:\n raise TypeError(f\"Input tensor `target` is expected to be of a type {torch.int64} but got {target.dtype}.\")\n\n\ndef _perplexity_update(preds: Tensor, target: Tensor, ignore_index: Optional[int] = None) -> Tuple[Tensor, Tensor]:\n \"\"\"Compute intermediate statistics for Perplexity.\n\n Args:\n preds:\n Probabilities assigned to each token in a sequence with shape [batch_size, seq_len, vocab_size].\n target:\n Ground truth values with a shape [batch_size, seq_len].\n ignore_index:\n Integer specifying a target class to ignore. 
If given, this class index does not contribute\n to the returned score.\n\n Returns:\n Log probabilities, summed over all samples\n Number of samples\n \"\"\"\n _check_shape_and_type_consistency(preds, target)\n\n probs = F.softmax(preds.reshape(-1, preds.shape[-1]), dim=1)\n target = target.reshape(-1)\n\n if ignore_index is not None:\n mask = target.ne(ignore_index)\n target = target.where(target != ignore_index, torch.tensor(0, device=target.device))\n else:\n mask = torch.ones_like(target, dtype=torch.bool)\n\n probs = probs[:, target].diagonal()[mask]\n total_log_probs = -probs.log().sum()\n count = mask.sum()\n\n return total_log_probs, count\n\n\ndef _perplexity_compute(total: Tensor, count: Tensor) -> Tensor:\n \"\"\"Compute the Perplexity.\n\n Args:\n total: Log probabilities, summed over all samples\n count: Number of samples\n Returns:\n Perplexity\n \"\"\"\n return torch.exp(total / count)\n\n\ndef perplexity(preds: Tensor, target: Tensor, ignore_index: Optional[int] = None) -> Tensor:\n \"\"\"Perplexity measures how well a language model predicts a text sample.\n\n This metric is calculated as the average number of bits per word a model needs to represent the sample.\n\n Args:\n preds:\n Log probabilities assigned to each token in a sequence with shape [batch_size, seq_len, vocab_size].\n target:\n Ground truth values with a shape [batch_size, seq_len].\n ignore_index:\n Integer specifying a target class to ignore. If given, this class index does not contribute\n to the returned score.\n\n Returns:\n Perplexity value\n\n Examples:\n >>> import torch\n >>> preds = torch.rand(2, 8, 5, generator=torch.manual_seed(22))\n >>> target = torch.randint(5, (2, 8), generator=torch.manual_seed(22))\n >>> target[0, 6:] = -100\n >>> perplexity(preds, target, ignore_index=-100)\n tensor(5.2545)\n \"\"\"\n total, count = _perplexity_update(preds, target, ignore_index)\n return _perplexity_compute(total, count)\n", "path": "src/torchmetrics/functional/text/perplexity.py"}, {"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Any, Dict, Optional, Sequence, Union\n\nfrom torch import Tensor, tensor\n\nfrom torchmetrics.functional.text.perplexity import _perplexity_compute, _perplexity_update\nfrom torchmetrics.metric import Metric\nfrom torchmetrics.utilities.imports import _MATPLOTLIB_AVAILABLE\nfrom torchmetrics.utilities.plot import _AX_TYPE, _PLOT_OUT_TYPE\n\nif not _MATPLOTLIB_AVAILABLE:\n __doctest_skip__ = [\"Perplexity.plot\"]\n\n\nclass Perplexity(Metric):\n r\"\"\"Perplexity measures how well a language model predicts a text sample.\n\n It's calculated as the average number of bits per word a model needs to represent the sample.\n\n As input to ``forward`` and ``update`` the metric accepts the following input:\n\n - ``preds`` (:class:`~torch.Tensor`): Probabilities assigned to each token in a sequence with shape\n [batch_size, seq_len, vocab_size]\n - ``target`` (:class:`~torch.Tensor`): Ground truth values with a shape 
[batch_size, seq_len]\n\n As output of ``forward`` and ``compute`` the metric returns the following output:\n\n - ``perp`` (:class:`~torch.Tensor`): A tensor with the perplexity score\n\n Args:\n ignore_index: Integer specifying a target class to ignore.\n If given, this class index does not contribute to the returned score.\n kwargs:\n Additional keyword arguments, see :ref:`Metric kwargs` for more info.\n\n Examples:\n >>> from torchmetrics.text import Perplexity\n >>> import torch\n >>> preds = torch.rand(2, 8, 5, generator=torch.manual_seed(22))\n >>> target = torch.randint(5, (2, 8), generator=torch.manual_seed(22))\n >>> target[0, 6:] = -100\n >>> perp = Perplexity(ignore_index=-100)\n >>> perp(preds, target)\n tensor(5.2545)\n \"\"\"\n is_differentiable = True\n higher_is_better = False\n full_state_update = False\n total_log_probs: Tensor\n count: Tensor\n\n def __init__(\n self,\n ignore_index: Optional[int] = None,\n **kwargs: Dict[str, Any],\n ) -> None:\n super().__init__(**kwargs)\n if ignore_index is not None and not isinstance(ignore_index, int):\n raise ValueError(f\"Argument `ignore_index` expected to either be `None` or an `int` but got {ignore_index}\")\n self.ignore_index = ignore_index\n self.add_state(\"total_log_probs\", default=tensor(0.0), dist_reduce_fx=\"sum\")\n self.add_state(\"count\", default=tensor(0.0), dist_reduce_fx=\"sum\")\n\n def update(self, preds: Tensor, target: Tensor) -> None:\n \"\"\"Update state with predictions and targets.\"\"\"\n total_log_probs, count = _perplexity_update(preds, target, self.ignore_index)\n self.total_log_probs += total_log_probs\n self.count += count\n\n def compute(self) -> Tensor:\n \"\"\"Compute the Perplexity.\"\"\"\n return _perplexity_compute(self.total_log_probs, self.count)\n\n def plot(\n self, val: Optional[Union[Tensor, Sequence[Tensor]]] = None, ax: Optional[_AX_TYPE] = None\n ) -> _PLOT_OUT_TYPE:\n \"\"\"Plot a single or multiple values from the metric.\n\n Args:\n val: Either a single result from calling `metric.forward` or `metric.compute` or a list of these results.\n If no value is provided, will automatically call `metric.compute` and plot that result.\n ax: An matplotlib axis object. If provided will add plot to that axis\n\n Returns:\n Figure and Axes object\n\n Raises:\n ModuleNotFoundError:\n If `matplotlib` is not installed\n\n .. plot::\n :scale: 75\n\n >>> # Example plotting a single value\n >>> import torch\n >>> from torchmetrics.text import Perplexity\n >>> metric = Perplexity()\n >>> metric.update(torch.rand(2, 8, 5), torch.randint(5, (2, 8)))\n >>> fig_, ax_ = metric.plot()\n\n .. plot::\n :scale: 75\n\n >>> # Example plotting multiple values\n >>> import torch\n >>> from torchmetrics.text import Perplexity\n >>> metric = Perplexity()\n >>> values = [ ]\n >>> for _ in range(10):\n ... 
values.append(metric(torch.rand(2, 8, 5), torch.randint(5, (2, 8))))\n >>> fig_, ax_ = metric.plot(values)\n \"\"\"\n return self._plot(val, ax)\n", "path": "src/torchmetrics/text/perplexity.py"}], "after_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Optional, Tuple\n\nimport torch\nfrom torch import Tensor\nfrom torch.nn import functional as F # noqa: N812\n\n_TORCH_FLOAT_OR_DOUBLE = (torch.float32, torch.float64)\n\n\ndef _check_shape_and_type_consistency(preds: Tensor, target: Tensor) -> None:\n \"\"\"Check shape and type consistency of input vectors.\n\n Args:\n preds:\n Logits or a unnormalized score assigned to each token in a sequence with shape [batch_size, seq_len,\n vocab_size]. Scores will be normalized internally using softmax.\n target:\n Ground truth values with a shape [batch_size, seq_len].\n\n Raises:\n ValueError:\n If ``preds`` tensor has no 3 dimensions.\n ValueError:\n If ``target`` tensor has no 2 dimensions.\n ValueError:\n If the first two dimensions of ``preds`` and ``target`` do not equal.\n TypeError:\n If ``preds`` dtype is not one of ``(torch.float16, torch.float32, torch.float64)``\n TypeError:\n If ``target`` is not of a type LongTensor (torch.int64)\n \"\"\"\n if len(preds.shape) != 3:\n raise ValueError(\n \"Input tensor `preds` is expected to have 3 dimensions, [batch_size, seq_len, vocab_size],\"\n f\" but got {len(preds.shape)}.\"\n )\n if len(target.shape) != 2:\n raise ValueError(\n \"Input tensor `target` is expected to have 2 dimensions, [batch_size, seq_len],\"\n f\" but got {len(target.shape)}.\"\n )\n if preds.shape[:2] != target.shape:\n raise ValueError(\n \"Input tensors `preds` and `target` are expected to have equaling first two dimensions,\"\n f\" [batch_size, seq_len], but got {preds.shape[:2]} and {target.shape}.\"\n )\n if preds.dtype not in _TORCH_FLOAT_OR_DOUBLE:\n raise TypeError(\n f\"Input tensor `preds` is expected to be of a type one of {_TORCH_FLOAT_OR_DOUBLE} but got {preds.dtype}.\"\n )\n if target.dtype != torch.int64:\n raise TypeError(f\"Input tensor `target` is expected to be of a type {torch.int64} but got {target.dtype}.\")\n\n\ndef _perplexity_update(preds: Tensor, target: Tensor, ignore_index: Optional[int] = None) -> Tuple[Tensor, Tensor]:\n \"\"\"Compute intermediate statistics for Perplexity.\n\n Args:\n preds:\n Logits or a unnormalized score assigned to each token in a sequence with shape [batch_size, seq_len,\n vocab_size]. Scores will be normalized internally using softmax.\n target:\n Ground truth values with a shape [batch_size, seq_len].\n ignore_index:\n Integer specifying a target class to ignore. 
If given, this class index does not contribute\n to the returned score.\n\n Returns:\n Log probabilities, summed over all samples\n Number of samples\n \"\"\"\n _check_shape_and_type_consistency(preds, target)\n\n probs = F.softmax(preds.reshape(-1, preds.shape[-1]), dim=1)\n target = target.reshape(-1)\n\n if ignore_index is not None:\n mask = target.ne(ignore_index)\n target = target.where(target != ignore_index, torch.tensor(0, device=target.device))\n else:\n mask = torch.ones_like(target, dtype=torch.bool)\n\n probs = probs[:, target].diagonal()[mask]\n total_log_probs = -probs.log().sum()\n count = mask.sum()\n\n return total_log_probs, count\n\n\ndef _perplexity_compute(total: Tensor, count: Tensor) -> Tensor:\n \"\"\"Compute the Perplexity.\n\n Args:\n total: Log probabilities, summed over all samples\n count: Number of samples\n Returns:\n Perplexity\n \"\"\"\n return torch.exp(total / count)\n\n\ndef perplexity(preds: Tensor, target: Tensor, ignore_index: Optional[int] = None) -> Tensor:\n \"\"\"Perplexity measures how well a language model predicts a text sample.\n\n This metric is calculated as the average number of bits per word a model needs to represent the sample.\n\n Args:\n preds:\n Logits or a unnormalized score assigned to each token in a sequence with shape [batch_size, seq_len,\n vocab_size], which is the output of a language model. Scores will be normalized internally using softmax.\n target:\n Ground truth values with a shape [batch_size, seq_len].\n ignore_index:\n Integer specifying a target class to ignore. If given, this class index does not contribute\n to the returned score.\n\n Returns:\n Perplexity value\n\n Examples:\n >>> import torch\n >>> preds = torch.rand(2, 8, 5, generator=torch.manual_seed(22))\n >>> target = torch.randint(5, (2, 8), generator=torch.manual_seed(22))\n >>> target[0, 6:] = -100\n >>> perplexity(preds, target, ignore_index=-100)\n tensor(5.2545)\n \"\"\"\n total, count = _perplexity_update(preds, target, ignore_index)\n return _perplexity_compute(total, count)\n", "path": "src/torchmetrics/functional/text/perplexity.py"}, {"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Any, Dict, Optional, Sequence, Union\n\nfrom torch import Tensor, tensor\n\nfrom torchmetrics.functional.text.perplexity import _perplexity_compute, _perplexity_update\nfrom torchmetrics.metric import Metric\nfrom torchmetrics.utilities.imports import _MATPLOTLIB_AVAILABLE\nfrom torchmetrics.utilities.plot import _AX_TYPE, _PLOT_OUT_TYPE\n\nif not _MATPLOTLIB_AVAILABLE:\n __doctest_skip__ = [\"Perplexity.plot\"]\n\n\nclass Perplexity(Metric):\n r\"\"\"Perplexity measures how well a language model predicts a text sample.\n\n It's calculated as the average number of bits per word a model needs to represent the sample.\n\n As input to ``forward`` and ``update`` the metric accepts the following input:\n\n - ``preds`` (:class:`~torch.Tensor`): Logits or a unnormalized score assigned to each token in a sequence 
with shape\n [batch_size, seq_len, vocab_size], which is the output of a language model. Scores will be normalized internally\n using softmax.\n - ``target`` (:class:`~torch.Tensor`): Ground truth values with a shape [batch_size, seq_len]\n\n As output of ``forward`` and ``compute`` the metric returns the following output:\n\n - ``perp`` (:class:`~torch.Tensor`): A tensor with the perplexity score\n\n Args:\n ignore_index: Integer specifying a target class to ignore.\n If given, this class index does not contribute to the returned score.\n kwargs:\n Additional keyword arguments, see :ref:`Metric kwargs` for more info.\n\n Examples:\n >>> from torchmetrics.text import Perplexity\n >>> import torch\n >>> preds = torch.rand(2, 8, 5, generator=torch.manual_seed(22))\n >>> target = torch.randint(5, (2, 8), generator=torch.manual_seed(22))\n >>> target[0, 6:] = -100\n >>> perp = Perplexity(ignore_index=-100)\n >>> perp(preds, target)\n tensor(5.2545)\n \"\"\"\n is_differentiable = True\n higher_is_better = False\n full_state_update = False\n total_log_probs: Tensor\n count: Tensor\n\n def __init__(\n self,\n ignore_index: Optional[int] = None,\n **kwargs: Dict[str, Any],\n ) -> None:\n super().__init__(**kwargs)\n if ignore_index is not None and not isinstance(ignore_index, int):\n raise ValueError(f\"Argument `ignore_index` expected to either be `None` or an `int` but got {ignore_index}\")\n self.ignore_index = ignore_index\n self.add_state(\"total_log_probs\", default=tensor(0.0), dist_reduce_fx=\"sum\")\n self.add_state(\"count\", default=tensor(0.0), dist_reduce_fx=\"sum\")\n\n def update(self, preds: Tensor, target: Tensor) -> None:\n \"\"\"Update state with predictions and targets.\"\"\"\n total_log_probs, count = _perplexity_update(preds, target, self.ignore_index)\n self.total_log_probs += total_log_probs\n self.count += count\n\n def compute(self) -> Tensor:\n \"\"\"Compute the Perplexity.\"\"\"\n return _perplexity_compute(self.total_log_probs, self.count)\n\n def plot(\n self, val: Optional[Union[Tensor, Sequence[Tensor]]] = None, ax: Optional[_AX_TYPE] = None\n ) -> _PLOT_OUT_TYPE:\n \"\"\"Plot a single or multiple values from the metric.\n\n Args:\n val: Either a single result from calling `metric.forward` or `metric.compute` or a list of these results.\n If no value is provided, will automatically call `metric.compute` and plot that result.\n ax: An matplotlib axis object. If provided will add plot to that axis\n\n Returns:\n Figure and Axes object\n\n Raises:\n ModuleNotFoundError:\n If `matplotlib` is not installed\n\n .. plot::\n :scale: 75\n\n >>> # Example plotting a single value\n >>> import torch\n >>> from torchmetrics.text import Perplexity\n >>> metric = Perplexity()\n >>> metric.update(torch.rand(2, 8, 5), torch.randint(5, (2, 8)))\n >>> fig_, ax_ = metric.plot()\n\n .. plot::\n :scale: 75\n\n >>> # Example plotting multiple values\n >>> import torch\n >>> from torchmetrics.text import Perplexity\n >>> metric = Perplexity()\n >>> values = [ ]\n >>> for _ in range(10):\n ... values.append(metric(torch.rand(2, 8, 5), torch.randint(5, (2, 8))))\n >>> fig_, ax_ = metric.plot(values)\n \"\"\"\n return self._plot(val, ax)\n", "path": "src/torchmetrics/text/perplexity.py"}]} | 3,551 | 592 |
gh_patches_debug_21007 | rasdani/github-patches | git_diff | joke2k__faker-213 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`.prefix`/`.suffix` returns a tuple instead of a single value
`.prefix` (and `.suffix`) can occasionally return a tuple of values instead of a single value when `prefixes_male` and `prefixes_female` (or `suffixes_*`) are present in the provider.
[See here for the code responsible.](https://github.com/joke2k/faker/blob/2af330e09d84306d10921fed00ad2e5cc8e3d36f/faker/providers/person/__init__.py#L93-L94)
I wasn't sure if this was intentional (it's documented to do so -- then again, the documentation is autogenerated, isn't it?), so I didn't make a PR yet, but it's certainly counterintuitive.
--- END ISSUE ---
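Editor's note (not part of the original report): a minimal, Faker-independent sketch of the failure mode. `random_element` behaves essentially like `random.choice` here, and the prefix pools below are illustrative stand-ins, not Faker's real data.
```python
import random

# Hypothetical prefix pools, standing in for the provider's class attributes.
prefixes_male = ("Mr.", "Dr.")
prefixes_female = ("Mrs.", "Ms.", "Dr.")

# Buggy pattern: choosing from a tuple *of sequences* yields one whole sequence.
picked = random.choice((prefixes_male, prefixes_female))
print(picked)               # e.g. ('Mrs.', 'Ms.', 'Dr.') -- a tuple, not a single prefix

# Fixed pattern: pick a pool first, then pick a single element from that pool.
pool = random.choice((prefixes_male, prefixes_female))
print(random.choice(pool))  # e.g. 'Ms.' -- a single value
```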
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/providers/person/__init__.py`
Content:
```
1 localized = True
2
3 from .. import BaseProvider
4
5
6 class Provider(BaseProvider):
7 formats = ['{{first_name}} {{last_name}}', ]
8
9 first_names = ['John', 'Jane']
10
11 last_names = ['Doe', ]
12
13 def name(self):
14 """
15 :example 'John Doe'
16 """
17 pattern = self.random_element(self.formats)
18 return self.generator.parse(pattern)
19
20 @classmethod
21 def first_name(cls):
22 return cls.random_element(cls.first_names)
23
24 @classmethod
25 def last_name(cls):
26 return cls.random_element(cls.last_names)
27
28 def name_male(self):
29 if hasattr(self, 'formats_male'):
30 formats = self.formats_male
31 else:
32 formats = self.formats
33 pattern = self.random_element(formats)
34 return self.generator.parse(pattern)
35
36 def name_female(self):
37 if hasattr(self, 'formats_female'):
38 formats = self.formats_female
39 else:
40 formats = self.formats
41 pattern = self.random_element(formats)
42 return self.generator.parse(pattern)
43
44 @classmethod
45 def first_name_male(cls):
46 if hasattr(cls, 'first_names_male'):
47 return cls.random_element(cls.first_names_male)
48 return cls.first_name()
49
50 @classmethod
51 def first_name_female(cls):
52 if hasattr(cls, 'first_names_female'):
53 return cls.random_element(cls.first_names_female)
54 return cls.first_name()
55
56 @classmethod
57 def last_name_male(cls):
58 if hasattr(cls, 'last_names_male'):
59 return cls.random_element(cls.last_names_male)
60 return cls.last_name()
61
62 @classmethod
63 def last_name_female(cls):
64 if hasattr(cls, 'last_names_female'):
65 return cls.random_element(cls.last_names_female)
66 return cls.last_name()
67
68
69 @classmethod
70 def prefix(cls):
71 if hasattr(cls, 'prefixes'):
72 return cls.random_element(cls.prefixes)
73 if hasattr(cls, 'prefixes_male') and hasattr(cls, 'prefixes_female'):
74 return cls.random_element((cls.prefixes_male, cls.prefixes_female))
75 return ''
76
77 @classmethod
78 def prefix_male(cls):
79 if hasattr(cls, 'prefixes_male'):
80 return cls.random_element(cls.prefixes_male)
81 return cls.prefix()
82
83 @classmethod
84 def prefix_female(cls):
85 if hasattr(cls, 'prefixes_female'):
86 return cls.random_element(cls.prefixes_female)
87 return cls.prefix()
88
89 @classmethod
90 def suffix(cls):
91 if hasattr(cls, 'suffixes'):
92 return cls.random_element(cls.suffixes)
93 if hasattr(cls, 'suffixes_male') and hasattr(cls, 'suffixes_female'):
94 return cls.random_element((cls.suffixes_male, cls.suffixes_female))
95 return ''
96
97 @classmethod
98 def suffix_male(cls):
99 if hasattr(cls, 'suffixes_male'):
100 return cls.random_element(cls.suffixes_male)
101 return cls.suffix()
102
103 @classmethod
104 def suffix_female(cls):
105 if hasattr(cls, 'suffixes_female'):
106 return cls.random_element(cls.suffixes_female)
107 return cls.suffix()
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/faker/providers/person/__init__.py b/faker/providers/person/__init__.py
--- a/faker/providers/person/__init__.py
+++ b/faker/providers/person/__init__.py
@@ -71,7 +71,8 @@
if hasattr(cls, 'prefixes'):
return cls.random_element(cls.prefixes)
if hasattr(cls, 'prefixes_male') and hasattr(cls, 'prefixes_female'):
- return cls.random_element((cls.prefixes_male, cls.prefixes_female))
+ prefixes = cls.random_element((cls.prefixes_male, cls.prefixes_female))
+ return cls.random_element(prefixes)
return ''
@classmethod
@@ -91,7 +92,8 @@
if hasattr(cls, 'suffixes'):
return cls.random_element(cls.suffixes)
if hasattr(cls, 'suffixes_male') and hasattr(cls, 'suffixes_female'):
- return cls.random_element((cls.suffixes_male, cls.suffixes_female))
+ suffixes = cls.random_element((cls.suffixes_male, cls.suffixes_female))
+ return cls.random_element(suffixes)
return ''
@classmethod
| {"golden_diff": "diff --git a/faker/providers/person/__init__.py b/faker/providers/person/__init__.py\n--- a/faker/providers/person/__init__.py\n+++ b/faker/providers/person/__init__.py\n@@ -71,7 +71,8 @@\n if hasattr(cls, 'prefixes'):\n return cls.random_element(cls.prefixes)\n if hasattr(cls, 'prefixes_male') and hasattr(cls, 'prefixes_female'):\n- return cls.random_element((cls.prefixes_male, cls.prefixes_female))\n+ prefixes = cls.random_element((cls.prefixes_male, cls.prefixes_female))\n+ return cls.random_element(prefixes)\n return ''\n \n @classmethod\n@@ -91,7 +92,8 @@\n if hasattr(cls, 'suffixes'):\n return cls.random_element(cls.suffixes)\n if hasattr(cls, 'suffixes_male') and hasattr(cls, 'suffixes_female'):\n- return cls.random_element((cls.suffixes_male, cls.suffixes_female))\n+ suffixes = cls.random_element((cls.suffixes_male, cls.suffixes_female))\n+ return cls.random_element(suffixes)\n return ''\n \n @classmethod\n", "issue": "`.prefix`/`.suffix` returns a tuple instead of a single value\n`.prefix` (and `.suffix`) can occasionally return a tuple of values instead of a single value when `prefixes_male` and `prefixes_female` (or `suffixes_*`) are present in the provider.\n\n[See here for the code responsible.](https://github.com/joke2k/faker/blob/2af330e09d84306d10921fed00ad2e5cc8e3d36f/faker/providers/person/__init__.py#L93-L94)\n\nI wasn't sure if this was intentional (it's documented to do so -- then again, the documentation is autogenerated, isn't it?), so I didn't make a PR yet, but it's certainly counterintuitive.\n\n", "before_files": [{"content": "localized = True\n\nfrom .. import BaseProvider\n\n\nclass Provider(BaseProvider):\n formats = ['{{first_name}} {{last_name}}', ]\n\n first_names = ['John', 'Jane']\n\n last_names = ['Doe', ]\n\n def name(self):\n \"\"\"\n :example 'John Doe'\n \"\"\"\n pattern = self.random_element(self.formats)\n return self.generator.parse(pattern)\n\n @classmethod\n def first_name(cls):\n return cls.random_element(cls.first_names)\n\n @classmethod\n def last_name(cls):\n return cls.random_element(cls.last_names)\n\n def name_male(self):\n if hasattr(self, 'formats_male'):\n formats = self.formats_male\n else:\n formats = self.formats\n pattern = self.random_element(formats)\n return self.generator.parse(pattern)\n\n def name_female(self):\n if hasattr(self, 'formats_female'):\n formats = self.formats_female\n else:\n formats = self.formats\n pattern = self.random_element(formats)\n return self.generator.parse(pattern)\n\n @classmethod\n def first_name_male(cls):\n if hasattr(cls, 'first_names_male'):\n return cls.random_element(cls.first_names_male)\n return cls.first_name()\n\n @classmethod\n def first_name_female(cls):\n if hasattr(cls, 'first_names_female'):\n return cls.random_element(cls.first_names_female)\n return cls.first_name()\n\n @classmethod\n def last_name_male(cls):\n if hasattr(cls, 'last_names_male'):\n return cls.random_element(cls.last_names_male)\n return cls.last_name()\n\n @classmethod\n def last_name_female(cls):\n if hasattr(cls, 'last_names_female'):\n return cls.random_element(cls.last_names_female)\n return cls.last_name()\n\n\n @classmethod\n def prefix(cls):\n if hasattr(cls, 'prefixes'):\n return cls.random_element(cls.prefixes)\n if hasattr(cls, 'prefixes_male') and hasattr(cls, 'prefixes_female'):\n return cls.random_element((cls.prefixes_male, cls.prefixes_female))\n return ''\n\n @classmethod\n def prefix_male(cls):\n if hasattr(cls, 'prefixes_male'):\n return cls.random_element(cls.prefixes_male)\n 
return cls.prefix()\n\n @classmethod\n def prefix_female(cls):\n if hasattr(cls, 'prefixes_female'):\n return cls.random_element(cls.prefixes_female)\n return cls.prefix()\n\n @classmethod\n def suffix(cls):\n if hasattr(cls, 'suffixes'):\n return cls.random_element(cls.suffixes)\n if hasattr(cls, 'suffixes_male') and hasattr(cls, 'suffixes_female'):\n return cls.random_element((cls.suffixes_male, cls.suffixes_female))\n return ''\n\n @classmethod\n def suffix_male(cls):\n if hasattr(cls, 'suffixes_male'):\n return cls.random_element(cls.suffixes_male)\n return cls.suffix()\n\n @classmethod\n def suffix_female(cls):\n if hasattr(cls, 'suffixes_female'):\n return cls.random_element(cls.suffixes_female)\n return cls.suffix()\n", "path": "faker/providers/person/__init__.py"}], "after_files": [{"content": "localized = True\n\nfrom .. import BaseProvider\n\n\nclass Provider(BaseProvider):\n formats = ['{{first_name}} {{last_name}}', ]\n\n first_names = ['John', 'Jane']\n\n last_names = ['Doe', ]\n\n def name(self):\n \"\"\"\n :example 'John Doe'\n \"\"\"\n pattern = self.random_element(self.formats)\n return self.generator.parse(pattern)\n\n @classmethod\n def first_name(cls):\n return cls.random_element(cls.first_names)\n\n @classmethod\n def last_name(cls):\n return cls.random_element(cls.last_names)\n\n def name_male(self):\n if hasattr(self, 'formats_male'):\n formats = self.formats_male\n else:\n formats = self.formats\n pattern = self.random_element(formats)\n return self.generator.parse(pattern)\n\n def name_female(self):\n if hasattr(self, 'formats_female'):\n formats = self.formats_female\n else:\n formats = self.formats\n pattern = self.random_element(formats)\n return self.generator.parse(pattern)\n\n @classmethod\n def first_name_male(cls):\n if hasattr(cls, 'first_names_male'):\n return cls.random_element(cls.first_names_male)\n return cls.first_name()\n\n @classmethod\n def first_name_female(cls):\n if hasattr(cls, 'first_names_female'):\n return cls.random_element(cls.first_names_female)\n return cls.first_name()\n\n @classmethod\n def last_name_male(cls):\n if hasattr(cls, 'last_names_male'):\n return cls.random_element(cls.last_names_male)\n return cls.last_name()\n\n @classmethod\n def last_name_female(cls):\n if hasattr(cls, 'last_names_female'):\n return cls.random_element(cls.last_names_female)\n return cls.last_name()\n\n\n @classmethod\n def prefix(cls):\n if hasattr(cls, 'prefixes'):\n return cls.random_element(cls.prefixes)\n if hasattr(cls, 'prefixes_male') and hasattr(cls, 'prefixes_female'):\n prefixes = cls.random_element((cls.prefixes_male, cls.prefixes_female))\n return cls.random_element(prefixes)\n return ''\n\n @classmethod\n def prefix_male(cls):\n if hasattr(cls, 'prefixes_male'):\n return cls.random_element(cls.prefixes_male)\n return cls.prefix()\n\n @classmethod\n def prefix_female(cls):\n if hasattr(cls, 'prefixes_female'):\n return cls.random_element(cls.prefixes_female)\n return cls.prefix()\n\n @classmethod\n def suffix(cls):\n if hasattr(cls, 'suffixes'):\n return cls.random_element(cls.suffixes)\n if hasattr(cls, 'suffixes_male') and hasattr(cls, 'suffixes_female'):\n suffixes = cls.random_element((cls.suffixes_male, cls.suffixes_female))\n return cls.random_element(suffixes)\n return ''\n\n @classmethod\n def suffix_male(cls):\n if hasattr(cls, 'suffixes_male'):\n return cls.random_element(cls.suffixes_male)\n return cls.suffix()\n\n @classmethod\n def suffix_female(cls):\n if hasattr(cls, 'suffixes_female'):\n return 
cls.random_element(cls.suffixes_female)\n return cls.suffix()\n", "path": "faker/providers/person/__init__.py"}]} | 1,318 | 252 |
gh_patches_debug_5043 | rasdani/github-patches | git_diff | dask__distributed-3652 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Variable delete raises KeyError
It seems that there is an unhandled KeyError in the scheduler variable delete logic.
When using the asynchronous API the error is only visible in the scheduler logs, but it does mean that Variables will be "leaking" and staying around instead of being deleted.
It happens regardless of whether you set a value.
Shortest reproduction I can provide is as follows:
```python
from distributed import Client
from distributed import Variable
def main():
Client()
Variable().delete()
if __name__ == '__main__':
main()
```
```python
tornado.application - ERROR - Exception in callback functools.partial(<bound method IOLoop._discard_future_result of <tornado.platform.asyncio.AsyncIOLoop object at 0x7f8fe9265be0>>, <Task finished coro=<VariableExtension.delete() done, defined at ./venv/lib/python3.6/site-packages/distributed/variable.py:101> exception=KeyError('variable-68a51209410248fa9f69f84b25f48343',)>)
Traceback (most recent call last):
File "./venv/lib/python3.6/site-packages/tornado/ioloop.py", line 743, in _run_callback
ret = callback()
File "./venv/lib/python3.6/site-packages/tornado/ioloop.py", line 767, in _discard_future_result
future.result()
File "./venv/lib/python3.6/site-packages/distributed/variable.py", line 110, in delete
del self.waiting_conditions[name]
KeyError: 'variable-68a51209410248fa9f69f84b25f48343'
```
Python 3.6.10
Reproduced with distributed 2.9.3 and 2.10.0
--- END ISSUE ---
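Editor's note: a minimal, Distributed-independent sketch of why the unguarded `del` fails. `collections.defaultdict` only creates an entry when a key is *read*, so deleting a key that was never touched still raises `KeyError`; the tolerant form below mirrors the idea of wrapping the delete in an ignore-`KeyError` context. The variable name is taken from the traceback above and is otherwise illustrative.
```python
import asyncio
from collections import defaultdict
from contextlib import suppress

# Same shape as the scheduler extension's state.
waiting_conditions = defaultdict(asyncio.Condition)

name = "variable-68a51209410248fa9f69f84b25f48343"  # never read, so no entry exists

try:
    del waiting_conditions[name]   # defaultdict does not auto-create on delete
except KeyError as err:
    print("unguarded delete fails:", err)

# Tolerant delete -- conceptually what the fix does with an ignoring(KeyError) guard.
with suppress(KeyError):
    del waiting_conditions[name]
```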
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `distributed/variable.py`
Content:
```
1 import asyncio
2 from collections import defaultdict
3 import logging
4 import uuid
5
6 from tlz import merge
7
8 from .client import Future, _get_global_client, Client
9 from .utils import tokey, log_errors, TimeoutError, ignoring
10 from .worker import get_client
11
12 logger = logging.getLogger(__name__)
13
14
15 class VariableExtension:
16 """ An extension for the scheduler to manage queues
17
18 This adds the following routes to the scheduler
19
20 * variable-set
21 * variable-get
22 * variable-delete
23 """
24
25 def __init__(self, scheduler):
26 self.scheduler = scheduler
27 self.variables = dict()
28 self.waiting = defaultdict(set)
29 self.waiting_conditions = defaultdict(asyncio.Condition)
30 self.started = asyncio.Condition()
31
32 self.scheduler.handlers.update(
33 {"variable_set": self.set, "variable_get": self.get}
34 )
35
36 self.scheduler.stream_handlers["variable-future-release"] = self.future_release
37 self.scheduler.stream_handlers["variable_delete"] = self.delete
38
39 self.scheduler.extensions["variables"] = self
40
41 async def set(self, stream=None, name=None, key=None, data=None, client=None):
42 if key is not None:
43 record = {"type": "Future", "value": key}
44 self.scheduler.client_desires_keys(keys=[key], client="variable-%s" % name)
45 else:
46 record = {"type": "msgpack", "value": data}
47 try:
48 old = self.variables[name]
49 except KeyError:
50 pass
51 else:
52 if old["type"] == "Future" and old["value"] != key:
53 asyncio.ensure_future(self.release(old["value"], name))
54 if name not in self.variables:
55 async with self.started:
56 self.started.notify_all()
57 self.variables[name] = record
58
59 async def release(self, key, name):
60 while self.waiting[key, name]:
61 async with self.waiting_conditions[name]:
62 await self.waiting_conditions[name].wait()
63
64 self.scheduler.client_releases_keys(keys=[key], client="variable-%s" % name)
65 del self.waiting[key, name]
66
67 async def future_release(self, name=None, key=None, token=None, client=None):
68 self.waiting[key, name].remove(token)
69 if not self.waiting[key, name]:
70 async with self.waiting_conditions[name]:
71 self.waiting_conditions[name].notify_all()
72
73 async def get(self, stream=None, name=None, client=None, timeout=None):
74 start = self.scheduler.loop.time()
75 while name not in self.variables:
76 if timeout is not None:
77 left = timeout - (self.scheduler.loop.time() - start)
78 else:
79 left = None
80 if left and left < 0:
81 raise TimeoutError()
82 try:
83
84 async def _(): # Python 3.6 is odd and requires special help here
85 await self.started.acquire()
86 await self.started.wait()
87
88 await asyncio.wait_for(_(), timeout=left)
89 finally:
90 with ignoring(RuntimeError): # Python 3.6 loses lock on finally clause
91 self.started.release()
92
93 record = self.variables[name]
94 if record["type"] == "Future":
95 key = record["value"]
96 token = uuid.uuid4().hex
97 ts = self.scheduler.tasks.get(key)
98 state = ts.state if ts is not None else "lost"
99 msg = {"token": token, "state": state}
100 if state == "erred":
101 msg["exception"] = ts.exception_blame.exception
102 msg["traceback"] = ts.exception_blame.traceback
103 record = merge(record, msg)
104 self.waiting[key, name].add(token)
105 return record
106
107 async def delete(self, stream=None, name=None, client=None):
108 with log_errors():
109 try:
110 old = self.variables[name]
111 except KeyError:
112 pass
113 else:
114 if old["type"] == "Future":
115 await self.release(old["value"], name)
116 del self.waiting_conditions[name]
117 del self.variables[name]
118
119
120 class Variable:
121 """ Distributed Global Variable
122
123 This allows multiple clients to share futures and data between each other
124 with a single mutable variable. All metadata is sequentialized through the
125 scheduler. Race conditions can occur.
126
127 Values must be either Futures or msgpack-encodable data (ints, lists,
128 strings, etc..) All data will be kept and sent through the scheduler, so
129 it is wise not to send too much. If you want to share a large amount of
130 data then ``scatter`` it and share the future instead.
131
132 .. warning::
133
134 This object is experimental and has known issues in Python 2
135
136 Parameters
137 ----------
138 name: string (optional)
139 Name used by other clients and the scheduler to identify the variable.
140 If not given, a random name will be generated.
141 client: Client (optional)
142 Client used for communication with the scheduler. Defaults to the
143 value of ``_get_global_client()``.
144
145 Examples
146 --------
147 >>> from dask.distributed import Client, Variable # doctest: +SKIP
148 >>> client = Client() # doctest: +SKIP
149 >>> x = Variable('x') # doctest: +SKIP
150 >>> x.set(123) # docttest: +SKIP
151 >>> x.get() # docttest: +SKIP
152 123
153 >>> future = client.submit(f, x) # doctest: +SKIP
154 >>> x.set(future) # doctest: +SKIP
155
156 See Also
157 --------
158 Queue: shared multi-producer/multi-consumer queue between clients
159 """
160
161 def __init__(self, name=None, client=None, maxsize=0):
162 self.client = client or _get_global_client()
163 self.name = name or "variable-" + uuid.uuid4().hex
164
165 async def _set(self, value):
166 if isinstance(value, Future):
167 await self.client.scheduler.variable_set(
168 key=tokey(value.key), name=self.name
169 )
170 else:
171 await self.client.scheduler.variable_set(data=value, name=self.name)
172
173 def set(self, value, **kwargs):
174 """ Set the value of this variable
175
176 Parameters
177 ----------
178 value: Future or object
179 Must be either a Future or a msgpack-encodable value
180 """
181 return self.client.sync(self._set, value, **kwargs)
182
183 async def _get(self, timeout=None):
184 d = await self.client.scheduler.variable_get(
185 timeout=timeout, name=self.name, client=self.client.id
186 )
187 if d["type"] == "Future":
188 value = Future(d["value"], self.client, inform=True, state=d["state"])
189 if d["state"] == "erred":
190 value._state.set_error(d["exception"], d["traceback"])
191 self.client._send_to_scheduler(
192 {
193 "op": "variable-future-release",
194 "name": self.name,
195 "key": d["value"],
196 "token": d["token"],
197 }
198 )
199 else:
200 value = d["value"]
201 return value
202
203 def get(self, timeout=None, **kwargs):
204 """ Get the value of this variable """
205 return self.client.sync(self._get, timeout=timeout, **kwargs)
206
207 def delete(self):
208 """ Delete this variable
209
210 Caution, this affects all clients currently pointing to this variable.
211 """
212 if self.client.status == "running": # TODO: can leave zombie futures
213 self.client._send_to_scheduler({"op": "variable_delete", "name": self.name})
214
215 def __getstate__(self):
216 return (self.name, self.client.scheduler.address)
217
218 def __setstate__(self, state):
219 name, address = state
220 try:
221 client = get_client(address)
222 assert client.scheduler.address == address
223 except (AttributeError, AssertionError):
224 client = Client(address, set_as_default=False)
225 self.__init__(name=name, client=client)
226
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/distributed/variable.py b/distributed/variable.py
--- a/distributed/variable.py
+++ b/distributed/variable.py
@@ -113,8 +113,10 @@
else:
if old["type"] == "Future":
await self.release(old["value"], name)
- del self.waiting_conditions[name]
- del self.variables[name]
+ with ignoring(KeyError):
+ del self.waiting_conditions[name]
+ with ignoring(KeyError):
+ del self.variables[name]
class Variable:
| {"golden_diff": "diff --git a/distributed/variable.py b/distributed/variable.py\n--- a/distributed/variable.py\n+++ b/distributed/variable.py\n@@ -113,8 +113,10 @@\n else:\n if old[\"type\"] == \"Future\":\n await self.release(old[\"value\"], name)\n- del self.waiting_conditions[name]\n- del self.variables[name]\n+ with ignoring(KeyError):\n+ del self.waiting_conditions[name]\n+ with ignoring(KeyError):\n+ del self.variables[name]\n \n \n class Variable:\n", "issue": "Variable delete raises KeyError\nIt seems that there is an unhandled KeyError in the scheduler variable delete logic.\r\n\r\nIf using the asynchronous API only visible in the scheduler logs but it does mean that Variables will be \"leaking\" and staying around instead of being deleted.\r\n\r\nIt happens regardless of whether you set a value.\r\n\r\nShortest reproduction I can provide is as follows:\r\n```python\r\nfrom distributed import Client\r\nfrom distributed import Variable\r\n\r\n\r\ndef main():\r\n Client()\r\n Variable().delete()\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\n\r\n```python\r\ntornado.application - ERROR - Exception in callback functools.partial(<bound method IOLoop._discard_future_result of <tornado.platform.asyncio.AsyncIOLoop object at 0x7f8fe9265be0>>, <Task finished coro=<VariableExtension.delete() done, defined at ./venv/lib/python3.6/site-packages/distributed/variable.py:101> exception=KeyError('variable-68a51209410248fa9f69f84b25f48343',)>)\r\nTraceback (most recent call last):\r\n File \"./venv/lib/python3.6/site-packages/tornado/ioloop.py\", line 743, in _run_callback\r\n ret = callback()\r\n File \"./venv/lib/python3.6/site-packages/tornado/ioloop.py\", line 767, in _discard_future_result\r\n future.result()\r\n File \"./venv/lib/python3.6/site-packages/distributed/variable.py\", line 110, in delete\r\n del self.waiting_conditions[name]\r\nKeyError: 'variable-68a51209410248fa9f69f84b25f48343'\r\n```\r\n\r\nPython 3.6.10\r\nReproduced with distributed 2.9.3 and 2.10.0\n", "before_files": [{"content": "import asyncio\nfrom collections import defaultdict\nimport logging\nimport uuid\n\nfrom tlz import merge\n\nfrom .client import Future, _get_global_client, Client\nfrom .utils import tokey, log_errors, TimeoutError, ignoring\nfrom .worker import get_client\n\nlogger = logging.getLogger(__name__)\n\n\nclass VariableExtension:\n \"\"\" An extension for the scheduler to manage queues\n\n This adds the following routes to the scheduler\n\n * variable-set\n * variable-get\n * variable-delete\n \"\"\"\n\n def __init__(self, scheduler):\n self.scheduler = scheduler\n self.variables = dict()\n self.waiting = defaultdict(set)\n self.waiting_conditions = defaultdict(asyncio.Condition)\n self.started = asyncio.Condition()\n\n self.scheduler.handlers.update(\n {\"variable_set\": self.set, \"variable_get\": self.get}\n )\n\n self.scheduler.stream_handlers[\"variable-future-release\"] = self.future_release\n self.scheduler.stream_handlers[\"variable_delete\"] = self.delete\n\n self.scheduler.extensions[\"variables\"] = self\n\n async def set(self, stream=None, name=None, key=None, data=None, client=None):\n if key is not None:\n record = {\"type\": \"Future\", \"value\": key}\n self.scheduler.client_desires_keys(keys=[key], client=\"variable-%s\" % name)\n else:\n record = {\"type\": \"msgpack\", \"value\": data}\n try:\n old = self.variables[name]\n except KeyError:\n pass\n else:\n if old[\"type\"] == \"Future\" and old[\"value\"] != key:\n asyncio.ensure_future(self.release(old[\"value\"], 
name))\n if name not in self.variables:\n async with self.started:\n self.started.notify_all()\n self.variables[name] = record\n\n async def release(self, key, name):\n while self.waiting[key, name]:\n async with self.waiting_conditions[name]:\n await self.waiting_conditions[name].wait()\n\n self.scheduler.client_releases_keys(keys=[key], client=\"variable-%s\" % name)\n del self.waiting[key, name]\n\n async def future_release(self, name=None, key=None, token=None, client=None):\n self.waiting[key, name].remove(token)\n if not self.waiting[key, name]:\n async with self.waiting_conditions[name]:\n self.waiting_conditions[name].notify_all()\n\n async def get(self, stream=None, name=None, client=None, timeout=None):\n start = self.scheduler.loop.time()\n while name not in self.variables:\n if timeout is not None:\n left = timeout - (self.scheduler.loop.time() - start)\n else:\n left = None\n if left and left < 0:\n raise TimeoutError()\n try:\n\n async def _(): # Python 3.6 is odd and requires special help here\n await self.started.acquire()\n await self.started.wait()\n\n await asyncio.wait_for(_(), timeout=left)\n finally:\n with ignoring(RuntimeError): # Python 3.6 loses lock on finally clause\n self.started.release()\n\n record = self.variables[name]\n if record[\"type\"] == \"Future\":\n key = record[\"value\"]\n token = uuid.uuid4().hex\n ts = self.scheduler.tasks.get(key)\n state = ts.state if ts is not None else \"lost\"\n msg = {\"token\": token, \"state\": state}\n if state == \"erred\":\n msg[\"exception\"] = ts.exception_blame.exception\n msg[\"traceback\"] = ts.exception_blame.traceback\n record = merge(record, msg)\n self.waiting[key, name].add(token)\n return record\n\n async def delete(self, stream=None, name=None, client=None):\n with log_errors():\n try:\n old = self.variables[name]\n except KeyError:\n pass\n else:\n if old[\"type\"] == \"Future\":\n await self.release(old[\"value\"], name)\n del self.waiting_conditions[name]\n del self.variables[name]\n\n\nclass Variable:\n \"\"\" Distributed Global Variable\n\n This allows multiple clients to share futures and data between each other\n with a single mutable variable. All metadata is sequentialized through the\n scheduler. Race conditions can occur.\n\n Values must be either Futures or msgpack-encodable data (ints, lists,\n strings, etc..) All data will be kept and sent through the scheduler, so\n it is wise not to send too much. If you want to share a large amount of\n data then ``scatter`` it and share the future instead.\n\n .. warning::\n\n This object is experimental and has known issues in Python 2\n\n Parameters\n ----------\n name: string (optional)\n Name used by other clients and the scheduler to identify the variable.\n If not given, a random name will be generated.\n client: Client (optional)\n Client used for communication with the scheduler. 
Defaults to the\n value of ``_get_global_client()``.\n\n Examples\n --------\n >>> from dask.distributed import Client, Variable # doctest: +SKIP\n >>> client = Client() # doctest: +SKIP\n >>> x = Variable('x') # doctest: +SKIP\n >>> x.set(123) # docttest: +SKIP\n >>> x.get() # docttest: +SKIP\n 123\n >>> future = client.submit(f, x) # doctest: +SKIP\n >>> x.set(future) # doctest: +SKIP\n\n See Also\n --------\n Queue: shared multi-producer/multi-consumer queue between clients\n \"\"\"\n\n def __init__(self, name=None, client=None, maxsize=0):\n self.client = client or _get_global_client()\n self.name = name or \"variable-\" + uuid.uuid4().hex\n\n async def _set(self, value):\n if isinstance(value, Future):\n await self.client.scheduler.variable_set(\n key=tokey(value.key), name=self.name\n )\n else:\n await self.client.scheduler.variable_set(data=value, name=self.name)\n\n def set(self, value, **kwargs):\n \"\"\" Set the value of this variable\n\n Parameters\n ----------\n value: Future or object\n Must be either a Future or a msgpack-encodable value\n \"\"\"\n return self.client.sync(self._set, value, **kwargs)\n\n async def _get(self, timeout=None):\n d = await self.client.scheduler.variable_get(\n timeout=timeout, name=self.name, client=self.client.id\n )\n if d[\"type\"] == \"Future\":\n value = Future(d[\"value\"], self.client, inform=True, state=d[\"state\"])\n if d[\"state\"] == \"erred\":\n value._state.set_error(d[\"exception\"], d[\"traceback\"])\n self.client._send_to_scheduler(\n {\n \"op\": \"variable-future-release\",\n \"name\": self.name,\n \"key\": d[\"value\"],\n \"token\": d[\"token\"],\n }\n )\n else:\n value = d[\"value\"]\n return value\n\n def get(self, timeout=None, **kwargs):\n \"\"\" Get the value of this variable \"\"\"\n return self.client.sync(self._get, timeout=timeout, **kwargs)\n\n def delete(self):\n \"\"\" Delete this variable\n\n Caution, this affects all clients currently pointing to this variable.\n \"\"\"\n if self.client.status == \"running\": # TODO: can leave zombie futures\n self.client._send_to_scheduler({\"op\": \"variable_delete\", \"name\": self.name})\n\n def __getstate__(self):\n return (self.name, self.client.scheduler.address)\n\n def __setstate__(self, state):\n name, address = state\n try:\n client = get_client(address)\n assert client.scheduler.address == address\n except (AttributeError, AssertionError):\n client = Client(address, set_as_default=False)\n self.__init__(name=name, client=client)\n", "path": "distributed/variable.py"}], "after_files": [{"content": "import asyncio\nfrom collections import defaultdict\nimport logging\nimport uuid\n\nfrom tlz import merge\n\nfrom .client import Future, _get_global_client, Client\nfrom .utils import tokey, log_errors, TimeoutError, ignoring\nfrom .worker import get_client\n\nlogger = logging.getLogger(__name__)\n\n\nclass VariableExtension:\n \"\"\" An extension for the scheduler to manage queues\n\n This adds the following routes to the scheduler\n\n * variable-set\n * variable-get\n * variable-delete\n \"\"\"\n\n def __init__(self, scheduler):\n self.scheduler = scheduler\n self.variables = dict()\n self.waiting = defaultdict(set)\n self.waiting_conditions = defaultdict(asyncio.Condition)\n self.started = asyncio.Condition()\n\n self.scheduler.handlers.update(\n {\"variable_set\": self.set, \"variable_get\": self.get}\n )\n\n self.scheduler.stream_handlers[\"variable-future-release\"] = self.future_release\n self.scheduler.stream_handlers[\"variable_delete\"] = self.delete\n\n 
self.scheduler.extensions[\"variables\"] = self\n\n async def set(self, stream=None, name=None, key=None, data=None, client=None):\n if key is not None:\n record = {\"type\": \"Future\", \"value\": key}\n self.scheduler.client_desires_keys(keys=[key], client=\"variable-%s\" % name)\n else:\n record = {\"type\": \"msgpack\", \"value\": data}\n try:\n old = self.variables[name]\n except KeyError:\n pass\n else:\n if old[\"type\"] == \"Future\" and old[\"value\"] != key:\n asyncio.ensure_future(self.release(old[\"value\"], name))\n if name not in self.variables:\n async with self.started:\n self.started.notify_all()\n self.variables[name] = record\n\n async def release(self, key, name):\n while self.waiting[key, name]:\n async with self.waiting_conditions[name]:\n await self.waiting_conditions[name].wait()\n\n self.scheduler.client_releases_keys(keys=[key], client=\"variable-%s\" % name)\n del self.waiting[key, name]\n\n async def future_release(self, name=None, key=None, token=None, client=None):\n self.waiting[key, name].remove(token)\n if not self.waiting[key, name]:\n async with self.waiting_conditions[name]:\n self.waiting_conditions[name].notify_all()\n\n async def get(self, stream=None, name=None, client=None, timeout=None):\n start = self.scheduler.loop.time()\n while name not in self.variables:\n if timeout is not None:\n left = timeout - (self.scheduler.loop.time() - start)\n else:\n left = None\n if left and left < 0:\n raise TimeoutError()\n try:\n\n async def _(): # Python 3.6 is odd and requires special help here\n await self.started.acquire()\n await self.started.wait()\n\n await asyncio.wait_for(_(), timeout=left)\n finally:\n with ignoring(RuntimeError): # Python 3.6 loses lock on finally clause\n self.started.release()\n\n record = self.variables[name]\n if record[\"type\"] == \"Future\":\n key = record[\"value\"]\n token = uuid.uuid4().hex\n ts = self.scheduler.tasks.get(key)\n state = ts.state if ts is not None else \"lost\"\n msg = {\"token\": token, \"state\": state}\n if state == \"erred\":\n msg[\"exception\"] = ts.exception_blame.exception\n msg[\"traceback\"] = ts.exception_blame.traceback\n record = merge(record, msg)\n self.waiting[key, name].add(token)\n return record\n\n async def delete(self, stream=None, name=None, client=None):\n with log_errors():\n try:\n old = self.variables[name]\n except KeyError:\n pass\n else:\n if old[\"type\"] == \"Future\":\n await self.release(old[\"value\"], name)\n with ignoring(KeyError):\n del self.waiting_conditions[name]\n with ignoring(KeyError):\n del self.variables[name]\n\n\nclass Variable:\n \"\"\" Distributed Global Variable\n\n This allows multiple clients to share futures and data between each other\n with a single mutable variable. All metadata is sequentialized through the\n scheduler. Race conditions can occur.\n\n Values must be either Futures or msgpack-encodable data (ints, lists,\n strings, etc..) All data will be kept and sent through the scheduler, so\n it is wise not to send too much. If you want to share a large amount of\n data then ``scatter`` it and share the future instead.\n\n .. warning::\n\n This object is experimental and has known issues in Python 2\n\n Parameters\n ----------\n name: string (optional)\n Name used by other clients and the scheduler to identify the variable.\n If not given, a random name will be generated.\n client: Client (optional)\n Client used for communication with the scheduler. 
Defaults to the\n value of ``_get_global_client()``.\n\n Examples\n --------\n >>> from dask.distributed import Client, Variable # doctest: +SKIP\n >>> client = Client() # doctest: +SKIP\n >>> x = Variable('x') # doctest: +SKIP\n >>> x.set(123) # docttest: +SKIP\n >>> x.get() # docttest: +SKIP\n 123\n >>> future = client.submit(f, x) # doctest: +SKIP\n >>> x.set(future) # doctest: +SKIP\n\n See Also\n --------\n Queue: shared multi-producer/multi-consumer queue between clients\n \"\"\"\n\n def __init__(self, name=None, client=None, maxsize=0):\n self.client = client or _get_global_client()\n self.name = name or \"variable-\" + uuid.uuid4().hex\n\n async def _set(self, value):\n if isinstance(value, Future):\n await self.client.scheduler.variable_set(\n key=tokey(value.key), name=self.name\n )\n else:\n await self.client.scheduler.variable_set(data=value, name=self.name)\n\n def set(self, value, **kwargs):\n \"\"\" Set the value of this variable\n\n Parameters\n ----------\n value: Future or object\n Must be either a Future or a msgpack-encodable value\n \"\"\"\n return self.client.sync(self._set, value, **kwargs)\n\n async def _get(self, timeout=None):\n d = await self.client.scheduler.variable_get(\n timeout=timeout, name=self.name, client=self.client.id\n )\n if d[\"type\"] == \"Future\":\n value = Future(d[\"value\"], self.client, inform=True, state=d[\"state\"])\n if d[\"state\"] == \"erred\":\n value._state.set_error(d[\"exception\"], d[\"traceback\"])\n self.client._send_to_scheduler(\n {\n \"op\": \"variable-future-release\",\n \"name\": self.name,\n \"key\": d[\"value\"],\n \"token\": d[\"token\"],\n }\n )\n else:\n value = d[\"value\"]\n return value\n\n def get(self, timeout=None, **kwargs):\n \"\"\" Get the value of this variable \"\"\"\n return self.client.sync(self._get, timeout=timeout, **kwargs)\n\n def delete(self):\n \"\"\" Delete this variable\n\n Caution, this affects all clients currently pointing to this variable.\n \"\"\"\n if self.client.status == \"running\": # TODO: can leave zombie futures\n self.client._send_to_scheduler({\"op\": \"variable_delete\", \"name\": self.name})\n\n def __getstate__(self):\n return (self.name, self.client.scheduler.address)\n\n def __setstate__(self, state):\n name, address = state\n try:\n client = get_client(address)\n assert client.scheduler.address == address\n except (AttributeError, AssertionError):\n client = Client(address, set_as_default=False)\n self.__init__(name=name, client=client)\n", "path": "distributed/variable.py"}]} | 2,988 | 120 |
gh_patches_debug_23955 | rasdani/github-patches | git_diff | arviz-devs__arviz-615 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Return value of plot_pair is numvars times the same array
The return value of `plot_pair` is a list that contains the array of subplots repeated `numvars` times. I guess it comes from the original version with gridspec.
Line 168 defines `axs` as an empty list, which is no longer used; line 219 appends `ax` to `axs`, but `ax` already contains all the subplots. Eventually, `axs` is returned instead of `ax`.
In addition, maybe the docstring should be updated to specify that `ax` (both as input and as return value) is actually an array of axes, not an axes object. It cannot be a single axes because there are many, but I am not sure if everyone will see it right away.
--- END ISSUE ---
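Editor's note: a small, matplotlib-free sketch of the reported effect. Appending the whole subplot grid once per loop iteration produces a list of identical references rather than the grid itself; the grid and variable count below are illustrative.
```python
# Stand-in for the 2-D array of Axes returned by plt.subplots(numvars - 1, numvars - 1).
grid = [["ax00", "ax01"], ["ax10", "ax11"]]

numvars = 3
axs = []
for i in range(numvars - 1):   # mirrors the per-variable loop in plot_pair
    # ... plotting into grid[j][i] happens here ...
    axs.append(grid)           # appends the *whole* grid on every iteration

print(len(axs))                # 2 -- one entry per loop iteration
print(axs[0] is axs[1])        # True -- every entry is the same object
# Returning the grid itself (as the patch does) avoids the duplicated wrapper list.
```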
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `arviz/plots/pairplot.py`
Content:
```
1 """Plot a scatter or hexbin of sampled parameters."""
2 import numpy as np
3 import matplotlib.pyplot as plt
4 from matplotlib.ticker import NullFormatter
5 from mpl_toolkits.axes_grid1 import make_axes_locatable
6
7 from ..data import convert_to_dataset
8 from .kdeplot import plot_kde
9 from .plot_utils import _scale_fig_size, xarray_to_ndarray, get_coords
10 from ..utils import _var_names
11
12
13 def plot_pair(
14 data,
15 var_names=None,
16 coords=None,
17 figsize=None,
18 textsize=None,
19 kind="scatter",
20 gridsize="auto",
21 contour=True,
22 fill_last=True,
23 divergences=False,
24 colorbar=False,
25 ax=None,
26 divergences_kwargs=None,
27 plot_kwargs=None,
28 ):
29 """
30 Plot a scatter or hexbin matrix of the sampled parameters.
31
32 Parameters
33 ----------
34 data : obj
35 Any object that can be converted to an az.InferenceData object
36 Refer to documentation of az.convert_to_dataset for details
37 var_names : list of variable names
38 Variables to be plotted, if None all variable are plotted
39 coords : mapping, optional
40 Coordinates of var_names to be plotted. Passed to `Dataset.sel`
41 figsize : figure size tuple
42 If None, size is (8 + numvars, 8 + numvars)
43 textsize: int
44 Text size for labels. If None it will be autoscaled based on figsize.
45 kind : str
46 Type of plot to display (kde or hexbin)
47 gridsize : int or (int, int), optional
48 Only works for kind=hexbin.
49 The number of hexagons in the x-direction. The corresponding number of hexagons in the
50 y-direction is chosen such that the hexagons are approximately regular.
51 Alternatively, gridsize can be a tuple with two elements specifying the number of hexagons
52 in the x-direction and the y-direction.
53 contour : bool
54 If True plot the 2D KDE using contours, otherwise plot a smooth 2D KDE. Defaults to True.
55 fill_last : bool
56 If True fill the last contour of the 2D KDE plot. Defaults to True.
57 divergences : Boolean
58 If True divergences will be plotted in a different color
59 colorbar : bool
60 If True a colorbar will be included as part of the plot (Defaults to False).
61 Only works when kind=hexbin
62 ax: axes
63 Matplotlib axes
64 divergences_kwargs : dicts, optional
65 Additional keywords passed to ax.scatter for divergences
66 plot_kwargs : dicts, optional
67 Additional keywords passed to ax.plot, az.plot_kde or ax.hexbin
68 Returns
69 -------
70 ax : matplotlib axes
71
72 Examples
73 --------
74 KDE Pair Plot
75
76 .. plot::
77 :context: close-figs
78
79 >>> import arviz as az
80 >>> centered = az.load_arviz_data('centered_eight')
81 >>> coords = {'school': ['Choate', 'Deerfield']}
82 >>> az.plot_pair(centered,
83 >>> var_names=['theta', 'mu', 'tau'],
84 >>> kind='kde',
85 >>> coords=coords,
86 >>> divergences=True,
87 >>> textsize=18)
88
89 Hexbin pair plot
90
91 .. plot::
92 :context: close-figs
93
94 >>> az.plot_pair(centered,
95 >>> var_names=['theta', 'mu'],
96 >>> coords=coords,
97 >>> textsize=18,
98 >>> kind='hexbin')
99
100 Pair plot showing divergences
101
102 .. plot::
103 :context: close-figs
104
105 >>> az.plot_pair(centered,
106 ... var_names=['theta', 'mu', 'tau'],
107 ... coords=coords,
108 ... divergences=True,
109 ... textsize=18)
110 """
111 valid_kinds = ["scatter", "kde", "hexbin"]
112 if kind not in valid_kinds:
113 raise ValueError(
114 ("Plot type {} not recognized." "Plot type must be in {}").format(kind, valid_kinds)
115 )
116
117 if coords is None:
118 coords = {}
119
120 if plot_kwargs is None:
121 plot_kwargs = {}
122
123 if kind == "scatter":
124 plot_kwargs.setdefault("marker", ".")
125 plot_kwargs.setdefault("lw", 0)
126
127 if divergences_kwargs is None:
128 divergences_kwargs = {}
129
130 divergences_kwargs.setdefault("marker", "o")
131 divergences_kwargs.setdefault("markeredgecolor", "k")
132 divergences_kwargs.setdefault("color", "C1")
133 divergences_kwargs.setdefault("lw", 0)
134
135 # Get posterior draws and combine chains
136 posterior_data = convert_to_dataset(data, group="posterior")
137 var_names = _var_names(var_names, posterior_data)
138 flat_var_names, _posterior = xarray_to_ndarray(
139 get_coords(posterior_data, coords), var_names=var_names, combined=True
140 )
141
142 # Get diverging draws and combine chains
143 if divergences:
144 divergent_data = convert_to_dataset(data, group="sample_stats")
145 _, diverging_mask = xarray_to_ndarray(
146 divergent_data, var_names=("diverging",), combined=True
147 )
148 diverging_mask = np.squeeze(diverging_mask)
149
150 if gridsize == "auto":
151 gridsize = int(len(_posterior[0]) ** 0.35)
152
153 numvars = len(flat_var_names)
154
155 if numvars < 2:
156 raise Exception("Number of variables to be plotted must be 2 or greater.")
157
158 if numvars == 2:
159 (figsize, ax_labelsize, _, xt_labelsize, _, _) = _scale_fig_size(
160 figsize, textsize, numvars - 1, numvars - 1
161 )
162
163 if ax is None:
164 fig, ax = plt.subplots(figsize=figsize, constrained_layout=True)
165
166 if kind == "scatter":
167 ax.plot(_posterior[0], _posterior[1], **plot_kwargs)
168 elif kind == "kde":
169 plot_kde(
170 _posterior[0],
171 _posterior[1],
172 contour=contour,
173 fill_last=fill_last,
174 ax=ax,
175 **plot_kwargs
176 )
177 else:
178 hexbin = ax.hexbin(
179 _posterior[0], _posterior[1], mincnt=1, gridsize=gridsize, **plot_kwargs
180 )
181 ax.grid(False)
182
183 if kind == "hexbin" and colorbar:
184 cbar = ax.figure.colorbar(hexbin, ticks=[hexbin.norm.vmin, hexbin.norm.vmax], ax=ax)
185 cbar.ax.set_yticklabels(["low", "high"], fontsize=ax_labelsize)
186
187 if divergences:
188 ax.plot(
189 _posterior[0][diverging_mask], _posterior[1][diverging_mask], **divergences_kwargs
190 )
191
192 ax.set_xlabel("{}".format(flat_var_names[0]), fontsize=ax_labelsize, wrap=True)
193 ax.set_ylabel("{}".format(flat_var_names[1]), fontsize=ax_labelsize, wrap=True)
194 ax.tick_params(labelsize=xt_labelsize)
195 axs = ax
196
197 else:
198 (figsize, ax_labelsize, _, xt_labelsize, _, _) = _scale_fig_size(
199 figsize, textsize, numvars - 2, numvars - 2
200 )
201
202 if ax is None:
203 fig, ax = plt.subplots(
204 numvars - 1, numvars - 1, figsize=figsize, constrained_layout=True
205 )
206 axs = []
207 hexbin_values = []
208 for i in range(0, numvars - 1):
209 var1 = _posterior[i]
210
211 for j in range(0, numvars - 1):
212 if j < i:
213 ax[j, i].axis("off")
214 continue
215
216 var2 = _posterior[j + 1]
217
218 if kind == "scatter":
219 ax[j, i].plot(var1, var2, **plot_kwargs)
220
221 elif kind == "kde":
222 plot_kde(
223 var1, var2, contour=contour, fill_last=fill_last, ax=ax[j, i], **plot_kwargs
224 )
225
226 else:
227 ax[j, i].grid(False)
228 hexbin = ax[j, i].hexbin(var1, var2, mincnt=1, gridsize=gridsize, **plot_kwargs)
229 if kind == "hexbin" and colorbar:
230 hexbin_values.append(hexbin.norm.vmin)
231 hexbin_values.append(hexbin.norm.vmax)
232 if j == i == 0 and colorbar:
233 divider = make_axes_locatable(ax[0, 1])
234 cax = divider.append_axes("left", size="7%")
235 cbar = fig.colorbar(
236 hexbin, ticks=[hexbin.norm.vmin, hexbin.norm.vmax], cax=cax
237 )
238 cbar.ax.set_yticklabels(["low", "high"], fontsize=ax_labelsize)
239
240 if divergences:
241 ax[j, i].plot(var1[diverging_mask], var2[diverging_mask], **divergences_kwargs)
242
243 if j + 1 != numvars - 1:
244 ax[j, i].axes.get_xaxis().set_major_formatter(NullFormatter())
245 else:
246 ax[j, i].set_xlabel(
247 "{}".format(flat_var_names[i]), fontsize=ax_labelsize, wrap=True
248 )
249 if i != 0:
250 ax[j, i].axes.get_yaxis().set_major_formatter(NullFormatter())
251 else:
252 ax[j, i].set_ylabel(
253 "{}".format(flat_var_names[j + 1]), fontsize=ax_labelsize, wrap=True
254 )
255
256 ax[j, i].tick_params(labelsize=xt_labelsize)
257 axs.append(ax)
258
259 return axs
260
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/arviz/plots/pairplot.py b/arviz/plots/pairplot.py
--- a/arviz/plots/pairplot.py
+++ b/arviz/plots/pairplot.py
@@ -192,7 +192,6 @@
ax.set_xlabel("{}".format(flat_var_names[0]), fontsize=ax_labelsize, wrap=True)
ax.set_ylabel("{}".format(flat_var_names[1]), fontsize=ax_labelsize, wrap=True)
ax.tick_params(labelsize=xt_labelsize)
- axs = ax
else:
(figsize, ax_labelsize, _, xt_labelsize, _, _) = _scale_fig_size(
@@ -203,7 +202,6 @@
fig, ax = plt.subplots(
numvars - 1, numvars - 1, figsize=figsize, constrained_layout=True
)
- axs = []
hexbin_values = []
for i in range(0, numvars - 1):
var1 = _posterior[i]
@@ -254,6 +252,5 @@
)
ax[j, i].tick_params(labelsize=xt_labelsize)
- axs.append(ax)
- return axs
+ return ax
| {"golden_diff": "diff --git a/arviz/plots/pairplot.py b/arviz/plots/pairplot.py\n--- a/arviz/plots/pairplot.py\n+++ b/arviz/plots/pairplot.py\n@@ -192,7 +192,6 @@\n ax.set_xlabel(\"{}\".format(flat_var_names[0]), fontsize=ax_labelsize, wrap=True)\n ax.set_ylabel(\"{}\".format(flat_var_names[1]), fontsize=ax_labelsize, wrap=True)\n ax.tick_params(labelsize=xt_labelsize)\n- axs = ax\n \n else:\n (figsize, ax_labelsize, _, xt_labelsize, _, _) = _scale_fig_size(\n@@ -203,7 +202,6 @@\n fig, ax = plt.subplots(\n numvars - 1, numvars - 1, figsize=figsize, constrained_layout=True\n )\n- axs = []\n hexbin_values = []\n for i in range(0, numvars - 1):\n var1 = _posterior[i]\n@@ -254,6 +252,5 @@\n )\n \n ax[j, i].tick_params(labelsize=xt_labelsize)\n- axs.append(ax)\n \n- return axs\n+ return ax\n", "issue": "Return value of plot_pair is numvars the same\nThe return value of `plot_pair` is a list that contains `numvars` times the array of subplots. I guess it comes from the original version with gridspec. \r\n\r\nLine 168 defines `axs` empty list, which is no longer used, line 219 appends `ax` to `axs`, but `ax` already contains all the subplots. Eventually, `axs` is returned instead of `ax`.\r\n\r\nIn addition, maybe the docstring should be updated to specify that `ax` (both as input and as return value) is actually an array of axes, not an axes object. It cannot be a single axes because there are many, but I am not sure if everyone will see it right away.\r\n\n", "before_files": [{"content": "\"\"\"Plot a scatter or hexbin of sampled parameters.\"\"\"\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.ticker import NullFormatter\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\n\nfrom ..data import convert_to_dataset\nfrom .kdeplot import plot_kde\nfrom .plot_utils import _scale_fig_size, xarray_to_ndarray, get_coords\nfrom ..utils import _var_names\n\n\ndef plot_pair(\n data,\n var_names=None,\n coords=None,\n figsize=None,\n textsize=None,\n kind=\"scatter\",\n gridsize=\"auto\",\n contour=True,\n fill_last=True,\n divergences=False,\n colorbar=False,\n ax=None,\n divergences_kwargs=None,\n plot_kwargs=None,\n):\n \"\"\"\n Plot a scatter or hexbin matrix of the sampled parameters.\n\n Parameters\n ----------\n data : obj\n Any object that can be converted to an az.InferenceData object\n Refer to documentation of az.convert_to_dataset for details\n var_names : list of variable names\n Variables to be plotted, if None all variable are plotted\n coords : mapping, optional\n Coordinates of var_names to be plotted. Passed to `Dataset.sel`\n figsize : figure size tuple\n If None, size is (8 + numvars, 8 + numvars)\n textsize: int\n Text size for labels. If None it will be autoscaled based on figsize.\n kind : str\n Type of plot to display (kde or hexbin)\n gridsize : int or (int, int), optional\n Only works for kind=hexbin.\n The number of hexagons in the x-direction. The corresponding number of hexagons in the\n y-direction is chosen such that the hexagons are approximately regular.\n Alternatively, gridsize can be a tuple with two elements specifying the number of hexagons\n in the x-direction and the y-direction.\n contour : bool\n If True plot the 2D KDE using contours, otherwise plot a smooth 2D KDE. Defaults to True.\n fill_last : bool\n If True fill the last contour of the 2D KDE plot. 
Defaults to True.\n divergences : Boolean\n If True divergences will be plotted in a different color\n colorbar : bool\n If True a colorbar will be included as part of the plot (Defaults to False).\n Only works when kind=hexbin\n ax: axes\n Matplotlib axes\n divergences_kwargs : dicts, optional\n Additional keywords passed to ax.scatter for divergences\n plot_kwargs : dicts, optional\n Additional keywords passed to ax.plot, az.plot_kde or ax.hexbin\n Returns\n -------\n ax : matplotlib axes\n\n Examples\n --------\n KDE Pair Plot\n\n .. plot::\n :context: close-figs\n\n >>> import arviz as az\n >>> centered = az.load_arviz_data('centered_eight')\n >>> coords = {'school': ['Choate', 'Deerfield']}\n >>> az.plot_pair(centered,\n >>> var_names=['theta', 'mu', 'tau'],\n >>> kind='kde',\n >>> coords=coords,\n >>> divergences=True,\n >>> textsize=18)\n\n Hexbin pair plot\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_pair(centered,\n >>> var_names=['theta', 'mu'],\n >>> coords=coords,\n >>> textsize=18,\n >>> kind='hexbin')\n\n Pair plot showing divergences\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_pair(centered,\n ... var_names=['theta', 'mu', 'tau'],\n ... coords=coords,\n ... divergences=True,\n ... textsize=18)\n \"\"\"\n valid_kinds = [\"scatter\", \"kde\", \"hexbin\"]\n if kind not in valid_kinds:\n raise ValueError(\n (\"Plot type {} not recognized.\" \"Plot type must be in {}\").format(kind, valid_kinds)\n )\n\n if coords is None:\n coords = {}\n\n if plot_kwargs is None:\n plot_kwargs = {}\n\n if kind == \"scatter\":\n plot_kwargs.setdefault(\"marker\", \".\")\n plot_kwargs.setdefault(\"lw\", 0)\n\n if divergences_kwargs is None:\n divergences_kwargs = {}\n\n divergences_kwargs.setdefault(\"marker\", \"o\")\n divergences_kwargs.setdefault(\"markeredgecolor\", \"k\")\n divergences_kwargs.setdefault(\"color\", \"C1\")\n divergences_kwargs.setdefault(\"lw\", 0)\n\n # Get posterior draws and combine chains\n posterior_data = convert_to_dataset(data, group=\"posterior\")\n var_names = _var_names(var_names, posterior_data)\n flat_var_names, _posterior = xarray_to_ndarray(\n get_coords(posterior_data, coords), var_names=var_names, combined=True\n )\n\n # Get diverging draws and combine chains\n if divergences:\n divergent_data = convert_to_dataset(data, group=\"sample_stats\")\n _, diverging_mask = xarray_to_ndarray(\n divergent_data, var_names=(\"diverging\",), combined=True\n )\n diverging_mask = np.squeeze(diverging_mask)\n\n if gridsize == \"auto\":\n gridsize = int(len(_posterior[0]) ** 0.35)\n\n numvars = len(flat_var_names)\n\n if numvars < 2:\n raise Exception(\"Number of variables to be plotted must be 2 or greater.\")\n\n if numvars == 2:\n (figsize, ax_labelsize, _, xt_labelsize, _, _) = _scale_fig_size(\n figsize, textsize, numvars - 1, numvars - 1\n )\n\n if ax is None:\n fig, ax = plt.subplots(figsize=figsize, constrained_layout=True)\n\n if kind == \"scatter\":\n ax.plot(_posterior[0], _posterior[1], **plot_kwargs)\n elif kind == \"kde\":\n plot_kde(\n _posterior[0],\n _posterior[1],\n contour=contour,\n fill_last=fill_last,\n ax=ax,\n **plot_kwargs\n )\n else:\n hexbin = ax.hexbin(\n _posterior[0], _posterior[1], mincnt=1, gridsize=gridsize, **plot_kwargs\n )\n ax.grid(False)\n\n if kind == \"hexbin\" and colorbar:\n cbar = ax.figure.colorbar(hexbin, ticks=[hexbin.norm.vmin, hexbin.norm.vmax], ax=ax)\n cbar.ax.set_yticklabels([\"low\", \"high\"], fontsize=ax_labelsize)\n\n if divergences:\n ax.plot(\n _posterior[0][diverging_mask], _posterior[1][diverging_mask], 
**divergences_kwargs\n )\n\n ax.set_xlabel(\"{}\".format(flat_var_names[0]), fontsize=ax_labelsize, wrap=True)\n ax.set_ylabel(\"{}\".format(flat_var_names[1]), fontsize=ax_labelsize, wrap=True)\n ax.tick_params(labelsize=xt_labelsize)\n axs = ax\n\n else:\n (figsize, ax_labelsize, _, xt_labelsize, _, _) = _scale_fig_size(\n figsize, textsize, numvars - 2, numvars - 2\n )\n\n if ax is None:\n fig, ax = plt.subplots(\n numvars - 1, numvars - 1, figsize=figsize, constrained_layout=True\n )\n axs = []\n hexbin_values = []\n for i in range(0, numvars - 1):\n var1 = _posterior[i]\n\n for j in range(0, numvars - 1):\n if j < i:\n ax[j, i].axis(\"off\")\n continue\n\n var2 = _posterior[j + 1]\n\n if kind == \"scatter\":\n ax[j, i].plot(var1, var2, **plot_kwargs)\n\n elif kind == \"kde\":\n plot_kde(\n var1, var2, contour=contour, fill_last=fill_last, ax=ax[j, i], **plot_kwargs\n )\n\n else:\n ax[j, i].grid(False)\n hexbin = ax[j, i].hexbin(var1, var2, mincnt=1, gridsize=gridsize, **plot_kwargs)\n if kind == \"hexbin\" and colorbar:\n hexbin_values.append(hexbin.norm.vmin)\n hexbin_values.append(hexbin.norm.vmax)\n if j == i == 0 and colorbar:\n divider = make_axes_locatable(ax[0, 1])\n cax = divider.append_axes(\"left\", size=\"7%\")\n cbar = fig.colorbar(\n hexbin, ticks=[hexbin.norm.vmin, hexbin.norm.vmax], cax=cax\n )\n cbar.ax.set_yticklabels([\"low\", \"high\"], fontsize=ax_labelsize)\n\n if divergences:\n ax[j, i].plot(var1[diverging_mask], var2[diverging_mask], **divergences_kwargs)\n\n if j + 1 != numvars - 1:\n ax[j, i].axes.get_xaxis().set_major_formatter(NullFormatter())\n else:\n ax[j, i].set_xlabel(\n \"{}\".format(flat_var_names[i]), fontsize=ax_labelsize, wrap=True\n )\n if i != 0:\n ax[j, i].axes.get_yaxis().set_major_formatter(NullFormatter())\n else:\n ax[j, i].set_ylabel(\n \"{}\".format(flat_var_names[j + 1]), fontsize=ax_labelsize, wrap=True\n )\n\n ax[j, i].tick_params(labelsize=xt_labelsize)\n axs.append(ax)\n\n return axs\n", "path": "arviz/plots/pairplot.py"}], "after_files": [{"content": "\"\"\"Plot a scatter or hexbin of sampled parameters.\"\"\"\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.ticker import NullFormatter\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\n\nfrom ..data import convert_to_dataset\nfrom .kdeplot import plot_kde\nfrom .plot_utils import _scale_fig_size, xarray_to_ndarray, get_coords\nfrom ..utils import _var_names\n\n\ndef plot_pair(\n data,\n var_names=None,\n coords=None,\n figsize=None,\n textsize=None,\n kind=\"scatter\",\n gridsize=\"auto\",\n contour=True,\n fill_last=True,\n divergences=False,\n colorbar=False,\n ax=None,\n divergences_kwargs=None,\n plot_kwargs=None,\n):\n \"\"\"\n Plot a scatter or hexbin matrix of the sampled parameters.\n\n Parameters\n ----------\n data : obj\n Any object that can be converted to an az.InferenceData object\n Refer to documentation of az.convert_to_dataset for details\n var_names : list of variable names\n Variables to be plotted, if None all variable are plotted\n coords : mapping, optional\n Coordinates of var_names to be plotted. Passed to `Dataset.sel`\n figsize : figure size tuple\n If None, size is (8 + numvars, 8 + numvars)\n textsize: int\n Text size for labels. If None it will be autoscaled based on figsize.\n kind : str\n Type of plot to display (kde or hexbin)\n gridsize : int or (int, int), optional\n Only works for kind=hexbin.\n The number of hexagons in the x-direction. 
The corresponding number of hexagons in the\n y-direction is chosen such that the hexagons are approximately regular.\n Alternatively, gridsize can be a tuple with two elements specifying the number of hexagons\n in the x-direction and the y-direction.\n contour : bool\n If True plot the 2D KDE using contours, otherwise plot a smooth 2D KDE. Defaults to True.\n fill_last : bool\n If True fill the last contour of the 2D KDE plot. Defaults to True.\n divergences : Boolean\n If True divergences will be plotted in a different color\n colorbar : bool\n If True a colorbar will be included as part of the plot (Defaults to False).\n Only works when kind=hexbin\n ax: axes\n Matplotlib axes\n divergences_kwargs : dicts, optional\n Additional keywords passed to ax.scatter for divergences\n plot_kwargs : dicts, optional\n Additional keywords passed to ax.plot, az.plot_kde or ax.hexbin\n Returns\n -------\n ax : matplotlib axes\n\n Examples\n --------\n KDE Pair Plot\n\n .. plot::\n :context: close-figs\n\n >>> import arviz as az\n >>> centered = az.load_arviz_data('centered_eight')\n >>> coords = {'school': ['Choate', 'Deerfield']}\n >>> az.plot_pair(centered,\n >>> var_names=['theta', 'mu', 'tau'],\n >>> kind='kde',\n >>> coords=coords,\n >>> divergences=True,\n >>> textsize=18)\n\n Hexbin pair plot\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_pair(centered,\n >>> var_names=['theta', 'mu'],\n >>> coords=coords,\n >>> textsize=18,\n >>> kind='hexbin')\n\n Pair plot showing divergences\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_pair(centered,\n ... var_names=['theta', 'mu', 'tau'],\n ... coords=coords,\n ... divergences=True,\n ... textsize=18)\n \"\"\"\n valid_kinds = [\"scatter\", \"kde\", \"hexbin\"]\n if kind not in valid_kinds:\n raise ValueError(\n (\"Plot type {} not recognized.\" \"Plot type must be in {}\").format(kind, valid_kinds)\n )\n\n if coords is None:\n coords = {}\n\n if plot_kwargs is None:\n plot_kwargs = {}\n\n if kind == \"scatter\":\n plot_kwargs.setdefault(\"marker\", \".\")\n plot_kwargs.setdefault(\"lw\", 0)\n\n if divergences_kwargs is None:\n divergences_kwargs = {}\n\n divergences_kwargs.setdefault(\"marker\", \"o\")\n divergences_kwargs.setdefault(\"markeredgecolor\", \"k\")\n divergences_kwargs.setdefault(\"color\", \"C1\")\n divergences_kwargs.setdefault(\"lw\", 0)\n\n # Get posterior draws and combine chains\n posterior_data = convert_to_dataset(data, group=\"posterior\")\n var_names = _var_names(var_names, posterior_data)\n flat_var_names, _posterior = xarray_to_ndarray(\n get_coords(posterior_data, coords), var_names=var_names, combined=True\n )\n\n # Get diverging draws and combine chains\n if divergences:\n divergent_data = convert_to_dataset(data, group=\"sample_stats\")\n _, diverging_mask = xarray_to_ndarray(\n divergent_data, var_names=(\"diverging\",), combined=True\n )\n diverging_mask = np.squeeze(diverging_mask)\n\n if gridsize == \"auto\":\n gridsize = int(len(_posterior[0]) ** 0.35)\n\n numvars = len(flat_var_names)\n\n if numvars < 2:\n raise Exception(\"Number of variables to be plotted must be 2 or greater.\")\n\n if numvars == 2:\n (figsize, ax_labelsize, _, xt_labelsize, _, _) = _scale_fig_size(\n figsize, textsize, numvars - 1, numvars - 1\n )\n\n if ax is None:\n fig, ax = plt.subplots(figsize=figsize, constrained_layout=True)\n\n if kind == \"scatter\":\n ax.plot(_posterior[0], _posterior[1], **plot_kwargs)\n elif kind == \"kde\":\n plot_kde(\n _posterior[0],\n _posterior[1],\n contour=contour,\n fill_last=fill_last,\n ax=ax,\n 
**plot_kwargs\n )\n else:\n hexbin = ax.hexbin(\n _posterior[0], _posterior[1], mincnt=1, gridsize=gridsize, **plot_kwargs\n )\n ax.grid(False)\n\n if kind == \"hexbin\" and colorbar:\n cbar = ax.figure.colorbar(hexbin, ticks=[hexbin.norm.vmin, hexbin.norm.vmax], ax=ax)\n cbar.ax.set_yticklabels([\"low\", \"high\"], fontsize=ax_labelsize)\n\n if divergences:\n ax.plot(\n _posterior[0][diverging_mask], _posterior[1][diverging_mask], **divergences_kwargs\n )\n\n ax.set_xlabel(\"{}\".format(flat_var_names[0]), fontsize=ax_labelsize, wrap=True)\n ax.set_ylabel(\"{}\".format(flat_var_names[1]), fontsize=ax_labelsize, wrap=True)\n ax.tick_params(labelsize=xt_labelsize)\n\n else:\n (figsize, ax_labelsize, _, xt_labelsize, _, _) = _scale_fig_size(\n figsize, textsize, numvars - 2, numvars - 2\n )\n\n if ax is None:\n fig, ax = plt.subplots(\n numvars - 1, numvars - 1, figsize=figsize, constrained_layout=True\n )\n hexbin_values = []\n for i in range(0, numvars - 1):\n var1 = _posterior[i]\n\n for j in range(0, numvars - 1):\n if j < i:\n ax[j, i].axis(\"off\")\n continue\n\n var2 = _posterior[j + 1]\n\n if kind == \"scatter\":\n ax[j, i].plot(var1, var2, **plot_kwargs)\n\n elif kind == \"kde\":\n plot_kde(\n var1, var2, contour=contour, fill_last=fill_last, ax=ax[j, i], **plot_kwargs\n )\n\n else:\n ax[j, i].grid(False)\n hexbin = ax[j, i].hexbin(var1, var2, mincnt=1, gridsize=gridsize, **plot_kwargs)\n if kind == \"hexbin\" and colorbar:\n hexbin_values.append(hexbin.norm.vmin)\n hexbin_values.append(hexbin.norm.vmax)\n if j == i == 0 and colorbar:\n divider = make_axes_locatable(ax[0, 1])\n cax = divider.append_axes(\"left\", size=\"7%\")\n cbar = fig.colorbar(\n hexbin, ticks=[hexbin.norm.vmin, hexbin.norm.vmax], cax=cax\n )\n cbar.ax.set_yticklabels([\"low\", \"high\"], fontsize=ax_labelsize)\n\n if divergences:\n ax[j, i].plot(var1[diverging_mask], var2[diverging_mask], **divergences_kwargs)\n\n if j + 1 != numvars - 1:\n ax[j, i].axes.get_xaxis().set_major_formatter(NullFormatter())\n else:\n ax[j, i].set_xlabel(\n \"{}\".format(flat_var_names[i]), fontsize=ax_labelsize, wrap=True\n )\n if i != 0:\n ax[j, i].axes.get_yaxis().set_major_formatter(NullFormatter())\n else:\n ax[j, i].set_ylabel(\n \"{}\".format(flat_var_names[j + 1]), fontsize=ax_labelsize, wrap=True\n )\n\n ax[j, i].tick_params(labelsize=xt_labelsize)\n\n return ax\n", "path": "arviz/plots/pairplot.py"}]} | 3,285 | 273 |
gh_patches_debug_44407 | rasdani/github-patches | git_diff | google__mobly-227 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`snippet_shell` is broken for python3
```
$ snippet_shell.py --mbs
s.logINFO:root:[AndroidDevice|HT67K0300103] Launching snippet apk com.google.android.mobly.snippet.bundled with protocol v1
VTraceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/bin/snippet_shell.py", line 6, in <module>
exec(compile(open(__file__).read(), __file__, 'exec'))
File "/Users/angli/Developer/mobly/tools/snippet_shell.py", line 84, in <module>
SnippetShell(package).main(args.serial)
File "/Users/angli/Developer/mobly/mobly/controllers/android_device_lib/jsonrpc_shell_base.py", line 88, in main
self.start_console()
File "/Users/angli/Developer/mobly/mobly/controllers/android_device_lib/jsonrpc_shell_base.py", line 73, in start_console
self._start_services(console_env)
File "/Users/angli/Developer/mobly/tools/snippet_shell.py", line 43, in _start_services
self._ad.load_snippet(name='snippet', package=self._package)
File "/Users/angli/Developer/mobly/mobly/controllers/android_device.py", line 646, in load_snippet
client.start_app_and_connect()
File "/Users/angli/Developer/mobly/mobly/controllers/android_device_lib/snippet_client.py", line 112, in start_app_and_connect
line)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/re.py", line 172, in match
return _compile(pattern, flags).match(string)
TypeError: cannot use a string pattern on a bytes-like object
```
--- END ISSUE ---
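
Editor's note (not part of the quoted issue): the traceback comes down to Python 3 distinguishing `bytes` from `str`: `re.match` receives a `str` pattern together with a `bytes` line read from the subprocess pipe. Below is a minimal sketch of the mismatch and the usual remedy; the sample line is a hypothetical stand-in for the instrumentation output, not taken from the report.

```python
import re

raw = b"SNIPPET START, PROTOCOL 1 0"  # subprocess pipes yield bytes on Python 3

# Mirrors the failing call: a str pattern against a bytes subject raises
# "TypeError: cannot use a string pattern on a bytes-like object".
# re.match('^SNIPPET START, PROTOCOL ([0-9]+) ([0-9]+)$', raw)

# Decoding the line first restores the Python 2 behaviour.
line = raw.decode('utf-8').strip()
match = re.match(r'^SNIPPET START, PROTOCOL ([0-9]+) ([0-9]+)$', line)
print(match.groups() if match else None)  # -> ('1', '0')
```
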
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mobly/controllers/android_device_lib/snippet_client.py`
Content:
```
1 # Copyright 2016 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """JSON RPC interface to Mobly Snippet Lib."""
15 import logging
16 import re
17 import time
18
19 from mobly import utils
20 from mobly.controllers.android_device_lib import adb
21 from mobly.controllers.android_device_lib import jsonrpc_client_base
22
23 _INSTRUMENTATION_RUNNER_PACKAGE = (
24 'com.google.android.mobly.snippet.SnippetRunner')
25
26 # TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is removed.
27 _LAUNCH_CMD_V0 = ('am instrument -w -e action start -e port %s %s/' +
28 _INSTRUMENTATION_RUNNER_PACKAGE)
29
30 _LAUNCH_CMD_V1 = (
31 'am instrument -w -e action start %s/' + _INSTRUMENTATION_RUNNER_PACKAGE)
32
33 _STOP_CMD = (
34 'am instrument -w -e action stop %s/' + _INSTRUMENTATION_RUNNER_PACKAGE)
35
36 # Maximum time to wait for a v0 snippet to start on the device (10 minutes).
37 # TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is removed.
38 _APP_START_WAIT_TIME_V0 = 10 * 60
39
40
41 class Error(Exception):
42 pass
43
44
45 class ProtocolVersionError(Error):
46 """Raised when the protocol reported by the snippet is unknown."""
47
48
49 class SnippetClient(jsonrpc_client_base.JsonRpcClientBase):
50 """A client for interacting with snippet APKs using Mobly Snippet Lib.
51
52 See superclass documentation for a list of public attributes.
53
54 It currently supports both v0 and v1 snippet launch protocols, although
55 support for v0 will be removed in a future version.
56
57 For a description of the launch protocols, see the documentation in
58 mobly-snippet-lib, SnippetRunner.java.
59 """
60
61 def __init__(self, package, adb_proxy, log=logging.getLogger()):
62 """Initializes a SnippetClient.
63
64 Args:
65 package: (str) The package name of the apk where the snippets are
66 defined.
67 adb_proxy: (adb.AdbProxy) Adb proxy for running adb commands.
68 log: (logging.Logger) logger to which to send log messages.
69 """
70 super(SnippetClient, self).__init__(app_name=package, log=log)
71 self.package = package
72 self._adb = adb_proxy
73 self._proc = None
74
75 def start_app_and_connect(self):
76 """Overrides superclass. Launches a snippet app and connects to it."""
77 self._check_app_installed()
78
79 # Try launching the app with the v1 protocol. If that fails, fall back
80 # to v0 for compatibility. Use info here so people know exactly what's
81 # happening here, which is helpful since they need to create their own
82 # instrumentations and manifest.
83 self.log.info('Launching snippet apk %s with protocol v1',
84 self.package)
85 cmd = _LAUNCH_CMD_V1 % self.package
86 start_time = time.time()
87 self._proc = self._do_start_app(cmd)
88
89 # "Instrumentation crashed" could be due to several reasons, eg
90 # exception thrown during startup or just a launch protocol 0 snippet
91 # dying because it needs the port flag. Sadly we have no way to tell so
92 # just warn and retry as v0.
93 # TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is
94 # removed.
95 line = self._read_line()
96 if line in ('INSTRUMENTATION_RESULT: shortMsg=Process crashed.',
97 'INSTRUMENTATION_RESULT: shortMsg='
98 'java.lang.IllegalArgumentException'):
99 self.log.warning('Snippet %s crashed on startup. This might be an '
100 'actual error or a snippet using deprecated v0 '
101 'start protocol. Retrying as a v0 snippet.',
102 self.package)
103 self.host_port = utils.get_available_host_port()
104 # Reuse the host port as the device port in v0 snippet. This isn't
105 # safe in general, but the protocol is deprecated.
106 cmd = _LAUNCH_CMD_V0 % (self.host_port, self.package)
107 self._proc = self._do_start_app(cmd)
108 self._connect_to_v0()
109 else:
110 # Check protocol version and get the device port
111 match = re.match('^SNIPPET START, PROTOCOL ([0-9]+) ([0-9]+)$',
112 line)
113 if not match or match.group(1) != '1':
114 raise ProtocolVersionError(line)
115 self._connect_to_v1()
116 self.log.debug('Snippet %s started after %.1fs on host port %s',
117 self.package, time.time() - start_time, self.host_port)
118
119 def stop_app(self):
120 # Kill the pending 'adb shell am instrument -w' process if there is one.
121 # Although killing the snippet apk would abort this process anyway, we
122 # want to call stop_standing_subprocess() to perform a health check,
123 # print the failure stack trace if there was any, and reap it from the
124 # process table.
125 self.log.debug('Stopping snippet apk %s', self.package)
126 try:
127 # Close the socket connection.
128 self.disconnect()
129 if self._proc:
130 utils.stop_standing_subprocess(self._proc)
131 out = self._adb.shell(_STOP_CMD % self.package).decode('utf-8')
132 if 'OK (0 tests)' not in out:
133 raise Error('Failed to stop existing apk. Unexpected '
134 'output: %s' % out)
135 finally:
136 # Always clean up the adb port
137 if self.host_port:
138 self._adb.forward(['--remove', 'tcp:%d' % self.host_port])
139
140 def _start_event_client(self):
141 """Overrides superclass."""
142 event_client = SnippetClient(
143 package=self.package, adb_proxy=self._adb, log=self.log)
144 event_client.host_port = self.host_port
145 event_client.connect(self.uid,
146 jsonrpc_client_base.JsonRpcCommand.CONTINUE)
147 return event_client
148
149 def _check_app_installed(self):
150 # Check that the Mobly Snippet app is installed.
151 out = self._adb.shell('pm list package')
152 if not utils.grep('^package:%s$' % self.package, out):
153 raise jsonrpc_client_base.AppStartError(
154 '%s is not installed on %s' % (self.package, self._adb.serial))
155 # Check that the app is instrumented.
156 out = self._adb.shell('pm list instrumentation')
157 matched_out = utils.grep('^instrumentation:%s/%s' %
158 (self.package,
159 _INSTRUMENTATION_RUNNER_PACKAGE), out)
160 if not matched_out:
161 raise jsonrpc_client_base.AppStartError(
162 '%s is installed on %s, but it is not instrumented.' %
163 (self.package, self._adb.serial))
164 match = re.search('^instrumentation:(.*)\/(.*) \(target=(.*)\)$',
165 matched_out[0])
166 target_name = match.group(3)
167 # Check that the instrumentation target is installed if it's not the
168 # same as the snippet package.
169 if target_name != self.package:
170 out = self._adb.shell('pm list package')
171 if not utils.grep('^package:%s$' % target_name, out):
172 raise jsonrpc_client_base.AppStartError(
173 'Instrumentation target %s is not installed on %s' %
174 (target_name, self._adb.serial))
175
176 def _do_start_app(self, launch_cmd):
177 adb_cmd = [adb.ADB]
178 if self._adb.serial:
179 adb_cmd += ['-s', self._adb.serial]
180 adb_cmd += ['shell', launch_cmd]
181 return utils.start_standing_subprocess(adb_cmd, shell=False)
182
183 # TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is
184 # removed.
185 def _connect_to_v0(self):
186 self.device_port = self.host_port
187 self._adb.forward(
188 ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port])
189 start_time = time.time()
190 expiration_time = start_time + _APP_START_WAIT_TIME_V0
191 while time.time() < expiration_time:
192 self.log.debug('Attempting to start %s.', self.package)
193 try:
194 self.connect()
195 return
196 except:
197 self.log.debug(
198 'v0 snippet %s is not yet running, retrying',
199 self.package,
200 exc_info=True)
201 time.sleep(1)
202 raise jsonrpc_client_base.AppStartError(
203 '%s failed to start on %s.' % (self.package, self._adb.serial))
204
205 def _connect_to_v1(self):
206 line = self._read_line()
207 match = re.match('^SNIPPET SERVING, PORT ([0-9]+)$', line)
208 if not match:
209 raise ProtocolVersionError(line)
210 self.device_port = int(match.group(1))
211
212 # Forward the device port to a new host port, and connect to that port
213 self.host_port = utils.get_available_host_port()
214 self._adb.forward(
215 ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port])
216 self.connect()
217
218 def _read_line(self):
219 line = self._proc.stdout.readline().rstrip()
220 self.log.debug('Read line from instrumentation output: "%s"', line)
221 return line
222
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mobly/controllers/android_device_lib/snippet_client.py b/mobly/controllers/android_device_lib/snippet_client.py
--- a/mobly/controllers/android_device_lib/snippet_client.py
+++ b/mobly/controllers/android_device_lib/snippet_client.py
@@ -1,11 +1,11 @@
# Copyright 2016 Google Inc.
-#
+#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
-#
+#
# http://www.apache.org/licenses/LICENSE-2.0
-#
+#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@@ -42,7 +42,7 @@
pass
-class ProtocolVersionError(Error):
+class ProtocolVersionError(jsonrpc_client_base.AppStartError):
"""Raised when the protocol reported by the snippet is unknown."""
@@ -92,7 +92,7 @@
# just warn and retry as v0.
# TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is
# removed.
- line = self._read_line()
+ line = self._read_protocol_line()
if line in ('INSTRUMENTATION_RESULT: shortMsg=Process crashed.',
'INSTRUMENTATION_RESULT: shortMsg='
'java.lang.IllegalArgumentException'):
@@ -185,7 +185,8 @@
def _connect_to_v0(self):
self.device_port = self.host_port
self._adb.forward(
- ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port])
+ ['tcp:%d' % self.host_port,
+ 'tcp:%d' % self.device_port])
start_time = time.time()
expiration_time = start_time + _APP_START_WAIT_TIME_V0
while time.time() < expiration_time:
@@ -203,19 +204,46 @@
'%s failed to start on %s.' % (self.package, self._adb.serial))
def _connect_to_v1(self):
- line = self._read_line()
+ line = self._read_protocol_line()
match = re.match('^SNIPPET SERVING, PORT ([0-9]+)$', line)
if not match:
- raise ProtocolVersionError(line)
+ raise jsonrpc_client_base.AppStartError(line)
self.device_port = int(match.group(1))
# Forward the device port to a new host port, and connect to that port
self.host_port = utils.get_available_host_port()
self._adb.forward(
- ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port])
+ ['tcp:%d' % self.host_port,
+ 'tcp:%d' % self.device_port])
self.connect()
- def _read_line(self):
- line = self._proc.stdout.readline().rstrip()
- self.log.debug('Read line from instrumentation output: "%s"', line)
- return line
+ def _read_protocol_line(self):
+ """Reads the next line of instrumentation output relevant to snippets.
+
+ This method will skip over lines that don't start with 'SNIPPET' or
+ 'INSTRUMENTATION_RESULT'.
+
+ Returns:
+ (str) Next line of snippet-related instrumentation output, stripped.
+
+ Raises:
+ jsonrpc_client_base.AppStartError: If EOF is reached without any
+ protocol lines being read.
+ """
+ while True:
+ line = self._proc.stdout.readline().decode('utf-8')
+ if not line:
+ raise jsonrpc_client_base.AppStartError(
+ 'Unexpected EOF waiting for app to start')
+ # readline() uses an empty string to mark EOF, and a single newline
+ # to mark regular empty lines in the output. Don't move the strip()
+ # call above the truthiness check, or this method will start
+ # considering any blank output line to be EOF.
+ line = line.strip()
+ if (line.startswith('INSTRUMENTATION_RESULT:') or
+ line.startswith('SNIPPET ')):
+ self.log.debug(
+ 'Accepted line from instrumentation output: "%s"', line)
+ return line
+ self.log.debug('Discarded line from instrumentation output: "%s"',
+ line)
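
Editor's note on the patch above: `_read_protocol_line` fixes the bytes/str mismatch by decoding the subprocess output and, at the same time, skips instrumentation lines that are irrelevant to the snippet protocol. The following is a standalone sketch of that filtering loop with the process object mocked out; the names here are illustrative and not part of the patch.

```python
import io

def read_protocol_line(stdout):
    # Keep reading until a protocol-relevant line appears; an empty string means EOF.
    while True:
        line = stdout.readline()
        if not line:
            raise RuntimeError('Unexpected EOF waiting for app to start')
        line = line.strip()
        if line.startswith(('INSTRUMENTATION_RESULT:', 'SNIPPET ')):
            return line

fake_output = io.StringIO('INSTRUMENTATION_STATUS: ignored\n'
                          'SNIPPET START, PROTOCOL 1 0\n')
print(read_protocol_line(fake_output))  # -> SNIPPET START, PROTOCOL 1 0
```
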
| {"golden_diff": "diff --git a/mobly/controllers/android_device_lib/snippet_client.py b/mobly/controllers/android_device_lib/snippet_client.py\n--- a/mobly/controllers/android_device_lib/snippet_client.py\n+++ b/mobly/controllers/android_device_lib/snippet_client.py\n@@ -1,11 +1,11 @@\n # Copyright 2016 Google Inc.\n-# \n+#\n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\n # You may obtain a copy of the License at\n-# \n+#\n # http://www.apache.org/licenses/LICENSE-2.0\n-# \n+#\n # Unless required by applicable law or agreed to in writing, software\n # distributed under the License is distributed on an \"AS IS\" BASIS,\n # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n@@ -42,7 +42,7 @@\n pass\n \n \n-class ProtocolVersionError(Error):\n+class ProtocolVersionError(jsonrpc_client_base.AppStartError):\n \"\"\"Raised when the protocol reported by the snippet is unknown.\"\"\"\n \n \n@@ -92,7 +92,7 @@\n # just warn and retry as v0.\n # TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is\n # removed.\n- line = self._read_line()\n+ line = self._read_protocol_line()\n if line in ('INSTRUMENTATION_RESULT: shortMsg=Process crashed.',\n 'INSTRUMENTATION_RESULT: shortMsg='\n 'java.lang.IllegalArgumentException'):\n@@ -185,7 +185,8 @@\n def _connect_to_v0(self):\n self.device_port = self.host_port\n self._adb.forward(\n- ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port])\n+ ['tcp:%d' % self.host_port,\n+ 'tcp:%d' % self.device_port])\n start_time = time.time()\n expiration_time = start_time + _APP_START_WAIT_TIME_V0\n while time.time() < expiration_time:\n@@ -203,19 +204,46 @@\n '%s failed to start on %s.' % (self.package, self._adb.serial))\n \n def _connect_to_v1(self):\n- line = self._read_line()\n+ line = self._read_protocol_line()\n match = re.match('^SNIPPET SERVING, PORT ([0-9]+)$', line)\n if not match:\n- raise ProtocolVersionError(line)\n+ raise jsonrpc_client_base.AppStartError(line)\n self.device_port = int(match.group(1))\n \n # Forward the device port to a new host port, and connect to that port\n self.host_port = utils.get_available_host_port()\n self._adb.forward(\n- ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port])\n+ ['tcp:%d' % self.host_port,\n+ 'tcp:%d' % self.device_port])\n self.connect()\n \n- def _read_line(self):\n- line = self._proc.stdout.readline().rstrip()\n- self.log.debug('Read line from instrumentation output: \"%s\"', line)\n- return line\n+ def _read_protocol_line(self):\n+ \"\"\"Reads the next line of instrumentation output relevant to snippets.\n+\n+ This method will skip over lines that don't start with 'SNIPPET' or\n+ 'INSTRUMENTATION_RESULT'.\n+\n+ Returns:\n+ (str) Next line of snippet-related instrumentation output, stripped.\n+\n+ Raises:\n+ jsonrpc_client_base.AppStartError: If EOF is reached without any\n+ protocol lines being read.\n+ \"\"\"\n+ while True:\n+ line = self._proc.stdout.readline().decode('utf-8')\n+ if not line:\n+ raise jsonrpc_client_base.AppStartError(\n+ 'Unexpected EOF waiting for app to start')\n+ # readline() uses an empty string to mark EOF, and a single newline\n+ # to mark regular empty lines in the output. 
Don't move the strip()\n+ # call above the truthiness check, or this method will start\n+ # considering any blank output line to be EOF.\n+ line = line.strip()\n+ if (line.startswith('INSTRUMENTATION_RESULT:') or\n+ line.startswith('SNIPPET ')):\n+ self.log.debug(\n+ 'Accepted line from instrumentation output: \"%s\"', line)\n+ return line\n+ self.log.debug('Discarded line from instrumentation output: \"%s\"',\n+ line)\n", "issue": "`snippet_shell` is broken for python3\n```\r\n$ snippet_shell.py --mbs\r\ns.logINFO:root:[AndroidDevice|HT67K0300103] Launching snippet apk com.google.android.mobly.snippet.bundled with protocol v1\r\nVTraceback (most recent call last):\r\n File \"/Library/Frameworks/Python.framework/Versions/3.6/bin/snippet_shell.py\", line 6, in <module>\r\n exec(compile(open(__file__).read(), __file__, 'exec'))\r\n File \"/Users/angli/Developer/mobly/tools/snippet_shell.py\", line 84, in <module>\r\n SnippetShell(package).main(args.serial)\r\n File \"/Users/angli/Developer/mobly/mobly/controllers/android_device_lib/jsonrpc_shell_base.py\", line 88, in main\r\n self.start_console()\r\n File \"/Users/angli/Developer/mobly/mobly/controllers/android_device_lib/jsonrpc_shell_base.py\", line 73, in start_console\r\n self._start_services(console_env)\r\n File \"/Users/angli/Developer/mobly/tools/snippet_shell.py\", line 43, in _start_services\r\n self._ad.load_snippet(name='snippet', package=self._package)\r\n File \"/Users/angli/Developer/mobly/mobly/controllers/android_device.py\", line 646, in load_snippet\r\n client.start_app_and_connect()\r\n File \"/Users/angli/Developer/mobly/mobly/controllers/android_device_lib/snippet_client.py\", line 112, in start_app_and_connect\r\n line)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/re.py\", line 172, in match\r\n return _compile(pattern, flags).match(string)\r\nTypeError: cannot use a string pattern on a bytes-like object\r\n```\n", "before_files": [{"content": "# Copyright 2016 Google Inc.\n# \n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# \n# http://www.apache.org/licenses/LICENSE-2.0\n# \n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"JSON RPC interface to Mobly Snippet Lib.\"\"\"\nimport logging\nimport re\nimport time\n\nfrom mobly import utils\nfrom mobly.controllers.android_device_lib import adb\nfrom mobly.controllers.android_device_lib import jsonrpc_client_base\n\n_INSTRUMENTATION_RUNNER_PACKAGE = (\n 'com.google.android.mobly.snippet.SnippetRunner')\n\n# TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is removed.\n_LAUNCH_CMD_V0 = ('am instrument -w -e action start -e port %s %s/' +\n _INSTRUMENTATION_RUNNER_PACKAGE)\n\n_LAUNCH_CMD_V1 = (\n 'am instrument -w -e action start %s/' + _INSTRUMENTATION_RUNNER_PACKAGE)\n\n_STOP_CMD = (\n 'am instrument -w -e action stop %s/' + _INSTRUMENTATION_RUNNER_PACKAGE)\n\n# Maximum time to wait for a v0 snippet to start on the device (10 minutes).\n# TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is removed.\n_APP_START_WAIT_TIME_V0 = 10 * 60\n\n\nclass Error(Exception):\n pass\n\n\nclass 
ProtocolVersionError(Error):\n \"\"\"Raised when the protocol reported by the snippet is unknown.\"\"\"\n\n\nclass SnippetClient(jsonrpc_client_base.JsonRpcClientBase):\n \"\"\"A client for interacting with snippet APKs using Mobly Snippet Lib.\n\n See superclass documentation for a list of public attributes.\n\n It currently supports both v0 and v1 snippet launch protocols, although\n support for v0 will be removed in a future version.\n\n For a description of the launch protocols, see the documentation in\n mobly-snippet-lib, SnippetRunner.java.\n \"\"\"\n\n def __init__(self, package, adb_proxy, log=logging.getLogger()):\n \"\"\"Initializes a SnippetClient.\n \n Args:\n package: (str) The package name of the apk where the snippets are\n defined.\n adb_proxy: (adb.AdbProxy) Adb proxy for running adb commands.\n log: (logging.Logger) logger to which to send log messages.\n \"\"\"\n super(SnippetClient, self).__init__(app_name=package, log=log)\n self.package = package\n self._adb = adb_proxy\n self._proc = None\n\n def start_app_and_connect(self):\n \"\"\"Overrides superclass. Launches a snippet app and connects to it.\"\"\"\n self._check_app_installed()\n\n # Try launching the app with the v1 protocol. If that fails, fall back\n # to v0 for compatibility. Use info here so people know exactly what's\n # happening here, which is helpful since they need to create their own\n # instrumentations and manifest.\n self.log.info('Launching snippet apk %s with protocol v1',\n self.package)\n cmd = _LAUNCH_CMD_V1 % self.package\n start_time = time.time()\n self._proc = self._do_start_app(cmd)\n\n # \"Instrumentation crashed\" could be due to several reasons, eg\n # exception thrown during startup or just a launch protocol 0 snippet\n # dying because it needs the port flag. Sadly we have no way to tell so\n # just warn and retry as v0.\n # TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is\n # removed.\n line = self._read_line()\n if line in ('INSTRUMENTATION_RESULT: shortMsg=Process crashed.',\n 'INSTRUMENTATION_RESULT: shortMsg='\n 'java.lang.IllegalArgumentException'):\n self.log.warning('Snippet %s crashed on startup. This might be an '\n 'actual error or a snippet using deprecated v0 '\n 'start protocol. Retrying as a v0 snippet.',\n self.package)\n self.host_port = utils.get_available_host_port()\n # Reuse the host port as the device port in v0 snippet. 
This isn't\n # safe in general, but the protocol is deprecated.\n cmd = _LAUNCH_CMD_V0 % (self.host_port, self.package)\n self._proc = self._do_start_app(cmd)\n self._connect_to_v0()\n else:\n # Check protocol version and get the device port\n match = re.match('^SNIPPET START, PROTOCOL ([0-9]+) ([0-9]+)$',\n line)\n if not match or match.group(1) != '1':\n raise ProtocolVersionError(line)\n self._connect_to_v1()\n self.log.debug('Snippet %s started after %.1fs on host port %s',\n self.package, time.time() - start_time, self.host_port)\n\n def stop_app(self):\n # Kill the pending 'adb shell am instrument -w' process if there is one.\n # Although killing the snippet apk would abort this process anyway, we\n # want to call stop_standing_subprocess() to perform a health check,\n # print the failure stack trace if there was any, and reap it from the\n # process table.\n self.log.debug('Stopping snippet apk %s', self.package)\n try:\n # Close the socket connection.\n self.disconnect()\n if self._proc:\n utils.stop_standing_subprocess(self._proc)\n out = self._adb.shell(_STOP_CMD % self.package).decode('utf-8')\n if 'OK (0 tests)' not in out:\n raise Error('Failed to stop existing apk. Unexpected '\n 'output: %s' % out)\n finally:\n # Always clean up the adb port\n if self.host_port:\n self._adb.forward(['--remove', 'tcp:%d' % self.host_port])\n\n def _start_event_client(self):\n \"\"\"Overrides superclass.\"\"\"\n event_client = SnippetClient(\n package=self.package, adb_proxy=self._adb, log=self.log)\n event_client.host_port = self.host_port\n event_client.connect(self.uid,\n jsonrpc_client_base.JsonRpcCommand.CONTINUE)\n return event_client\n\n def _check_app_installed(self):\n # Check that the Mobly Snippet app is installed.\n out = self._adb.shell('pm list package')\n if not utils.grep('^package:%s$' % self.package, out):\n raise jsonrpc_client_base.AppStartError(\n '%s is not installed on %s' % (self.package, self._adb.serial))\n # Check that the app is instrumented.\n out = self._adb.shell('pm list instrumentation')\n matched_out = utils.grep('^instrumentation:%s/%s' %\n (self.package,\n _INSTRUMENTATION_RUNNER_PACKAGE), out)\n if not matched_out:\n raise jsonrpc_client_base.AppStartError(\n '%s is installed on %s, but it is not instrumented.' 
%\n (self.package, self._adb.serial))\n match = re.search('^instrumentation:(.*)\\/(.*) \\(target=(.*)\\)$',\n matched_out[0])\n target_name = match.group(3)\n # Check that the instrumentation target is installed if it's not the\n # same as the snippet package.\n if target_name != self.package:\n out = self._adb.shell('pm list package')\n if not utils.grep('^package:%s$' % target_name, out):\n raise jsonrpc_client_base.AppStartError(\n 'Instrumentation target %s is not installed on %s' %\n (target_name, self._adb.serial))\n\n def _do_start_app(self, launch_cmd):\n adb_cmd = [adb.ADB]\n if self._adb.serial:\n adb_cmd += ['-s', self._adb.serial]\n adb_cmd += ['shell', launch_cmd]\n return utils.start_standing_subprocess(adb_cmd, shell=False)\n\n # TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is\n # removed.\n def _connect_to_v0(self):\n self.device_port = self.host_port\n self._adb.forward(\n ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port])\n start_time = time.time()\n expiration_time = start_time + _APP_START_WAIT_TIME_V0\n while time.time() < expiration_time:\n self.log.debug('Attempting to start %s.', self.package)\n try:\n self.connect()\n return\n except:\n self.log.debug(\n 'v0 snippet %s is not yet running, retrying',\n self.package,\n exc_info=True)\n time.sleep(1)\n raise jsonrpc_client_base.AppStartError(\n '%s failed to start on %s.' % (self.package, self._adb.serial))\n\n def _connect_to_v1(self):\n line = self._read_line()\n match = re.match('^SNIPPET SERVING, PORT ([0-9]+)$', line)\n if not match:\n raise ProtocolVersionError(line)\n self.device_port = int(match.group(1))\n\n # Forward the device port to a new host port, and connect to that port\n self.host_port = utils.get_available_host_port()\n self._adb.forward(\n ['tcp:%d' % self.host_port, 'tcp:%d' % self.device_port])\n self.connect()\n\n def _read_line(self):\n line = self._proc.stdout.readline().rstrip()\n self.log.debug('Read line from instrumentation output: \"%s\"', line)\n return line\n", "path": "mobly/controllers/android_device_lib/snippet_client.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"JSON RPC interface to Mobly Snippet Lib.\"\"\"\nimport logging\nimport re\nimport time\n\nfrom mobly import utils\nfrom mobly.controllers.android_device_lib import adb\nfrom mobly.controllers.android_device_lib import jsonrpc_client_base\n\n_INSTRUMENTATION_RUNNER_PACKAGE = (\n 'com.google.android.mobly.snippet.SnippetRunner')\n\n# TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is removed.\n_LAUNCH_CMD_V0 = ('am instrument -w -e action start -e port %s %s/' +\n _INSTRUMENTATION_RUNNER_PACKAGE)\n\n_LAUNCH_CMD_V1 = (\n 'am instrument -w -e action start %s/' + _INSTRUMENTATION_RUNNER_PACKAGE)\n\n_STOP_CMD = (\n 'am instrument -w -e action stop %s/' + _INSTRUMENTATION_RUNNER_PACKAGE)\n\n# Maximum time to wait for a v0 snippet to start on the device (10 minutes).\n# TODO(adorokhine): delete this in Mobly 1.6 
when snippet v0 support is removed.\n_APP_START_WAIT_TIME_V0 = 10 * 60\n\n\nclass Error(Exception):\n pass\n\n\nclass ProtocolVersionError(jsonrpc_client_base.AppStartError):\n \"\"\"Raised when the protocol reported by the snippet is unknown.\"\"\"\n\n\nclass SnippetClient(jsonrpc_client_base.JsonRpcClientBase):\n \"\"\"A client for interacting with snippet APKs using Mobly Snippet Lib.\n\n See superclass documentation for a list of public attributes.\n\n It currently supports both v0 and v1 snippet launch protocols, although\n support for v0 will be removed in a future version.\n\n For a description of the launch protocols, see the documentation in\n mobly-snippet-lib, SnippetRunner.java.\n \"\"\"\n\n def __init__(self, package, adb_proxy, log=logging.getLogger()):\n \"\"\"Initializes a SnippetClient.\n \n Args:\n package: (str) The package name of the apk where the snippets are\n defined.\n adb_proxy: (adb.AdbProxy) Adb proxy for running adb commands.\n log: (logging.Logger) logger to which to send log messages.\n \"\"\"\n super(SnippetClient, self).__init__(app_name=package, log=log)\n self.package = package\n self._adb = adb_proxy\n self._proc = None\n\n def start_app_and_connect(self):\n \"\"\"Overrides superclass. Launches a snippet app and connects to it.\"\"\"\n self._check_app_installed()\n\n # Try launching the app with the v1 protocol. If that fails, fall back\n # to v0 for compatibility. Use info here so people know exactly what's\n # happening here, which is helpful since they need to create their own\n # instrumentations and manifest.\n self.log.info('Launching snippet apk %s with protocol v1',\n self.package)\n cmd = _LAUNCH_CMD_V1 % self.package\n start_time = time.time()\n self._proc = self._do_start_app(cmd)\n\n # \"Instrumentation crashed\" could be due to several reasons, eg\n # exception thrown during startup or just a launch protocol 0 snippet\n # dying because it needs the port flag. Sadly we have no way to tell so\n # just warn and retry as v0.\n # TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is\n # removed.\n line = self._read_protocol_line()\n if line in ('INSTRUMENTATION_RESULT: shortMsg=Process crashed.',\n 'INSTRUMENTATION_RESULT: shortMsg='\n 'java.lang.IllegalArgumentException'):\n self.log.warning('Snippet %s crashed on startup. This might be an '\n 'actual error or a snippet using deprecated v0 '\n 'start protocol. Retrying as a v0 snippet.',\n self.package)\n self.host_port = utils.get_available_host_port()\n # Reuse the host port as the device port in v0 snippet. 
This isn't\n # safe in general, but the protocol is deprecated.\n cmd = _LAUNCH_CMD_V0 % (self.host_port, self.package)\n self._proc = self._do_start_app(cmd)\n self._connect_to_v0()\n else:\n # Check protocol version and get the device port\n match = re.match('^SNIPPET START, PROTOCOL ([0-9]+) ([0-9]+)$',\n line)\n if not match or match.group(1) != '1':\n raise ProtocolVersionError(line)\n self._connect_to_v1()\n self.log.debug('Snippet %s started after %.1fs on host port %s',\n self.package, time.time() - start_time, self.host_port)\n\n def stop_app(self):\n # Kill the pending 'adb shell am instrument -w' process if there is one.\n # Although killing the snippet apk would abort this process anyway, we\n # want to call stop_standing_subprocess() to perform a health check,\n # print the failure stack trace if there was any, and reap it from the\n # process table.\n self.log.debug('Stopping snippet apk %s', self.package)\n try:\n # Close the socket connection.\n self.disconnect()\n if self._proc:\n utils.stop_standing_subprocess(self._proc)\n out = self._adb.shell(_STOP_CMD % self.package).decode('utf-8')\n if 'OK (0 tests)' not in out:\n raise Error('Failed to stop existing apk. Unexpected '\n 'output: %s' % out)\n finally:\n # Always clean up the adb port\n if self.host_port:\n self._adb.forward(['--remove', 'tcp:%d' % self.host_port])\n\n def _start_event_client(self):\n \"\"\"Overrides superclass.\"\"\"\n event_client = SnippetClient(\n package=self.package, adb_proxy=self._adb, log=self.log)\n event_client.host_port = self.host_port\n event_client.connect(self.uid,\n jsonrpc_client_base.JsonRpcCommand.CONTINUE)\n return event_client\n\n def _check_app_installed(self):\n # Check that the Mobly Snippet app is installed.\n out = self._adb.shell('pm list package')\n if not utils.grep('^package:%s$' % self.package, out):\n raise jsonrpc_client_base.AppStartError(\n '%s is not installed on %s' % (self.package, self._adb.serial))\n # Check that the app is instrumented.\n out = self._adb.shell('pm list instrumentation')\n matched_out = utils.grep('^instrumentation:%s/%s' %\n (self.package,\n _INSTRUMENTATION_RUNNER_PACKAGE), out)\n if not matched_out:\n raise jsonrpc_client_base.AppStartError(\n '%s is installed on %s, but it is not instrumented.' 
%\n (self.package, self._adb.serial))\n match = re.search('^instrumentation:(.*)\\/(.*) \\(target=(.*)\\)$',\n matched_out[0])\n target_name = match.group(3)\n # Check that the instrumentation target is installed if it's not the\n # same as the snippet package.\n if target_name != self.package:\n out = self._adb.shell('pm list package')\n if not utils.grep('^package:%s$' % target_name, out):\n raise jsonrpc_client_base.AppStartError(\n 'Instrumentation target %s is not installed on %s' %\n (target_name, self._adb.serial))\n\n def _do_start_app(self, launch_cmd):\n adb_cmd = [adb.ADB]\n if self._adb.serial:\n adb_cmd += ['-s', self._adb.serial]\n adb_cmd += ['shell', launch_cmd]\n return utils.start_standing_subprocess(adb_cmd, shell=False)\n\n # TODO(adorokhine): delete this in Mobly 1.6 when snippet v0 support is\n # removed.\n def _connect_to_v0(self):\n self.device_port = self.host_port\n self._adb.forward(\n ['tcp:%d' % self.host_port,\n 'tcp:%d' % self.device_port])\n start_time = time.time()\n expiration_time = start_time + _APP_START_WAIT_TIME_V0\n while time.time() < expiration_time:\n self.log.debug('Attempting to start %s.', self.package)\n try:\n self.connect()\n return\n except:\n self.log.debug(\n 'v0 snippet %s is not yet running, retrying',\n self.package,\n exc_info=True)\n time.sleep(1)\n raise jsonrpc_client_base.AppStartError(\n '%s failed to start on %s.' % (self.package, self._adb.serial))\n\n def _connect_to_v1(self):\n line = self._read_protocol_line()\n match = re.match('^SNIPPET SERVING, PORT ([0-9]+)$', line)\n if not match:\n raise jsonrpc_client_base.AppStartError(line)\n self.device_port = int(match.group(1))\n\n # Forward the device port to a new host port, and connect to that port\n self.host_port = utils.get_available_host_port()\n self._adb.forward(\n ['tcp:%d' % self.host_port,\n 'tcp:%d' % self.device_port])\n self.connect()\n\n def _read_protocol_line(self):\n \"\"\"Reads the next line of instrumentation output relevant to snippets.\n\n This method will skip over lines that don't start with 'SNIPPET' or\n 'INSTRUMENTATION_RESULT'.\n\n Returns:\n (str) Next line of snippet-related instrumentation output, stripped.\n\n Raises:\n jsonrpc_client_base.AppStartError: If EOF is reached without any\n protocol lines being read.\n \"\"\"\n while True:\n line = self._proc.stdout.readline().decode('utf-8')\n if not line:\n raise jsonrpc_client_base.AppStartError(\n 'Unexpected EOF waiting for app to start')\n # readline() uses an empty string to mark EOF, and a single newline\n # to mark regular empty lines in the output. Don't move the strip()\n # call above the truthiness check, or this method will start\n # considering any blank output line to be EOF.\n line = line.strip()\n if (line.startswith('INSTRUMENTATION_RESULT:') or\n line.startswith('SNIPPET ')):\n self.log.debug(\n 'Accepted line from instrumentation output: \"%s\"', line)\n return line\n self.log.debug('Discarded line from instrumentation output: \"%s\"',\n line)\n", "path": "mobly/controllers/android_device_lib/snippet_client.py"}]} | 3,416 | 1,011 |
gh_patches_debug_13248 | rasdani/github-patches | git_diff | graspologic-org__graspologic-965 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix to repeated compilation of numba
- [x] Does this PR have a descriptive title that could go in our release notes?
- [ ] Does this PR add any new dependencies?
- [ ] Does this PR modify any existing APIs?
- [ ] Is the change to the API backwards compatible?
- [ ] Have you built the documentation (reference and/or tutorial) and verified the generated documentation is appropriate?
Fixes #946
Adds a global function at the top of the file that calls nb.jit a single time, instead of calling it in the constructor. This avoids repeated compilation and speeds up the code.
--- END ISSUE ---
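
Editor's note (not part of the quoted PR description): the fix amounts to paying the `nb.jit` compilation cost once at import time rather than once per constructed object. A minimal sketch of the pattern is shown below, with a toy function standing in for the real `_edge_swap`.

```python
import numba as nb

def _toy_swap(x):
    return x + 1  # placeholder body; the real function rewires graph edges

# Wrapped exactly once, when the module is imported ...
_toy_swap_numba = nb.jit(_toy_swap)

class Swapper:
    def __init__(self, use_numba: bool = True):
        # ... so the constructor only picks a reference and never re-runs nb.jit.
        self._swap = _toy_swap_numba if use_numba else _toy_swap
```
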
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `graspologic/models/edge_swaps.py`
Content:
```
1 from typing import Optional
2
3 import numba as nb
4 import numpy as np
5 from beartype import beartype
6 from scipy.sparse import csr_matrix, lil_matrix
7 from sklearn.utils import check_scalar
8
9 from graspologic.preconditions import check_argument
10 from graspologic.types import AdjacencyMatrix, Tuple
11 from graspologic.utils import import_graph, is_loopless, is_symmetric, is_unweighted
12
13
14 # Code based on: https://github.com/joelnish/double-edge-swap-mcmc/blob/master/dbl_edge_mcmc.py
15 class EdgeSwapper:
16 """
17 Degree Preserving Edge Swaps
18
19 This class allows for performing degree preserving edge swaps to
20 generate new networks with the same degree sequence as the input network.
21
22 Attributes
23 ----------
24 adjacency : np.ndarray OR csr_matrix, shape (n_verts, n_verts)
25 The initial adjacency matrix to perform edge swaps on. Must be unweighted and undirected.
26
27 edge_list : np.ndarray, shape (n_verts, 2)
28 The corresponding edgelist for the input network
29
30 seed: int, optional
31 Random seed to make outputs reproducible, must be positive
32
33
34 References
35 ----------
36 .. [1] Fosdick, B. K., Larremore, D. B., Nishimura, J., & Ugander, J. (2018).
37 Configuring random graph models with fixed degree sequences.
38 Siam Review, 60(2), 315-355.
39
40 .. [2] Carstens, C. J., & Horadam, K. J. (2017).
41 Switching edges to randomize networks: what goes wrong and how to fix it.
42 Journal of Complex Networks, 5(3), 337-351.
43
44 .. [3] https://github.com/joelnish/double-edge-swap-mcmc/blob/master/dbl_edge_mcmc.py
45 """
46
47 @beartype
48 def __init__(self, adjacency: AdjacencyMatrix, seed: Optional[int] = None):
49
50 weight_check = is_unweighted(adjacency)
51 check_argument(weight_check, "adjacency must be unweighted")
52
53 loop_check = is_loopless(adjacency)
54 check_argument(loop_check, "adjacency cannot have loops")
55
56 direct_check = is_symmetric(adjacency)
57 check_argument(direct_check, "adjacency must be undirected")
58
59 max_seed = np.iinfo(np.uint32).max
60 if seed is None:
61 seed = np.random.randint(max_seed, dtype=np.int64)
62 seed = check_scalar(
63 seed, "seed", (int, np.integer), min_val=0, max_val=max_seed
64 )
65 self._rng = np.random.default_rng(seed)
66
67 adjacency = import_graph(adjacency, copy=True)
68
69 if isinstance(adjacency, csr_matrix):
70 # more efficient for manipulations which change sparsity structure
71 adjacency = lil_matrix(adjacency)
72 self._edge_swap_function = _edge_swap
73 else:
74 # for numpy input, use numba for JIT compilation
75 # NOTE: not convinced numba is helping much here, look into optimizing
76 self._edge_swap_function = nb.jit(_edge_swap)
77
78 self.adjacency = adjacency
79
80 edge_list = self._do_setup()
81 check_argument(len(edge_list) >= 2, "there must be at least 2 edges")
82 self.edge_list = edge_list
83
84 def _do_setup(self) -> np.ndarray:
85 """
86 Computes the edge_list from the adjancency matrix
87
88 Returns
89 -------
90 edge_list : np.ndarray, shape (n_verts, 2)
91 The corresponding edge_list of adjacency
92 """
93
94 # get edges for upper triangle of undirected graph
95 row_inds, col_inds = np.nonzero(self.adjacency)
96 upper = row_inds < col_inds
97 row_inds = row_inds[upper]
98 col_inds = col_inds[upper]
99 edge_list = np.stack((row_inds, col_inds)).T
100 return edge_list
101
102 def swap_edges(self, n_swaps: int = 1) -> Tuple[AdjacencyMatrix, np.ndarray]:
103 """
104 Performs a number of edge swaps on the graph
105
106 Parameters
107 ----------
108 n_swaps : int (default 1), optional
109 The number of edge swaps to be performed
110
111 Returns
112 -------
113 adjacency : np.ndarray OR csr.matrix, shape (n_verts, n_verts)
114 The adjancency matrix after a number of edge swaps are performed on the graph
115
116 edge_list : np.ndarray (n_verts, 2)
117 The edge_list after a number of edge swaps are perfomed on the graph
118 """
119
120 # Note: for some reason could not get reproducibility w/o setting seed
121 # inside of the _edge_swap_function itself
122 max_seed = np.iinfo(np.int32).max
123 for _ in range(n_swaps):
124 self.adjacency, self.edge_list = self._edge_swap_function(
125 self.adjacency,
126 self.edge_list,
127 seed=self._rng.integers(max_seed),
128 )
129
130 adjacency = self.adjacency
131 if isinstance(adjacency, lil_matrix):
132 adjacency = csr_matrix(adjacency)
133 else:
134 adjacency = adjacency.copy()
135
136 return adjacency, self.edge_list.copy()
137
138
139 def _edge_swap(
140 adjacency: AdjacencyMatrix, edge_list: np.ndarray, seed: Optional[int] = None
141 ) -> Tuple[AdjacencyMatrix, np.ndarray]:
142 """
143 Performs the edge swap on the adjacency matrix. If adjacency is
144 np.ndarray, then nopython=True is used in numba, but if adjacency
145 is csr_matrix, then forceobj=True is used in numba
146
147 Parameters
148 ----------
149 adjacency : np.ndarray OR csr_matrix, shape (n_verts, n_verts)
150 The initial adjacency matrix in which edge swaps are performed on it
151
152 edge_list : np.ndarray, shape (n_verts, 2)
153 The corresponding edge_list of adjacency
154
155 seed: int, optional
156 Random seed to make outputs reproducible, must be positive
157
158 Returns
159 -------
160 adjacency : np.ndarray OR csr_matrix, shape (n_verts, n_verts)
161 The adjancency matrix after an edge swap is performed on the graph
162
163 edge_list : np.ndarray (n_verts, 2)
164 The edge_list after an edge swap is perfomed on the graph
165 """
166
167 # need to use np.random here instead of the generator for numba compatibility
168 if seed is not None:
169 np.random.seed(seed)
170
171 # choose two indices at random
172 # NOTE: using np.random here for current numba compatibility
173 orig_inds = np.random.choice(len(edge_list), size=2, replace=False)
174
175 u, v = edge_list[orig_inds[0]]
176
177 # two types of swap orientations for undirected graph
178 if np.random.rand() < 0.5:
179 x, y = edge_list[orig_inds[1]]
180 else:
181 y, x = edge_list[orig_inds[1]]
182
183 # ensures no initial loops
184 if u == v or x == y:
185 return adjacency, edge_list
186
187 # ensures no loops after swap (must be swap on 4 distinct nodes)
188 if u == x or v == y:
189 return adjacency, edge_list
190
191 # save edge values
192 w_ux = adjacency[u, x]
193 w_vy = adjacency[v, y]
194
195 # ensures no multigraphs after swap
196 if w_ux >= 1 or w_vy >= 1:
197 return adjacency, edge_list
198
199 # perform the swap
200 adjacency[u, v] = 0
201 adjacency[v, u] = 0
202 adjacency[x, y] = 0
203 adjacency[y, x] = 0
204
205 adjacency[u, x] = 1
206 adjacency[x, u] = 1
207 adjacency[v, y] = 1
208 adjacency[y, v] = 1
209
210 # update edge list
211 edge_list[orig_inds[0]] = [u, x]
212 edge_list[orig_inds[1]] = [v, y]
213 return adjacency, edge_list
214
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/graspologic/models/edge_swaps.py b/graspologic/models/edge_swaps.py
--- a/graspologic/models/edge_swaps.py
+++ b/graspologic/models/edge_swaps.py
@@ -73,7 +73,7 @@
else:
# for numpy input, use numba for JIT compilation
# NOTE: not convinced numba is helping much here, look into optimizing
- self._edge_swap_function = nb.jit(_edge_swap)
+ self._edge_swap_function = _edge_swap_numba
self.adjacency = adjacency
@@ -211,3 +211,6 @@
edge_list[orig_inds[0]] = [u, x]
edge_list[orig_inds[1]] = [v, y]
return adjacency, edge_list
+
+
+_edge_swap_numba = nb.jit(_edge_swap)
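A note on the fix above: constructing the dispatcher with `nb.jit(_edge_swap)` inside `__init__` makes every new `EdgeSwapper` instance recompile the function on its first call, while binding the jitted function once at module level lets all instances share a single compiled dispatcher. The sketch below only illustrates that pattern; the toy function and class names are made up and are not part of graspologic.

```python
import numba as nb
import numpy as np

def _work(x):
    total = 0.0
    for v in x:
        total += v * v
    return total

# The dispatcher is created once at import time, so the JIT cost is paid
# once per process instead of once per instance.
_work_numba = nb.njit(_work)

class PerInstanceJit:
    def __init__(self):
        # Anti-pattern from the issue: a fresh dispatcher per instance,
        # so each instance triggers its own compilation on first call.
        self._fn = nb.njit(_work)

class ModuleLevelJit:
    def __init__(self):
        # Pattern applied by the patch: reuse the shared dispatcher.
        self._fn = _work_numba

x = np.arange(1_000.0)
ModuleLevelJit()._fn(x)
ModuleLevelJit()._fn(x)  # second instance does not recompile
```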
| {"golden_diff": "diff --git a/graspologic/models/edge_swaps.py b/graspologic/models/edge_swaps.py\n--- a/graspologic/models/edge_swaps.py\n+++ b/graspologic/models/edge_swaps.py\n@@ -73,7 +73,7 @@\n else:\n # for numpy input, use numba for JIT compilation\n # NOTE: not convinced numba is helping much here, look into optimizing\n- self._edge_swap_function = nb.jit(_edge_swap)\n+ self._edge_swap_function = _edge_swap_numba\n \n self.adjacency = adjacency\n \n@@ -211,3 +211,6 @@\n edge_list[orig_inds[0]] = [u, x]\n edge_list[orig_inds[1]] = [v, y]\n return adjacency, edge_list\n+\n+\n+_edge_swap_numba = nb.jit(_edge_swap)\n", "issue": "Fix to repeated compilation of numba\n- [x] Does this PR have a descriptive title that could go in our release notes?\r\n- [ ] Does this PR add any new dependencies?\r\n- [ ] Does this PR modify any existing APIs?\r\n - [ ] Is the change to the API backwards compatible?\r\n- [ ] Have you built the documentation (reference and/or tutorial) and verified the generated documentation is appropriate?\r\n\r\nFixes #946\r\n\r\nUtilizes the addition of a global function at the top of the file that calls the nb.jit function a single time instead of in the constructor. This speeds up the compilation and running speed of the code.\n", "before_files": [{"content": "from typing import Optional\n\nimport numba as nb\nimport numpy as np\nfrom beartype import beartype\nfrom scipy.sparse import csr_matrix, lil_matrix\nfrom sklearn.utils import check_scalar\n\nfrom graspologic.preconditions import check_argument\nfrom graspologic.types import AdjacencyMatrix, Tuple\nfrom graspologic.utils import import_graph, is_loopless, is_symmetric, is_unweighted\n\n\n# Code based on: https://github.com/joelnish/double-edge-swap-mcmc/blob/master/dbl_edge_mcmc.py\nclass EdgeSwapper:\n \"\"\"\n Degree Preserving Edge Swaps\n\n This class allows for performing degree preserving edge swaps to\n generate new networks with the same degree sequence as the input network.\n\n Attributes\n ----------\n adjacency : np.ndarray OR csr_matrix, shape (n_verts, n_verts)\n The initial adjacency matrix to perform edge swaps on. Must be unweighted and undirected.\n\n edge_list : np.ndarray, shape (n_verts, 2)\n The corresponding edgelist for the input network\n\n seed: int, optional\n Random seed to make outputs reproducible, must be positive\n\n\n References\n ----------\n .. [1] Fosdick, B. K., Larremore, D. B., Nishimura, J., & Ugander, J. (2018).\n Configuring random graph models with fixed degree sequences.\n Siam Review, 60(2), 315-355.\n\n .. [2] Carstens, C. J., & Horadam, K. J. (2017).\n Switching edges to randomize networks: what goes wrong and how to fix it.\n Journal of Complex Networks, 5(3), 337-351.\n\n .. 
[3] https://github.com/joelnish/double-edge-swap-mcmc/blob/master/dbl_edge_mcmc.py\n \"\"\"\n\n @beartype\n def __init__(self, adjacency: AdjacencyMatrix, seed: Optional[int] = None):\n\n weight_check = is_unweighted(adjacency)\n check_argument(weight_check, \"adjacency must be unweighted\")\n\n loop_check = is_loopless(adjacency)\n check_argument(loop_check, \"adjacency cannot have loops\")\n\n direct_check = is_symmetric(adjacency)\n check_argument(direct_check, \"adjacency must be undirected\")\n\n max_seed = np.iinfo(np.uint32).max\n if seed is None:\n seed = np.random.randint(max_seed, dtype=np.int64)\n seed = check_scalar(\n seed, \"seed\", (int, np.integer), min_val=0, max_val=max_seed\n )\n self._rng = np.random.default_rng(seed)\n\n adjacency = import_graph(adjacency, copy=True)\n\n if isinstance(adjacency, csr_matrix):\n # more efficient for manipulations which change sparsity structure\n adjacency = lil_matrix(adjacency)\n self._edge_swap_function = _edge_swap\n else:\n # for numpy input, use numba for JIT compilation\n # NOTE: not convinced numba is helping much here, look into optimizing\n self._edge_swap_function = nb.jit(_edge_swap)\n\n self.adjacency = adjacency\n\n edge_list = self._do_setup()\n check_argument(len(edge_list) >= 2, \"there must be at least 2 edges\")\n self.edge_list = edge_list\n\n def _do_setup(self) -> np.ndarray:\n \"\"\"\n Computes the edge_list from the adjancency matrix\n\n Returns\n -------\n edge_list : np.ndarray, shape (n_verts, 2)\n The corresponding edge_list of adjacency\n \"\"\"\n\n # get edges for upper triangle of undirected graph\n row_inds, col_inds = np.nonzero(self.adjacency)\n upper = row_inds < col_inds\n row_inds = row_inds[upper]\n col_inds = col_inds[upper]\n edge_list = np.stack((row_inds, col_inds)).T\n return edge_list\n\n def swap_edges(self, n_swaps: int = 1) -> Tuple[AdjacencyMatrix, np.ndarray]:\n \"\"\"\n Performs a number of edge swaps on the graph\n\n Parameters\n ----------\n n_swaps : int (default 1), optional\n The number of edge swaps to be performed\n\n Returns\n -------\n adjacency : np.ndarray OR csr.matrix, shape (n_verts, n_verts)\n The adjancency matrix after a number of edge swaps are performed on the graph\n\n edge_list : np.ndarray (n_verts, 2)\n The edge_list after a number of edge swaps are perfomed on the graph\n \"\"\"\n\n # Note: for some reason could not get reproducibility w/o setting seed\n # inside of the _edge_swap_function itself\n max_seed = np.iinfo(np.int32).max\n for _ in range(n_swaps):\n self.adjacency, self.edge_list = self._edge_swap_function(\n self.adjacency,\n self.edge_list,\n seed=self._rng.integers(max_seed),\n )\n\n adjacency = self.adjacency\n if isinstance(adjacency, lil_matrix):\n adjacency = csr_matrix(adjacency)\n else:\n adjacency = adjacency.copy()\n\n return adjacency, self.edge_list.copy()\n\n\ndef _edge_swap(\n adjacency: AdjacencyMatrix, edge_list: np.ndarray, seed: Optional[int] = None\n) -> Tuple[AdjacencyMatrix, np.ndarray]:\n \"\"\"\n Performs the edge swap on the adjacency matrix. 
If adjacency is\n np.ndarray, then nopython=True is used in numba, but if adjacency\n is csr_matrix, then forceobj=True is used in numba\n\n Parameters\n ----------\n adjacency : np.ndarray OR csr_matrix, shape (n_verts, n_verts)\n The initial adjacency matrix in which edge swaps are performed on it\n\n edge_list : np.ndarray, shape (n_verts, 2)\n The corresponding edge_list of adjacency\n\n seed: int, optional\n Random seed to make outputs reproducible, must be positive\n\n Returns\n -------\n adjacency : np.ndarray OR csr_matrix, shape (n_verts, n_verts)\n The adjancency matrix after an edge swap is performed on the graph\n\n edge_list : np.ndarray (n_verts, 2)\n The edge_list after an edge swap is perfomed on the graph\n \"\"\"\n\n # need to use np.random here instead of the generator for numba compatibility\n if seed is not None:\n np.random.seed(seed)\n\n # choose two indices at random\n # NOTE: using np.random here for current numba compatibility\n orig_inds = np.random.choice(len(edge_list), size=2, replace=False)\n\n u, v = edge_list[orig_inds[0]]\n\n # two types of swap orientations for undirected graph\n if np.random.rand() < 0.5:\n x, y = edge_list[orig_inds[1]]\n else:\n y, x = edge_list[orig_inds[1]]\n\n # ensures no initial loops\n if u == v or x == y:\n return adjacency, edge_list\n\n # ensures no loops after swap (must be swap on 4 distinct nodes)\n if u == x or v == y:\n return adjacency, edge_list\n\n # save edge values\n w_ux = adjacency[u, x]\n w_vy = adjacency[v, y]\n\n # ensures no multigraphs after swap\n if w_ux >= 1 or w_vy >= 1:\n return adjacency, edge_list\n\n # perform the swap\n adjacency[u, v] = 0\n adjacency[v, u] = 0\n adjacency[x, y] = 0\n adjacency[y, x] = 0\n\n adjacency[u, x] = 1\n adjacency[x, u] = 1\n adjacency[v, y] = 1\n adjacency[y, v] = 1\n\n # update edge list\n edge_list[orig_inds[0]] = [u, x]\n edge_list[orig_inds[1]] = [v, y]\n return adjacency, edge_list\n", "path": "graspologic/models/edge_swaps.py"}], "after_files": [{"content": "from typing import Optional\n\nimport numba as nb\nimport numpy as np\nfrom beartype import beartype\nfrom scipy.sparse import csr_matrix, lil_matrix\nfrom sklearn.utils import check_scalar\n\nfrom graspologic.preconditions import check_argument\nfrom graspologic.types import AdjacencyMatrix, Tuple\nfrom graspologic.utils import import_graph, is_loopless, is_symmetric, is_unweighted\n\n\n# Code based on: https://github.com/joelnish/double-edge-swap-mcmc/blob/master/dbl_edge_mcmc.py\nclass EdgeSwapper:\n \"\"\"\n Degree Preserving Edge Swaps\n\n This class allows for performing degree preserving edge swaps to\n generate new networks with the same degree sequence as the input network.\n\n Attributes\n ----------\n adjacency : np.ndarray OR csr_matrix, shape (n_verts, n_verts)\n The initial adjacency matrix to perform edge swaps on. Must be unweighted and undirected.\n\n edge_list : np.ndarray, shape (n_verts, 2)\n The corresponding edgelist for the input network\n\n seed: int, optional\n Random seed to make outputs reproducible, must be positive\n\n\n References\n ----------\n .. [1] Fosdick, B. K., Larremore, D. B., Nishimura, J., & Ugander, J. (2018).\n Configuring random graph models with fixed degree sequences.\n Siam Review, 60(2), 315-355.\n\n .. [2] Carstens, C. J., & Horadam, K. J. (2017).\n Switching edges to randomize networks: what goes wrong and how to fix it.\n Journal of Complex Networks, 5(3), 337-351.\n\n .. 
[3] https://github.com/joelnish/double-edge-swap-mcmc/blob/master/dbl_edge_mcmc.py\n \"\"\"\n\n @beartype\n def __init__(self, adjacency: AdjacencyMatrix, seed: Optional[int] = None):\n\n weight_check = is_unweighted(adjacency)\n check_argument(weight_check, \"adjacency must be unweighted\")\n\n loop_check = is_loopless(adjacency)\n check_argument(loop_check, \"adjacency cannot have loops\")\n\n direct_check = is_symmetric(adjacency)\n check_argument(direct_check, \"adjacency must be undirected\")\n\n max_seed = np.iinfo(np.uint32).max\n if seed is None:\n seed = np.random.randint(max_seed, dtype=np.int64)\n seed = check_scalar(\n seed, \"seed\", (int, np.integer), min_val=0, max_val=max_seed\n )\n self._rng = np.random.default_rng(seed)\n\n adjacency = import_graph(adjacency, copy=True)\n\n if isinstance(adjacency, csr_matrix):\n # more efficient for manipulations which change sparsity structure\n adjacency = lil_matrix(adjacency)\n self._edge_swap_function = _edge_swap\n else:\n # for numpy input, use numba for JIT compilation\n # NOTE: not convinced numba is helping much here, look into optimizing\n self._edge_swap_function = _edge_swap_numba\n\n self.adjacency = adjacency\n\n edge_list = self._do_setup()\n check_argument(len(edge_list) >= 2, \"there must be at least 2 edges\")\n self.edge_list = edge_list\n\n def _do_setup(self) -> np.ndarray:\n \"\"\"\n Computes the edge_list from the adjancency matrix\n\n Returns\n -------\n edge_list : np.ndarray, shape (n_verts, 2)\n The corresponding edge_list of adjacency\n \"\"\"\n\n # get edges for upper triangle of undirected graph\n row_inds, col_inds = np.nonzero(self.adjacency)\n upper = row_inds < col_inds\n row_inds = row_inds[upper]\n col_inds = col_inds[upper]\n edge_list = np.stack((row_inds, col_inds)).T\n return edge_list\n\n def swap_edges(self, n_swaps: int = 1) -> Tuple[AdjacencyMatrix, np.ndarray]:\n \"\"\"\n Performs a number of edge swaps on the graph\n\n Parameters\n ----------\n n_swaps : int (default 1), optional\n The number of edge swaps to be performed\n\n Returns\n -------\n adjacency : np.ndarray OR csr.matrix, shape (n_verts, n_verts)\n The adjancency matrix after a number of edge swaps are performed on the graph\n\n edge_list : np.ndarray (n_verts, 2)\n The edge_list after a number of edge swaps are perfomed on the graph\n \"\"\"\n\n # Note: for some reason could not get reproducibility w/o setting seed\n # inside of the _edge_swap_function itself\n max_seed = np.iinfo(np.int32).max\n for _ in range(n_swaps):\n self.adjacency, self.edge_list = self._edge_swap_function(\n self.adjacency,\n self.edge_list,\n seed=self._rng.integers(max_seed),\n )\n\n adjacency = self.adjacency\n if isinstance(adjacency, lil_matrix):\n adjacency = csr_matrix(adjacency)\n else:\n adjacency = adjacency.copy()\n\n return adjacency, self.edge_list.copy()\n\n\ndef _edge_swap(\n adjacency: AdjacencyMatrix, edge_list: np.ndarray, seed: Optional[int] = None\n) -> Tuple[AdjacencyMatrix, np.ndarray]:\n \"\"\"\n Performs the edge swap on the adjacency matrix. 
If adjacency is\n np.ndarray, then nopython=True is used in numba, but if adjacency\n is csr_matrix, then forceobj=True is used in numba\n\n Parameters\n ----------\n adjacency : np.ndarray OR csr_matrix, shape (n_verts, n_verts)\n The initial adjacency matrix in which edge swaps are performed on it\n\n edge_list : np.ndarray, shape (n_verts, 2)\n The corresponding edge_list of adjacency\n\n seed: int, optional\n Random seed to make outputs reproducible, must be positive\n\n Returns\n -------\n adjacency : np.ndarray OR csr_matrix, shape (n_verts, n_verts)\n The adjancency matrix after an edge swap is performed on the graph\n\n edge_list : np.ndarray (n_verts, 2)\n The edge_list after an edge swap is perfomed on the graph\n \"\"\"\n\n # need to use np.random here instead of the generator for numba compatibility\n if seed is not None:\n np.random.seed(seed)\n\n # choose two indices at random\n # NOTE: using np.random here for current numba compatibility\n orig_inds = np.random.choice(len(edge_list), size=2, replace=False)\n\n u, v = edge_list[orig_inds[0]]\n\n # two types of swap orientations for undirected graph\n if np.random.rand() < 0.5:\n x, y = edge_list[orig_inds[1]]\n else:\n y, x = edge_list[orig_inds[1]]\n\n # ensures no initial loops\n if u == v or x == y:\n return adjacency, edge_list\n\n # ensures no loops after swap (must be swap on 4 distinct nodes)\n if u == x or v == y:\n return adjacency, edge_list\n\n # save edge values\n w_ux = adjacency[u, x]\n w_vy = adjacency[v, y]\n\n # ensures no multigraphs after swap\n if w_ux >= 1 or w_vy >= 1:\n return adjacency, edge_list\n\n # perform the swap\n adjacency[u, v] = 0\n adjacency[v, u] = 0\n adjacency[x, y] = 0\n adjacency[y, x] = 0\n\n adjacency[u, x] = 1\n adjacency[x, u] = 1\n adjacency[v, y] = 1\n adjacency[y, v] = 1\n\n # update edge list\n edge_list[orig_inds[0]] = [u, x]\n edge_list[orig_inds[1]] = [v, y]\n return adjacency, edge_list\n\n\n_edge_swap_numba = nb.jit(_edge_swap)\n", "path": "graspologic/models/edge_swaps.py"}]} | 2,724 | 195 |
gh_patches_debug_36010 | rasdani/github-patches | git_diff | cal-itp__benefits-396 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Track sign in status in user session
## Background
Once the user successfully signs in to their Login.gov account, we will receive a response from the authentication server indicating their status. We can store the fact that the user is then "signed in" in their session using the existing [`session` framework](https://github.com/cal-itp/benefits/blob/dev/benefits/core/session.py), which keeps everything local to the user in secure cookies.
This task is just about setting up the session code to be able to track this, but not actually getting the status in there (see #373 for that).
Check out #321 / #330 as an example of a similar addition last Sprint.
## Tasks
- [x] Add a [new key `_AUTH = "auth"`](https://github.com/cal-itp/benefits/blob/dev/benefits/core/session.py#L26) for use elsewhere
- [x] Create a new function `auth()` that uses the key to get the stored `bool` corresponding to whether the user is signed in via Login.gov or not
- [x] Update the [`update()`](https://github.com/cal-itp/benefits/blob/dev/benefits/core/session.py#L192) function to accept a new kwarg `auth=False`; if it's a bool, store in the request's session using the key
- [x] Update the [`reset()`](https://github.com/cal-itp/benefits/blob/dev/benefits/core/session.py#L134) function to set the key to `False` by default
- [x] Update the [`context_dict()`](https://github.com/cal-itp/benefits/blob/dev/benefits/core/session.py#L52) function to add the new key/value - this shows up in the debug bar
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `benefits/core/session.py`
Content:
```
1 """
2 The core application: helpers to work with request sessions.
3 """
4 import hashlib
5 import logging
6 import time
7 import uuid
8
9 from django.urls import reverse
10
11 from benefits.settings import RATE_LIMIT_PERIOD
12 from . import models
13
14
15 logger = logging.getLogger(__name__)
16
17
18 _AGENCY = "agency"
19 _DEBUG = "debug"
20 _DID = "did"
21 _ELIGIBILITY = "eligibility"
22 _LANG = "lang"
23 _LIMITCOUNTER = "limitcounter"
24 _LIMITUNTIL = "limituntil"
25 _ORIGIN = "origin"
26 _START = "start"
27 _UID = "uid"
28 _VERIFIER = "verifier"
29
30 # ignore bandit B105:hardcoded_password_string
31 # as these are not passwords, but keys for the session dict
32 _TOKEN = "token" # nosec
33 _TOKEN_EXP = "token_exp" # nosec
34
35
36 def agency(request):
37 """Get the agency from the request's session, or None"""
38 logger.debug("Get session agency")
39 try:
40 return models.TransitAgency.by_id(request.session[_AGENCY])
41 except (KeyError, models.TransitAgency.DoesNotExist):
42 logger.debug("Can't get agency from session")
43 return None
44
45
46 def active_agency(request):
47 """True if the request's session is configured with an active agency. False otherwise."""
48 logger.debug("Get session active agency flag")
49 a = agency(request)
50 return a and a.active
51
52
53 def context_dict(request):
54 """The request's session context as a dict."""
55 logger.debug("Get session context dict")
56 return {
57 _AGENCY: agency(request).slug if active_agency(request) else None,
58 _LIMITCOUNTER: rate_limit_counter(request),
59 _DEBUG: debug(request),
60 _DID: did(request),
61 _ELIGIBILITY: eligibility(request),
62 _LANG: language(request),
63 _ORIGIN: origin(request),
64 _LIMITUNTIL: rate_limit_time(request),
65 _START: start(request),
66 _TOKEN: token(request),
67 _TOKEN_EXP: token_expiry(request),
68 _UID: uid(request),
69 _VERIFIER: verifier(request),
70 }
71
72
73 def debug(request):
74 """Get the DEBUG flag from the request's session."""
75 logger.debug("Get session debug flag")
76 return bool(request.session.get(_DEBUG, False))
77
78
79 def did(request):
80 """Get the session's device ID, a hashed version of the unique ID."""
81 logger.debug("Get session did")
82 d = request.session.get(_DID)
83 if not d:
84 reset(request)
85 d = request.session.get(_DID)
86 return str(d)
87
88
89 def eligibility(request):
90 """Get the confirmed models.EligibilityType from the request's session, or None"""
91 logger.debug("Get session confirmed eligibility")
92 eligibility = request.session.get(_ELIGIBILITY)
93 if eligibility:
94 return models.EligibilityType.get(eligibility)
95 else:
96 return None
97
98
99 def eligible(request):
100 """True if the request's session is configured with an active agency and has confirmed eligibility. False otherwise."""
101 logger.debug("Get session eligible flag")
102 return active_agency(request) and agency(request).supports_type(eligibility(request))
103
104
105 def increment_rate_limit_counter(request):
106 """Adds 1 to this session's rate limit counter."""
107 logger.debug("Increment rate limit counter")
108 c = rate_limit_counter(request)
109 request.session[_LIMITCOUNTER] = int(c) + 1
110
111
112 def language(request):
113 """Get the language configured for the request."""
114 logger.debug("Get session language")
115 return request.LANGUAGE_CODE
116
117
118 def origin(request):
119 """Get the origin for the request's session, or None."""
120 logger.debug("Get session origin")
121 return request.session.get(_ORIGIN)
122
123
124 def rate_limit_counter(request):
125 """Get this session's rate limit counter."""
126 logger.debug("Get rate limit counter")
127 return request.session.get(_LIMITCOUNTER)
128
129
130 def rate_limit_time(request):
131 """Get this session's rate limit time, a Unix timestamp after which the session's rate limt resets."""
132 logger.debug("Get rate limit time")
133 return request.session.get(_LIMITUNTIL)
134
135
136 def reset(request):
137 """Reset the session for the request."""
138 logger.debug("Reset session")
139 request.session[_AGENCY] = None
140 request.session[_ELIGIBILITY] = None
141 request.session[_ORIGIN] = reverse("core:index")
142 request.session[_TOKEN] = None
143 request.session[_TOKEN_EXP] = None
144 request.session[_VERIFIER] = None
145
146 if _UID not in request.session or not request.session[_UID]:
147 logger.debug("Reset session time and uid")
148 request.session[_START] = int(time.time() * 1000)
149 u = str(uuid.uuid4())
150 request.session[_UID] = u
151 request.session[_DID] = str(uuid.UUID(hashlib.sha512(bytes(u, "utf8")).hexdigest()[:32]))
152 reset_rate_limit(request)
153
154
155 def reset_rate_limit(request):
156 """Reset this session's rate limit counter and time."""
157 logger.debug("Reset rate limit")
158 request.session[_LIMITCOUNTER] = 0
159 # get the current time in Unix seconds, then add RATE_LIMIT_PERIOD seconds
160 request.session[_LIMITUNTIL] = int(time.time()) + RATE_LIMIT_PERIOD
161
162
163 def start(request):
164 """Get the start time from the request's session, as integer milliseconds since Epoch."""
165 logger.debug("Get session time")
166 s = request.session.get(_START)
167 if not s:
168 reset(request)
169 s = request.session.get(_START)
170 return s
171
172
173 def token(request):
174 """Get the token from the request's session, or None."""
175 logger.debug("Get session token")
176 return request.session.get(_TOKEN)
177
178
179 def token_expiry(request):
180 """Get the token's expiry time from the request's session, or None."""
181 logger.debug("Get session token expiry")
182 return request.session.get(_TOKEN_EXP)
183
184
185 def uid(request):
186 """Get the session's unique ID, generating a new one if necessary."""
187 logger.debug("Get session uid")
188 u = request.session.get(_UID)
189 if not u:
190 reset(request)
191 u = request.session.get(_UID)
192 return u
193
194
195 def update(request, agency=None, debug=None, eligibility_types=None, origin=None, token=None, token_exp=None, verifier=None):
196 """Update the request's session with non-null values."""
197 if agency is not None and isinstance(agency, models.TransitAgency):
198 logger.debug(f"Update session {_AGENCY}")
199 request.session[_AGENCY] = agency.id
200 if debug is not None:
201 logger.debug(f"Update session {_DEBUG}")
202 request.session[_DEBUG] = debug
203 if eligibility_types is not None and isinstance(eligibility_types, list):
204 logger.debug(f"Update session {_ELIGIBILITY}")
205 if len(eligibility_types) > 1:
206 raise NotImplementedError("Multiple eligibilities are not supported at this time.")
207 elif len(eligibility_types) == 1:
208 # get the eligibility corresponding to the session's agency
209 a = models.TransitAgency.by_id(request.session[_AGENCY])
210 t = str(eligibility_types[0]).strip()
211 request.session[_ELIGIBILITY] = a.get_type_id(t)
212 else:
213 # empty list, clear session eligibility
214 request.session[_ELIGIBILITY] = None
215 if origin is not None:
216 logger.debug(f"Update session {_ORIGIN}")
217 request.session[_ORIGIN] = origin
218 if token is not None:
219 logger.debug(f"Update session {_TOKEN}")
220 request.session[_TOKEN] = token
221 request.session[_TOKEN_EXP] = token_exp
222 if verifier is not None and isinstance(verifier, models.EligibilityVerifier):
223 logger.debug(f"Update session {_VERIFIER}")
224 request.session[_VERIFIER] = verifier.id
225
226
227 def valid_token(request):
228 """True if the request's session is configured with a valid token. False otherwise."""
229 if token(request) is not None:
230 logger.debug("Session contains a token")
231 exp = token_expiry(request)
232
233 # ensure token does not expire in the next 5 seconds
234 valid = exp is None or exp > (time.time() + 5)
235
236 logger.debug(f"Session token is {'valid' if valid else 'expired'}")
237 return valid
238 else:
239 logger.debug("Session does not contain a valid token")
240 return False
241
242
243 def verifier(request):
244 """Get the verifier from the request's session, or None"""
245 logger.debug("Get session verifier")
246 try:
247 return models.EligibilityVerifier.by_id(request.session[_VERIFIER])
248 except (KeyError, models.EligibilityVerifier.DoesNotExist):
249 logger.debug("Can't get verifier from session")
250 return None
251
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/benefits/core/session.py b/benefits/core/session.py
--- a/benefits/core/session.py
+++ b/benefits/core/session.py
@@ -16,6 +16,7 @@
_AGENCY = "agency"
+_AUTH = "auth"
_DEBUG = "debug"
_DID = "did"
_ELIGIBILITY = "eligibility"
@@ -50,11 +51,18 @@
return a and a.active
+def auth(request):
+ """Get the auth from the request's session, or None"""
+ logger.debug("Get session auth")
+ return request.session.get(_AUTH)
+
+
def context_dict(request):
"""The request's session context as a dict."""
logger.debug("Get session context dict")
return {
_AGENCY: agency(request).slug if active_agency(request) else None,
+ _AUTH: auth(request),
_LIMITCOUNTER: rate_limit_counter(request),
_DEBUG: debug(request),
_DID: did(request),
@@ -137,6 +145,7 @@
"""Reset the session for the request."""
logger.debug("Reset session")
request.session[_AGENCY] = None
+ request.session[_AUTH] = False
request.session[_ELIGIBILITY] = None
request.session[_ORIGIN] = reverse("core:index")
request.session[_TOKEN] = None
@@ -192,11 +201,24 @@
return u
-def update(request, agency=None, debug=None, eligibility_types=None, origin=None, token=None, token_exp=None, verifier=None):
+def update(
+ request,
+ agency=None,
+ auth=None,
+ debug=None,
+ eligibility_types=None,
+ origin=None,
+ token=None,
+ token_exp=None,
+ verifier=None,
+):
"""Update the request's session with non-null values."""
if agency is not None and isinstance(agency, models.TransitAgency):
logger.debug(f"Update session {_AGENCY}")
request.session[_AGENCY] = agency.id
+ if auth is not None and type(auth) == bool:
+ logger.debug(f"Update session {_AUTH}")
+ request.session[_AUTH] = auth
if debug is not None:
logger.debug(f"Update session {_DEBUG}")
request.session[_DEBUG] = debug
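For orientation, the call sites that the patch above enables would look roughly like this. It is a hypothetical sketch, not code from the benefits repository: the view functions and callback name are invented, and only `session.update(request, auth=...)` and `session.auth(request)` come from the patch.

```python
# Hypothetical Django view code using the new session helpers.
from benefits.core import session

def oauth_callback(request):
    # ...after the authentication server confirms a successful sign-in...
    session.update(request, auth=True)

def protected_view(request):
    if not session.auth(request):
        # reset() initializes the flag to False, so unauthenticated
        # users fall through to the sign-in flow here.
        ...
```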
| {"golden_diff": "diff --git a/benefits/core/session.py b/benefits/core/session.py\n--- a/benefits/core/session.py\n+++ b/benefits/core/session.py\n@@ -16,6 +16,7 @@\n \n \n _AGENCY = \"agency\"\n+_AUTH = \"auth\"\n _DEBUG = \"debug\"\n _DID = \"did\"\n _ELIGIBILITY = \"eligibility\"\n@@ -50,11 +51,18 @@\n return a and a.active\n \n \n+def auth(request):\n+ \"\"\"Get the auth from the request's session, or None\"\"\"\n+ logger.debug(\"Get session auth\")\n+ return request.session.get(_AUTH)\n+\n+\n def context_dict(request):\n \"\"\"The request's session context as a dict.\"\"\"\n logger.debug(\"Get session context dict\")\n return {\n _AGENCY: agency(request).slug if active_agency(request) else None,\n+ _AUTH: auth(request),\n _LIMITCOUNTER: rate_limit_counter(request),\n _DEBUG: debug(request),\n _DID: did(request),\n@@ -137,6 +145,7 @@\n \"\"\"Reset the session for the request.\"\"\"\n logger.debug(\"Reset session\")\n request.session[_AGENCY] = None\n+ request.session[_AUTH] = False\n request.session[_ELIGIBILITY] = None\n request.session[_ORIGIN] = reverse(\"core:index\")\n request.session[_TOKEN] = None\n@@ -192,11 +201,24 @@\n return u\n \n \n-def update(request, agency=None, debug=None, eligibility_types=None, origin=None, token=None, token_exp=None, verifier=None):\n+def update(\n+ request,\n+ agency=None,\n+ auth=None,\n+ debug=None,\n+ eligibility_types=None,\n+ origin=None,\n+ token=None,\n+ token_exp=None,\n+ verifier=None,\n+):\n \"\"\"Update the request's session with non-null values.\"\"\"\n if agency is not None and isinstance(agency, models.TransitAgency):\n logger.debug(f\"Update session {_AGENCY}\")\n request.session[_AGENCY] = agency.id\n+ if auth is not None and type(auth) == bool:\n+ logger.debug(f\"Update session {_AUTH}\")\n+ request.session[_AUTH] = auth\n if debug is not None:\n logger.debug(f\"Update session {_DEBUG}\")\n request.session[_DEBUG] = debug\n", "issue": "Track sign in status in user session\n## Background\r\n\r\nOnce the user successfully signs in to their Login.gov account, we will receive a response from the authentication server indicating their status. 
We can store the fact the the user is then \"signed in\" in their session using the existing [`session` framework](https://github.com/cal-itp/benefits/blob/dev/benefits/core/session.py) which keeps everything local to the user in secure cookies.\r\n\r\nThis task is just about setting up the session code to be able to track this, but not actually getting the status in there (see #373 for that).\r\n\r\nCheck out #321 / #330 as an example of a similar addition last Sprint.\r\n\r\n## Tasks\r\n\r\n- [x] Add a [new key `_AUTH = \"auth\"`](https://github.com/cal-itp/benefits/blob/dev/benefits/core/session.py#L26) for use elsewhere\r\n- [x] Create a new function `auth()` that uses the key to get the stored `bool` corresponding to whether the user is signed in via or not\r\n- [x] Update the [`update()`](https://github.com/cal-itp/benefits/blob/dev/benefits/core/session.py#L192) function to accept a new kwarg `auth=False`; if it's a bool, store in the request's session using the key\r\n- [x] Update the [`reset()`](https://github.com/cal-itp/benefits/blob/dev/benefits/core/session.py#L134) function to set the key to `False` by default\r\n- [x] Update the [`context_dict()`](https://github.com/cal-itp/benefits/blob/dev/benefits/core/session.py#L52) function to add the new key/value - this shows up in the debug bar\n", "before_files": [{"content": "\"\"\"\nThe core application: helpers to work with request sessions.\n\"\"\"\nimport hashlib\nimport logging\nimport time\nimport uuid\n\nfrom django.urls import reverse\n\nfrom benefits.settings import RATE_LIMIT_PERIOD\nfrom . import models\n\n\nlogger = logging.getLogger(__name__)\n\n\n_AGENCY = \"agency\"\n_DEBUG = \"debug\"\n_DID = \"did\"\n_ELIGIBILITY = \"eligibility\"\n_LANG = \"lang\"\n_LIMITCOUNTER = \"limitcounter\"\n_LIMITUNTIL = \"limituntil\"\n_ORIGIN = \"origin\"\n_START = \"start\"\n_UID = \"uid\"\n_VERIFIER = \"verifier\"\n\n# ignore bandit B105:hardcoded_password_string\n# as these are not passwords, but keys for the session dict\n_TOKEN = \"token\" # nosec\n_TOKEN_EXP = \"token_exp\" # nosec\n\n\ndef agency(request):\n \"\"\"Get the agency from the request's session, or None\"\"\"\n logger.debug(\"Get session agency\")\n try:\n return models.TransitAgency.by_id(request.session[_AGENCY])\n except (KeyError, models.TransitAgency.DoesNotExist):\n logger.debug(\"Can't get agency from session\")\n return None\n\n\ndef active_agency(request):\n \"\"\"True if the request's session is configured with an active agency. 
False otherwise.\"\"\"\n logger.debug(\"Get session active agency flag\")\n a = agency(request)\n return a and a.active\n\n\ndef context_dict(request):\n \"\"\"The request's session context as a dict.\"\"\"\n logger.debug(\"Get session context dict\")\n return {\n _AGENCY: agency(request).slug if active_agency(request) else None,\n _LIMITCOUNTER: rate_limit_counter(request),\n _DEBUG: debug(request),\n _DID: did(request),\n _ELIGIBILITY: eligibility(request),\n _LANG: language(request),\n _ORIGIN: origin(request),\n _LIMITUNTIL: rate_limit_time(request),\n _START: start(request),\n _TOKEN: token(request),\n _TOKEN_EXP: token_expiry(request),\n _UID: uid(request),\n _VERIFIER: verifier(request),\n }\n\n\ndef debug(request):\n \"\"\"Get the DEBUG flag from the request's session.\"\"\"\n logger.debug(\"Get session debug flag\")\n return bool(request.session.get(_DEBUG, False))\n\n\ndef did(request):\n \"\"\"Get the session's device ID, a hashed version of the unique ID.\"\"\"\n logger.debug(\"Get session did\")\n d = request.session.get(_DID)\n if not d:\n reset(request)\n d = request.session.get(_DID)\n return str(d)\n\n\ndef eligibility(request):\n \"\"\"Get the confirmed models.EligibilityType from the request's session, or None\"\"\"\n logger.debug(\"Get session confirmed eligibility\")\n eligibility = request.session.get(_ELIGIBILITY)\n if eligibility:\n return models.EligibilityType.get(eligibility)\n else:\n return None\n\n\ndef eligible(request):\n \"\"\"True if the request's session is configured with an active agency and has confirmed eligibility. False otherwise.\"\"\"\n logger.debug(\"Get session eligible flag\")\n return active_agency(request) and agency(request).supports_type(eligibility(request))\n\n\ndef increment_rate_limit_counter(request):\n \"\"\"Adds 1 to this session's rate limit counter.\"\"\"\n logger.debug(\"Increment rate limit counter\")\n c = rate_limit_counter(request)\n request.session[_LIMITCOUNTER] = int(c) + 1\n\n\ndef language(request):\n \"\"\"Get the language configured for the request.\"\"\"\n logger.debug(\"Get session language\")\n return request.LANGUAGE_CODE\n\n\ndef origin(request):\n \"\"\"Get the origin for the request's session, or None.\"\"\"\n logger.debug(\"Get session origin\")\n return request.session.get(_ORIGIN)\n\n\ndef rate_limit_counter(request):\n \"\"\"Get this session's rate limit counter.\"\"\"\n logger.debug(\"Get rate limit counter\")\n return request.session.get(_LIMITCOUNTER)\n\n\ndef rate_limit_time(request):\n \"\"\"Get this session's rate limit time, a Unix timestamp after which the session's rate limt resets.\"\"\"\n logger.debug(\"Get rate limit time\")\n return request.session.get(_LIMITUNTIL)\n\n\ndef reset(request):\n \"\"\"Reset the session for the request.\"\"\"\n logger.debug(\"Reset session\")\n request.session[_AGENCY] = None\n request.session[_ELIGIBILITY] = None\n request.session[_ORIGIN] = reverse(\"core:index\")\n request.session[_TOKEN] = None\n request.session[_TOKEN_EXP] = None\n request.session[_VERIFIER] = None\n\n if _UID not in request.session or not request.session[_UID]:\n logger.debug(\"Reset session time and uid\")\n request.session[_START] = int(time.time() * 1000)\n u = str(uuid.uuid4())\n request.session[_UID] = u\n request.session[_DID] = str(uuid.UUID(hashlib.sha512(bytes(u, \"utf8\")).hexdigest()[:32]))\n reset_rate_limit(request)\n\n\ndef reset_rate_limit(request):\n \"\"\"Reset this session's rate limit counter and time.\"\"\"\n logger.debug(\"Reset rate limit\")\n request.session[_LIMITCOUNTER] 
= 0\n # get the current time in Unix seconds, then add RATE_LIMIT_PERIOD seconds\n request.session[_LIMITUNTIL] = int(time.time()) + RATE_LIMIT_PERIOD\n\n\ndef start(request):\n \"\"\"Get the start time from the request's session, as integer milliseconds since Epoch.\"\"\"\n logger.debug(\"Get session time\")\n s = request.session.get(_START)\n if not s:\n reset(request)\n s = request.session.get(_START)\n return s\n\n\ndef token(request):\n \"\"\"Get the token from the request's session, or None.\"\"\"\n logger.debug(\"Get session token\")\n return request.session.get(_TOKEN)\n\n\ndef token_expiry(request):\n \"\"\"Get the token's expiry time from the request's session, or None.\"\"\"\n logger.debug(\"Get session token expiry\")\n return request.session.get(_TOKEN_EXP)\n\n\ndef uid(request):\n \"\"\"Get the session's unique ID, generating a new one if necessary.\"\"\"\n logger.debug(\"Get session uid\")\n u = request.session.get(_UID)\n if not u:\n reset(request)\n u = request.session.get(_UID)\n return u\n\n\ndef update(request, agency=None, debug=None, eligibility_types=None, origin=None, token=None, token_exp=None, verifier=None):\n \"\"\"Update the request's session with non-null values.\"\"\"\n if agency is not None and isinstance(agency, models.TransitAgency):\n logger.debug(f\"Update session {_AGENCY}\")\n request.session[_AGENCY] = agency.id\n if debug is not None:\n logger.debug(f\"Update session {_DEBUG}\")\n request.session[_DEBUG] = debug\n if eligibility_types is not None and isinstance(eligibility_types, list):\n logger.debug(f\"Update session {_ELIGIBILITY}\")\n if len(eligibility_types) > 1:\n raise NotImplementedError(\"Multiple eligibilities are not supported at this time.\")\n elif len(eligibility_types) == 1:\n # get the eligibility corresponding to the session's agency\n a = models.TransitAgency.by_id(request.session[_AGENCY])\n t = str(eligibility_types[0]).strip()\n request.session[_ELIGIBILITY] = a.get_type_id(t)\n else:\n # empty list, clear session eligibility\n request.session[_ELIGIBILITY] = None\n if origin is not None:\n logger.debug(f\"Update session {_ORIGIN}\")\n request.session[_ORIGIN] = origin\n if token is not None:\n logger.debug(f\"Update session {_TOKEN}\")\n request.session[_TOKEN] = token\n request.session[_TOKEN_EXP] = token_exp\n if verifier is not None and isinstance(verifier, models.EligibilityVerifier):\n logger.debug(f\"Update session {_VERIFIER}\")\n request.session[_VERIFIER] = verifier.id\n\n\ndef valid_token(request):\n \"\"\"True if the request's session is configured with a valid token. 
False otherwise.\"\"\"\n if token(request) is not None:\n logger.debug(\"Session contains a token\")\n exp = token_expiry(request)\n\n # ensure token does not expire in the next 5 seconds\n valid = exp is None or exp > (time.time() + 5)\n\n logger.debug(f\"Session token is {'valid' if valid else 'expired'}\")\n return valid\n else:\n logger.debug(\"Session does not contain a valid token\")\n return False\n\n\ndef verifier(request):\n \"\"\"Get the verifier from the request's session, or None\"\"\"\n logger.debug(\"Get session verifier\")\n try:\n return models.EligibilityVerifier.by_id(request.session[_VERIFIER])\n except (KeyError, models.EligibilityVerifier.DoesNotExist):\n logger.debug(\"Can't get verifier from session\")\n return None\n", "path": "benefits/core/session.py"}], "after_files": [{"content": "\"\"\"\nThe core application: helpers to work with request sessions.\n\"\"\"\nimport hashlib\nimport logging\nimport time\nimport uuid\n\nfrom django.urls import reverse\n\nfrom benefits.settings import RATE_LIMIT_PERIOD\nfrom . import models\n\n\nlogger = logging.getLogger(__name__)\n\n\n_AGENCY = \"agency\"\n_AUTH = \"auth\"\n_DEBUG = \"debug\"\n_DID = \"did\"\n_ELIGIBILITY = \"eligibility\"\n_LANG = \"lang\"\n_LIMITCOUNTER = \"limitcounter\"\n_LIMITUNTIL = \"limituntil\"\n_ORIGIN = \"origin\"\n_START = \"start\"\n_UID = \"uid\"\n_VERIFIER = \"verifier\"\n\n# ignore bandit B105:hardcoded_password_string\n# as these are not passwords, but keys for the session dict\n_TOKEN = \"token\" # nosec\n_TOKEN_EXP = \"token_exp\" # nosec\n\n\ndef agency(request):\n \"\"\"Get the agency from the request's session, or None\"\"\"\n logger.debug(\"Get session agency\")\n try:\n return models.TransitAgency.by_id(request.session[_AGENCY])\n except (KeyError, models.TransitAgency.DoesNotExist):\n logger.debug(\"Can't get agency from session\")\n return None\n\n\ndef active_agency(request):\n \"\"\"True if the request's session is configured with an active agency. 
False otherwise.\"\"\"\n logger.debug(\"Get session active agency flag\")\n a = agency(request)\n return a and a.active\n\n\ndef auth(request):\n \"\"\"Get the auth from the request's session, or None\"\"\"\n logger.debug(\"Get session auth\")\n return request.session.get(_AUTH)\n\n\ndef context_dict(request):\n \"\"\"The request's session context as a dict.\"\"\"\n logger.debug(\"Get session context dict\")\n return {\n _AGENCY: agency(request).slug if active_agency(request) else None,\n _AUTH: auth(request),\n _LIMITCOUNTER: rate_limit_counter(request),\n _DEBUG: debug(request),\n _DID: did(request),\n _ELIGIBILITY: eligibility(request),\n _LANG: language(request),\n _ORIGIN: origin(request),\n _LIMITUNTIL: rate_limit_time(request),\n _START: start(request),\n _TOKEN: token(request),\n _TOKEN_EXP: token_expiry(request),\n _UID: uid(request),\n _VERIFIER: verifier(request),\n }\n\n\ndef debug(request):\n \"\"\"Get the DEBUG flag from the request's session.\"\"\"\n logger.debug(\"Get session debug flag\")\n return bool(request.session.get(_DEBUG, False))\n\n\ndef did(request):\n \"\"\"Get the session's device ID, a hashed version of the unique ID.\"\"\"\n logger.debug(\"Get session did\")\n d = request.session.get(_DID)\n if not d:\n reset(request)\n d = request.session.get(_DID)\n return str(d)\n\n\ndef eligibility(request):\n \"\"\"Get the confirmed models.EligibilityType from the request's session, or None\"\"\"\n logger.debug(\"Get session confirmed eligibility\")\n eligibility = request.session.get(_ELIGIBILITY)\n if eligibility:\n return models.EligibilityType.get(eligibility)\n else:\n return None\n\n\ndef eligible(request):\n \"\"\"True if the request's session is configured with an active agency and has confirmed eligibility. False otherwise.\"\"\"\n logger.debug(\"Get session eligible flag\")\n return active_agency(request) and agency(request).supports_type(eligibility(request))\n\n\ndef increment_rate_limit_counter(request):\n \"\"\"Adds 1 to this session's rate limit counter.\"\"\"\n logger.debug(\"Increment rate limit counter\")\n c = rate_limit_counter(request)\n request.session[_LIMITCOUNTER] = int(c) + 1\n\n\ndef language(request):\n \"\"\"Get the language configured for the request.\"\"\"\n logger.debug(\"Get session language\")\n return request.LANGUAGE_CODE\n\n\ndef origin(request):\n \"\"\"Get the origin for the request's session, or None.\"\"\"\n logger.debug(\"Get session origin\")\n return request.session.get(_ORIGIN)\n\n\ndef rate_limit_counter(request):\n \"\"\"Get this session's rate limit counter.\"\"\"\n logger.debug(\"Get rate limit counter\")\n return request.session.get(_LIMITCOUNTER)\n\n\ndef rate_limit_time(request):\n \"\"\"Get this session's rate limit time, a Unix timestamp after which the session's rate limt resets.\"\"\"\n logger.debug(\"Get rate limit time\")\n return request.session.get(_LIMITUNTIL)\n\n\ndef reset(request):\n \"\"\"Reset the session for the request.\"\"\"\n logger.debug(\"Reset session\")\n request.session[_AGENCY] = None\n request.session[_AUTH] = False\n request.session[_ELIGIBILITY] = None\n request.session[_ORIGIN] = reverse(\"core:index\")\n request.session[_TOKEN] = None\n request.session[_TOKEN_EXP] = None\n request.session[_VERIFIER] = None\n\n if _UID not in request.session or not request.session[_UID]:\n logger.debug(\"Reset session time and uid\")\n request.session[_START] = int(time.time() * 1000)\n u = str(uuid.uuid4())\n request.session[_UID] = u\n request.session[_DID] = str(uuid.UUID(hashlib.sha512(bytes(u, 
\"utf8\")).hexdigest()[:32]))\n reset_rate_limit(request)\n\n\ndef reset_rate_limit(request):\n \"\"\"Reset this session's rate limit counter and time.\"\"\"\n logger.debug(\"Reset rate limit\")\n request.session[_LIMITCOUNTER] = 0\n # get the current time in Unix seconds, then add RATE_LIMIT_PERIOD seconds\n request.session[_LIMITUNTIL] = int(time.time()) + RATE_LIMIT_PERIOD\n\n\ndef start(request):\n \"\"\"Get the start time from the request's session, as integer milliseconds since Epoch.\"\"\"\n logger.debug(\"Get session time\")\n s = request.session.get(_START)\n if not s:\n reset(request)\n s = request.session.get(_START)\n return s\n\n\ndef token(request):\n \"\"\"Get the token from the request's session, or None.\"\"\"\n logger.debug(\"Get session token\")\n return request.session.get(_TOKEN)\n\n\ndef token_expiry(request):\n \"\"\"Get the token's expiry time from the request's session, or None.\"\"\"\n logger.debug(\"Get session token expiry\")\n return request.session.get(_TOKEN_EXP)\n\n\ndef uid(request):\n \"\"\"Get the session's unique ID, generating a new one if necessary.\"\"\"\n logger.debug(\"Get session uid\")\n u = request.session.get(_UID)\n if not u:\n reset(request)\n u = request.session.get(_UID)\n return u\n\n\ndef update(\n request,\n agency=None,\n auth=None,\n debug=None,\n eligibility_types=None,\n origin=None,\n token=None,\n token_exp=None,\n verifier=None,\n):\n \"\"\"Update the request's session with non-null values.\"\"\"\n if agency is not None and isinstance(agency, models.TransitAgency):\n logger.debug(f\"Update session {_AGENCY}\")\n request.session[_AGENCY] = agency.id\n if auth is not None and type(auth) == bool:\n logger.debug(f\"Update session {_AUTH}\")\n request.session[_AUTH] = auth\n if debug is not None:\n logger.debug(f\"Update session {_DEBUG}\")\n request.session[_DEBUG] = debug\n if eligibility_types is not None and isinstance(eligibility_types, list):\n logger.debug(f\"Update session {_ELIGIBILITY}\")\n if len(eligibility_types) > 1:\n raise NotImplementedError(\"Multiple eligibilities are not supported at this time.\")\n elif len(eligibility_types) == 1:\n # get the eligibility corresponding to the session's agency\n a = models.TransitAgency.by_id(request.session[_AGENCY])\n t = str(eligibility_types[0]).strip()\n request.session[_ELIGIBILITY] = a.get_type_id(t)\n else:\n # empty list, clear session eligibility\n request.session[_ELIGIBILITY] = None\n if origin is not None:\n logger.debug(f\"Update session {_ORIGIN}\")\n request.session[_ORIGIN] = origin\n if token is not None:\n logger.debug(f\"Update session {_TOKEN}\")\n request.session[_TOKEN] = token\n request.session[_TOKEN_EXP] = token_exp\n if verifier is not None and isinstance(verifier, models.EligibilityVerifier):\n logger.debug(f\"Update session {_VERIFIER}\")\n request.session[_VERIFIER] = verifier.id\n\n\ndef valid_token(request):\n \"\"\"True if the request's session is configured with a valid token. 
False otherwise.\"\"\"\n if token(request) is not None:\n logger.debug(\"Session contains a token\")\n exp = token_expiry(request)\n\n # ensure token does not expire in the next 5 seconds\n valid = exp is None or exp > (time.time() + 5)\n\n logger.debug(f\"Session token is {'valid' if valid else 'expired'}\")\n return valid\n else:\n logger.debug(\"Session does not contain a valid token\")\n return False\n\n\ndef verifier(request):\n \"\"\"Get the verifier from the request's session, or None\"\"\"\n logger.debug(\"Get session verifier\")\n try:\n return models.EligibilityVerifier.by_id(request.session[_VERIFIER])\n except (KeyError, models.EligibilityVerifier.DoesNotExist):\n logger.debug(\"Can't get verifier from session\")\n return None\n", "path": "benefits/core/session.py"}]} | 3,194 | 520 |
gh_patches_debug_36715 | rasdani/github-patches | git_diff | Mailu__Mailu-1968 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Insufficient input validation in /internal endpoint
It seems like it is possible to crash a thread of Flask by sending an invalid UTF-8 username.
As far as I can see, this cannot be used to gain access (though it should still be validated).
Here are logs to reproduce this.
[error] 56#56: *1312970 auth http server 127.0.0.1:8000 did not send server or port while in http auth state, client: xx.xx.xx.xx, server: 0.0.0.0:25, login: "xxxxxxxxx\[email protected]"
GET /internal/auth/email HTTP/1.0" 500 290 "-" "-"
ERROR in app: Exception on /internal/auth/email [GET]
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/usr/lib/python3.7/site-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/lib/python3.7/site-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/lib/python3.7/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "/usr/lib/python3.7/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/lib/python3.7/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/lib/python3.7/site-packages/flask_limiter/extension.py", line 544, in __inner
return obj(*a, **k)
File "/app/mailu/internal/views/auth.py", line 18, in nginx_authentication
headers = nginx.handle_authentication(flask.request.headers)
File "/app/mailu/internal/nginx.py", line 45, in handle_authentication
user_email = raw_user_email.encode("iso8859-1").decode("utf8")
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xed in position 6: invalid continuation byte
xx.xx.xx.xx - - [xx/xx/xxxx:xx:xx:xx +0000] "GET /internal/auth/email HTTP/1.0" 500 290 "-" "-"
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/admin/mailu/internal/nginx.py`
Content:
```
1 from mailu import models
2 from flask import current_app as app
3
4 import re
5 import urllib
6 import ipaddress
7 import socket
8 import tenacity
9
10 SUPPORTED_AUTH_METHODS = ["none", "plain"]
11
12
13 STATUSES = {
14 "authentication": ("Authentication credentials invalid", {
15 "imap": "AUTHENTICATIONFAILED",
16 "smtp": "535 5.7.8",
17 "pop3": "-ERR Authentication failed"
18 }),
19 "encryption": ("Must issue a STARTTLS command first", {
20 "smtp": "530 5.7.0"
21 }),
22 }
23
24 def check_credentials(user, password, ip, protocol=None):
25 if not user or not user.enabled or (protocol == "imap" and not user.enable_imap) or (protocol == "pop3" and not user.enable_pop):
26 return False
27 is_ok = False
28 # webmails
29 if len(password) == 64 and ip == app.config['WEBMAIL_ADDRESS']:
30 if user.verify_temp_token(password):
31 is_ok = True
32 # All tokens are 32 characters hex lowercase
33 if not is_ok and len(password) == 32:
34 for token in user.tokens:
35 if (token.check_password(password) and
36 (not token.ip or token.ip == ip)):
37 is_ok = True
38 break
39 if not is_ok and user.check_password(password):
40 is_ok = True
41 return is_ok
42
43 def handle_authentication(headers):
44 """ Handle an HTTP nginx authentication request
45 See: http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html#protocol
46 """
47 method = headers["Auth-Method"]
48 protocol = headers["Auth-Protocol"]
49 # Incoming mail, no authentication
50 if method == "none" and protocol == "smtp":
51 server, port = get_server(protocol, False)
52 if app.config["INBOUND_TLS_ENFORCE"]:
53 if "Auth-SSL" in headers and headers["Auth-SSL"] == "on":
54 return {
55 "Auth-Status": "OK",
56 "Auth-Server": server,
57 "Auth-Port": port
58 }
59 else:
60 status, code = get_status(protocol, "encryption")
61 return {
62 "Auth-Status": status,
63 "Auth-Error-Code" : code,
64 "Auth-Wait": 0
65 }
66 else:
67 return {
68 "Auth-Status": "OK",
69 "Auth-Server": server,
70 "Auth-Port": port
71 }
72 # Authenticated user
73 elif method == "plain":
74 server, port = get_server(headers["Auth-Protocol"], True)
75 # According to RFC2616 section 3.7.1 and PEP 3333, HTTP headers should
76 # be ASCII and are generally considered ISO8859-1. However when passing
77 # the password, nginx does not transcode the input UTF string, thus
78 # we need to manually decode.
79 raw_user_email = urllib.parse.unquote(headers["Auth-User"])
80 user_email = raw_user_email.encode("iso8859-1").decode("utf8")
81 raw_password = urllib.parse.unquote(headers["Auth-Pass"])
82 password = raw_password.encode("iso8859-1").decode("utf8")
83 ip = urllib.parse.unquote(headers["Client-Ip"])
84 service_port = int(urllib.parse.unquote(headers["Auth-Port"]))
85 if service_port == 25:
86 return {
87 "Auth-Status": "AUTH not supported",
88 "Auth-Error-Code": "502 5.5.1",
89 "Auth-Wait": 0
90 }
91 user = models.User.query.get(user_email)
92 if check_credentials(user, password, ip, protocol):
93 return {
94 "Auth-Status": "OK",
95 "Auth-Server": server,
96 "Auth-Port": port
97 }
98 else:
99 status, code = get_status(protocol, "authentication")
100 return {
101 "Auth-Status": status,
102 "Auth-Error-Code": code,
103 "Auth-Wait": 0
104 }
105 # Unexpected
106 return {}
107
108
109 def get_status(protocol, status):
110 """ Return the proper error code depending on the protocol
111 """
112 status, codes = STATUSES[status]
113 return status, codes[protocol]
114
115 def extract_host_port(host_and_port, default_port):
116 host, _, port = re.match('^(.*?)(:([0-9]*))?$', host_and_port).groups()
117 return host, int(port) if port else default_port
118
119 def get_server(protocol, authenticated=False):
120 if protocol == "imap":
121 hostname, port = extract_host_port(app.config['IMAP_ADDRESS'], 143)
122 elif protocol == "pop3":
123 hostname, port = extract_host_port(app.config['POP3_ADDRESS'], 110)
124 elif protocol == "smtp":
125 if authenticated:
126 hostname, port = extract_host_port(app.config['AUTHSMTP_ADDRESS'], 10025)
127 else:
128 hostname, port = extract_host_port(app.config['SMTP_ADDRESS'], 25)
129 try:
130 # test if hostname is already resolved to an ip adddress
131 ipaddress.ip_address(hostname)
132 except:
133 # hostname is not an ip address - so we need to resolve it
134 hostname = resolve_hostname(hostname)
135 return hostname, port
136
137 @tenacity.retry(stop=tenacity.stop_after_attempt(100),
138 wait=tenacity.wait_random(min=2, max=5))
139 def resolve_hostname(hostname):
140 """ This function uses system DNS to resolve a hostname.
141 It is capable of retrying in case the host is not immediately available
142 """
143 return socket.gethostbyname(hostname)
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/admin/mailu/internal/nginx.py b/core/admin/mailu/internal/nginx.py
--- a/core/admin/mailu/internal/nginx.py
+++ b/core/admin/mailu/internal/nginx.py
@@ -71,16 +71,6 @@
}
# Authenticated user
elif method == "plain":
- server, port = get_server(headers["Auth-Protocol"], True)
- # According to RFC2616 section 3.7.1 and PEP 3333, HTTP headers should
- # be ASCII and are generally considered ISO8859-1. However when passing
- # the password, nginx does not transcode the input UTF string, thus
- # we need to manually decode.
- raw_user_email = urllib.parse.unquote(headers["Auth-User"])
- user_email = raw_user_email.encode("iso8859-1").decode("utf8")
- raw_password = urllib.parse.unquote(headers["Auth-Pass"])
- password = raw_password.encode("iso8859-1").decode("utf8")
- ip = urllib.parse.unquote(headers["Client-Ip"])
service_port = int(urllib.parse.unquote(headers["Auth-Port"]))
if service_port == 25:
return {
@@ -88,20 +78,33 @@
"Auth-Error-Code": "502 5.5.1",
"Auth-Wait": 0
}
- user = models.User.query.get(user_email)
- if check_credentials(user, password, ip, protocol):
- return {
- "Auth-Status": "OK",
- "Auth-Server": server,
- "Auth-Port": port
- }
+ # According to RFC2616 section 3.7.1 and PEP 3333, HTTP headers should
+ # be ASCII and are generally considered ISO8859-1. However when passing
+ # the password, nginx does not transcode the input UTF string, thus
+ # we need to manually decode.
+ raw_user_email = urllib.parse.unquote(headers["Auth-User"])
+ raw_password = urllib.parse.unquote(headers["Auth-Pass"])
+ try:
+ user_email = raw_user_email.encode("iso8859-1").decode("utf8")
+ password = raw_password.encode("iso8859-1").decode("utf8")
+ except:
+ app.logger.warn(f'Received undecodable user/password from nginx: {raw_user_email!r}/{raw_password!r}')
else:
- status, code = get_status(protocol, "authentication")
- return {
- "Auth-Status": status,
- "Auth-Error-Code": code,
- "Auth-Wait": 0
- }
+ user = models.User.query.get(user_email)
+ ip = urllib.parse.unquote(headers["Client-Ip"])
+ if check_credentials(user, password, ip, protocol):
+ server, port = get_server(headers["Auth-Protocol"], True)
+ return {
+ "Auth-Status": "OK",
+ "Auth-Server": server,
+ "Auth-Port": port
+ }
+ status, code = get_status(protocol, "authentication")
+ return {
+ "Auth-Status": status,
+ "Auth-Error-Code": code,
+ "Auth-Wait": 0
+ }
# Unexpected
return {}
| {"golden_diff": "diff --git a/core/admin/mailu/internal/nginx.py b/core/admin/mailu/internal/nginx.py\n--- a/core/admin/mailu/internal/nginx.py\n+++ b/core/admin/mailu/internal/nginx.py\n@@ -71,16 +71,6 @@\n }\n # Authenticated user\n elif method == \"plain\":\n- server, port = get_server(headers[\"Auth-Protocol\"], True)\n- # According to RFC2616 section 3.7.1 and PEP 3333, HTTP headers should\n- # be ASCII and are generally considered ISO8859-1. However when passing\n- # the password, nginx does not transcode the input UTF string, thus\n- # we need to manually decode.\n- raw_user_email = urllib.parse.unquote(headers[\"Auth-User\"])\n- user_email = raw_user_email.encode(\"iso8859-1\").decode(\"utf8\")\n- raw_password = urllib.parse.unquote(headers[\"Auth-Pass\"])\n- password = raw_password.encode(\"iso8859-1\").decode(\"utf8\")\n- ip = urllib.parse.unquote(headers[\"Client-Ip\"])\n service_port = int(urllib.parse.unquote(headers[\"Auth-Port\"]))\n if service_port == 25:\n return {\n@@ -88,20 +78,33 @@\n \"Auth-Error-Code\": \"502 5.5.1\",\n \"Auth-Wait\": 0\n }\n- user = models.User.query.get(user_email)\n- if check_credentials(user, password, ip, protocol):\n- return {\n- \"Auth-Status\": \"OK\",\n- \"Auth-Server\": server,\n- \"Auth-Port\": port\n- }\n+ # According to RFC2616 section 3.7.1 and PEP 3333, HTTP headers should\n+ # be ASCII and are generally considered ISO8859-1. However when passing\n+ # the password, nginx does not transcode the input UTF string, thus\n+ # we need to manually decode.\n+ raw_user_email = urllib.parse.unquote(headers[\"Auth-User\"])\n+ raw_password = urllib.parse.unquote(headers[\"Auth-Pass\"])\n+ try:\n+ user_email = raw_user_email.encode(\"iso8859-1\").decode(\"utf8\")\n+ password = raw_password.encode(\"iso8859-1\").decode(\"utf8\")\n+ except:\n+ app.logger.warn(f'Received undecodable user/password from nginx: {raw_user_email!r}/{raw_password!r}')\n else:\n- status, code = get_status(protocol, \"authentication\")\n- return {\n- \"Auth-Status\": status,\n- \"Auth-Error-Code\": code,\n- \"Auth-Wait\": 0\n- }\n+ user = models.User.query.get(user_email)\n+ ip = urllib.parse.unquote(headers[\"Client-Ip\"])\n+ if check_credentials(user, password, ip, protocol):\n+ server, port = get_server(headers[\"Auth-Protocol\"], True)\n+ return {\n+ \"Auth-Status\": \"OK\",\n+ \"Auth-Server\": server,\n+ \"Auth-Port\": port\n+ }\n+ status, code = get_status(protocol, \"authentication\")\n+ return {\n+ \"Auth-Status\": status,\n+ \"Auth-Error-Code\": code,\n+ \"Auth-Wait\": 0\n+ }\n # Unexpected\n return {}\n", "issue": "Insufficient input validation in /internal endpoint\nIt seems like it is possible to crash a thread of flask by sending an invalid utf-8 username.\r\nAs far as i can see, this can not be used to gain access (should be validated).\r\n\r\nHere are logs to reproduce this.\r\n\r\n[error] 56#56: *1312970 auth http server 127.0.0.1:8000 did not send server or port while in http auth state, client: xx.xx.xx.xx, server: 0.0.0.0:25, login: \"xxxxxxxxx\\[email protected]\"\r\n\r\nGET /internal/auth/email HTTP/1.0\" 500 290 \"-\" \"-\"\r\n\r\nERROR in app: Exception on /internal/auth/email [GET]\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.7/site-packages/flask/app.py\", line 2292, in wsgi_app\r\n response = self.full_dispatch_request()\r\n File \"/usr/lib/python3.7/site-packages/flask/app.py\", line 1815, in full_dispatch_request\r\n rv = self.handle_user_exception(e)\r\n File \"/usr/lib/python3.7/site-packages/flask/app.py\", line 1718, in 
handle_user_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/usr/lib/python3.7/site-packages/flask/_compat.py\", line 35, in reraise\r\n raise value\r\n File \"/usr/lib/python3.7/site-packages/flask/app.py\", line 1813, in full_dispatch_request\r\n rv = self.dispatch_request()\r\n File \"/usr/lib/python3.7/site-packages/flask/app.py\", line 1799, in dispatch_request\r\n return self.view_functions[rule.endpoint](**req.view_args)\r\n File \"/usr/lib/python3.7/site-packages/flask_limiter/extension.py\", line 544, in __inner\r\n return obj(*a, **k)\r\n File \"/app/mailu/internal/views/auth.py\", line 18, in nginx_authentication\r\n headers = nginx.handle_authentication(flask.request.headers)\r\n File \"/app/mailu/internal/nginx.py\", line 45, in handle_authentication\r\n user_email = raw_user_email.encode(\"iso8859-1\").decode(\"utf8\")\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xed in position 6: invalid continuation byte\r\n\r\nxx.xx.xx.xx - - [xx/xx/xxxx:xx:xx:xx +0000] \"GET /internal/auth/email HTTP/1.0\" 500 290 \"-\" \"-\"\r\n\n", "before_files": [{"content": "from mailu import models\nfrom flask import current_app as app\n\nimport re\nimport urllib\nimport ipaddress\nimport socket\nimport tenacity\n\nSUPPORTED_AUTH_METHODS = [\"none\", \"plain\"]\n\n\nSTATUSES = {\n \"authentication\": (\"Authentication credentials invalid\", {\n \"imap\": \"AUTHENTICATIONFAILED\",\n \"smtp\": \"535 5.7.8\",\n \"pop3\": \"-ERR Authentication failed\"\n }),\n \"encryption\": (\"Must issue a STARTTLS command first\", {\n \"smtp\": \"530 5.7.0\"\n }),\n}\n\ndef check_credentials(user, password, ip, protocol=None):\n if not user or not user.enabled or (protocol == \"imap\" and not user.enable_imap) or (protocol == \"pop3\" and not user.enable_pop):\n return False\n is_ok = False\n # webmails\n if len(password) == 64 and ip == app.config['WEBMAIL_ADDRESS']:\n if user.verify_temp_token(password):\n is_ok = True\n # All tokens are 32 characters hex lowercase\n if not is_ok and len(password) == 32:\n for token in user.tokens:\n if (token.check_password(password) and\n (not token.ip or token.ip == ip)):\n is_ok = True\n break\n if not is_ok and user.check_password(password):\n is_ok = True\n return is_ok\n\ndef handle_authentication(headers):\n \"\"\" Handle an HTTP nginx authentication request\n See: http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html#protocol\n \"\"\"\n method = headers[\"Auth-Method\"]\n protocol = headers[\"Auth-Protocol\"]\n # Incoming mail, no authentication\n if method == \"none\" and protocol == \"smtp\":\n server, port = get_server(protocol, False)\n if app.config[\"INBOUND_TLS_ENFORCE\"]:\n if \"Auth-SSL\" in headers and headers[\"Auth-SSL\"] == \"on\":\n return {\n \"Auth-Status\": \"OK\",\n \"Auth-Server\": server,\n \"Auth-Port\": port\n }\n else:\n status, code = get_status(protocol, \"encryption\")\n return {\n \"Auth-Status\": status,\n \"Auth-Error-Code\" : code,\n \"Auth-Wait\": 0\n }\n else:\n return {\n \"Auth-Status\": \"OK\",\n \"Auth-Server\": server,\n \"Auth-Port\": port\n }\n # Authenticated user\n elif method == \"plain\":\n server, port = get_server(headers[\"Auth-Protocol\"], True)\n # According to RFC2616 section 3.7.1 and PEP 3333, HTTP headers should\n # be ASCII and are generally considered ISO8859-1. 
However when passing\n # the password, nginx does not transcode the input UTF string, thus\n # we need to manually decode.\n raw_user_email = urllib.parse.unquote(headers[\"Auth-User\"])\n user_email = raw_user_email.encode(\"iso8859-1\").decode(\"utf8\")\n raw_password = urllib.parse.unquote(headers[\"Auth-Pass\"])\n password = raw_password.encode(\"iso8859-1\").decode(\"utf8\")\n ip = urllib.parse.unquote(headers[\"Client-Ip\"])\n service_port = int(urllib.parse.unquote(headers[\"Auth-Port\"]))\n if service_port == 25:\n return {\n \"Auth-Status\": \"AUTH not supported\",\n \"Auth-Error-Code\": \"502 5.5.1\",\n \"Auth-Wait\": 0\n }\n user = models.User.query.get(user_email)\n if check_credentials(user, password, ip, protocol):\n return {\n \"Auth-Status\": \"OK\",\n \"Auth-Server\": server,\n \"Auth-Port\": port\n }\n else:\n status, code = get_status(protocol, \"authentication\")\n return {\n \"Auth-Status\": status,\n \"Auth-Error-Code\": code,\n \"Auth-Wait\": 0\n }\n # Unexpected\n return {}\n\n\ndef get_status(protocol, status):\n \"\"\" Return the proper error code depending on the protocol\n \"\"\"\n status, codes = STATUSES[status]\n return status, codes[protocol]\n\ndef extract_host_port(host_and_port, default_port):\n host, _, port = re.match('^(.*?)(:([0-9]*))?$', host_and_port).groups()\n return host, int(port) if port else default_port\n\ndef get_server(protocol, authenticated=False):\n if protocol == \"imap\":\n hostname, port = extract_host_port(app.config['IMAP_ADDRESS'], 143)\n elif protocol == \"pop3\":\n hostname, port = extract_host_port(app.config['POP3_ADDRESS'], 110)\n elif protocol == \"smtp\":\n if authenticated:\n hostname, port = extract_host_port(app.config['AUTHSMTP_ADDRESS'], 10025)\n else:\n hostname, port = extract_host_port(app.config['SMTP_ADDRESS'], 25)\n try:\n # test if hostname is already resolved to an ip adddress\n ipaddress.ip_address(hostname)\n except:\n # hostname is not an ip address - so we need to resolve it\n hostname = resolve_hostname(hostname)\n return hostname, port\n\[email protected](stop=tenacity.stop_after_attempt(100),\n wait=tenacity.wait_random(min=2, max=5))\ndef resolve_hostname(hostname):\n \"\"\" This function uses system DNS to resolve a hostname.\n It is capable of retrying in case the host is not immediately available\n \"\"\"\n return socket.gethostbyname(hostname)\n", "path": "core/admin/mailu/internal/nginx.py"}], "after_files": [{"content": "from mailu import models\nfrom flask import current_app as app\n\nimport re\nimport urllib\nimport ipaddress\nimport socket\nimport tenacity\n\nSUPPORTED_AUTH_METHODS = [\"none\", \"plain\"]\n\n\nSTATUSES = {\n \"authentication\": (\"Authentication credentials invalid\", {\n \"imap\": \"AUTHENTICATIONFAILED\",\n \"smtp\": \"535 5.7.8\",\n \"pop3\": \"-ERR Authentication failed\"\n }),\n \"encryption\": (\"Must issue a STARTTLS command first\", {\n \"smtp\": \"530 5.7.0\"\n }),\n}\n\ndef check_credentials(user, password, ip, protocol=None):\n if not user or not user.enabled or (protocol == \"imap\" and not user.enable_imap) or (protocol == \"pop3\" and not user.enable_pop):\n return False\n is_ok = False\n # webmails\n if len(password) == 64 and ip == app.config['WEBMAIL_ADDRESS']:\n if user.verify_temp_token(password):\n is_ok = True\n # All tokens are 32 characters hex lowercase\n if not is_ok and len(password) == 32:\n for token in user.tokens:\n if (token.check_password(password) and\n (not token.ip or token.ip == ip)):\n is_ok = True\n break\n if not is_ok and 
user.check_password(password):\n is_ok = True\n return is_ok\n\ndef handle_authentication(headers):\n \"\"\" Handle an HTTP nginx authentication request\n See: http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html#protocol\n \"\"\"\n method = headers[\"Auth-Method\"]\n protocol = headers[\"Auth-Protocol\"]\n # Incoming mail, no authentication\n if method == \"none\" and protocol == \"smtp\":\n server, port = get_server(protocol, False)\n if app.config[\"INBOUND_TLS_ENFORCE\"]:\n if \"Auth-SSL\" in headers and headers[\"Auth-SSL\"] == \"on\":\n return {\n \"Auth-Status\": \"OK\",\n \"Auth-Server\": server,\n \"Auth-Port\": port\n }\n else:\n status, code = get_status(protocol, \"encryption\")\n return {\n \"Auth-Status\": status,\n \"Auth-Error-Code\" : code,\n \"Auth-Wait\": 0\n }\n else:\n return {\n \"Auth-Status\": \"OK\",\n \"Auth-Server\": server,\n \"Auth-Port\": port\n }\n # Authenticated user\n elif method == \"plain\":\n service_port = int(urllib.parse.unquote(headers[\"Auth-Port\"]))\n if service_port == 25:\n return {\n \"Auth-Status\": \"AUTH not supported\",\n \"Auth-Error-Code\": \"502 5.5.1\",\n \"Auth-Wait\": 0\n }\n # According to RFC2616 section 3.7.1 and PEP 3333, HTTP headers should\n # be ASCII and are generally considered ISO8859-1. However when passing\n # the password, nginx does not transcode the input UTF string, thus\n # we need to manually decode.\n raw_user_email = urllib.parse.unquote(headers[\"Auth-User\"])\n raw_password = urllib.parse.unquote(headers[\"Auth-Pass\"])\n try:\n user_email = raw_user_email.encode(\"iso8859-1\").decode(\"utf8\")\n password = raw_password.encode(\"iso8859-1\").decode(\"utf8\")\n except:\n app.logger.warn(f'Received undecodable user/password from nginx: {raw_user_email!r}/{raw_password!r}')\n else:\n user = models.User.query.get(user_email)\n ip = urllib.parse.unquote(headers[\"Client-Ip\"])\n if check_credentials(user, password, ip, protocol):\n server, port = get_server(headers[\"Auth-Protocol\"], True)\n return {\n \"Auth-Status\": \"OK\",\n \"Auth-Server\": server,\n \"Auth-Port\": port\n }\n status, code = get_status(protocol, \"authentication\")\n return {\n \"Auth-Status\": status,\n \"Auth-Error-Code\": code,\n \"Auth-Wait\": 0\n }\n # Unexpected\n return {}\n\n\ndef get_status(protocol, status):\n \"\"\" Return the proper error code depending on the protocol\n \"\"\"\n status, codes = STATUSES[status]\n return status, codes[protocol]\n\ndef extract_host_port(host_and_port, default_port):\n host, _, port = re.match('^(.*?)(:([0-9]*))?$', host_and_port).groups()\n return host, int(port) if port else default_port\n\ndef get_server(protocol, authenticated=False):\n if protocol == \"imap\":\n hostname, port = extract_host_port(app.config['IMAP_ADDRESS'], 143)\n elif protocol == \"pop3\":\n hostname, port = extract_host_port(app.config['POP3_ADDRESS'], 110)\n elif protocol == \"smtp\":\n if authenticated:\n hostname, port = extract_host_port(app.config['AUTHSMTP_ADDRESS'], 10025)\n else:\n hostname, port = extract_host_port(app.config['SMTP_ADDRESS'], 25)\n try:\n # test if hostname is already resolved to an ip adddress\n ipaddress.ip_address(hostname)\n except:\n # hostname is not an ip address - so we need to resolve it\n hostname = resolve_hostname(hostname)\n return hostname, port\n\[email protected](stop=tenacity.stop_after_attempt(100),\n wait=tenacity.wait_random(min=2, max=5))\ndef resolve_hostname(hostname):\n \"\"\" This function uses system DNS to resolve a hostname.\n It is capable of retrying in case the 
host is not immediately available\n \"\"\"\n return socket.gethostbyname(hostname)\n", "path": "core/admin/mailu/internal/nginx.py"}]} | 2,417 | 770 |
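The golden diff above narrows the failure mode described in the issue: the ISO-8859-1 → UTF-8 re-decode of the `Auth-User` / `Auth-Pass` headers is wrapped in a `try`/`except`, so undecodable input is logged and rejected instead of raising out of the request handler, and the user lookup plus credential check only run when decoding succeeded. A minimal standalone sketch of that decoding guard (the helper name and return convention are illustrative assumptions, not Mailu's actual API):

```python
import urllib.parse


def decode_nginx_credentials(headers):
    """Percent-decode and re-decode the nginx auth headers, as in the patch above.

    nginx forwards the raw UTF-8 credential bytes through ISO-8859-1 HTTP
    headers, so the values are unquoted and then re-interpreted as UTF-8;
    invalid byte sequences yield None instead of an unhandled exception.
    """
    raw_user = urllib.parse.unquote(headers["Auth-User"])
    raw_password = urllib.parse.unquote(headers["Auth-Pass"])
    try:
        user_email = raw_user.encode("iso8859-1").decode("utf8")
        password = raw_password.encode("iso8859-1").decode("utf8")
    except (UnicodeEncodeError, UnicodeDecodeError):
        return None  # undecodable input -> caller answers with an auth failure
    return user_email, password
```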
gh_patches_debug_27115 | rasdani/github-patches | git_diff | nonebot__nonebot2-135 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: `temp matcher`s that have not yet run are removed by mistake
**Describe the problem:**
When handling an event, `temp matcher`s that have never been executed are incorrectly removed.
**Environment information:**
- OS: any
- Python Version: any
- Nonebot Version: 2.0.0a8
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nonebot/message.py`
Content:
```
1 """
2 事件处理
3 ========
4
5 NoneBot 内部处理并按优先级分发事件给所有事件响应器,提供了多个插槽以进行事件的预处理等。
6 """
7
8 import asyncio
9 from datetime import datetime
10 from typing import Set, Type, Optional, Iterable, TYPE_CHECKING
11
12 from nonebot.log import logger
13 from nonebot.rule import TrieRule
14 from nonebot.matcher import matchers, Matcher
15 from nonebot.exception import IgnoredException, StopPropagation, NoLogException
16 from nonebot.typing import T_State, T_EventPreProcessor, T_RunPreProcessor, T_EventPostProcessor, T_RunPostProcessor
17
18 if TYPE_CHECKING:
19 from nonebot.adapters import Bot, Event
20
21 _event_preprocessors: Set[T_EventPreProcessor] = set()
22 _event_postprocessors: Set[T_EventPostProcessor] = set()
23 _run_preprocessors: Set[T_RunPreProcessor] = set()
24 _run_postprocessors: Set[T_RunPostProcessor] = set()
25
26
27 def event_preprocessor(func: T_EventPreProcessor) -> T_EventPreProcessor:
28 """
29 :说明:
30
31 事件预处理。装饰一个函数,使它在每次接收到事件并分发给各响应器之前执行。
32
33 :参数:
34
35 事件预处理函数接收三个参数。
36
37 * ``bot: Bot``: Bot 对象
38 * ``event: Event``: Event 对象
39 * ``state: T_State``: 当前 State
40 """
41 _event_preprocessors.add(func)
42 return func
43
44
45 def event_postprocessor(func: T_EventPostProcessor) -> T_EventPostProcessor:
46 """
47 :说明:
48
49 事件后处理。装饰一个函数,使它在每次接收到事件并分发给各响应器之后执行。
50
51 :参数:
52
53 事件后处理函数接收三个参数。
54
55 * ``bot: Bot``: Bot 对象
56 * ``event: Event``: Event 对象
57 * ``state: T_State``: 当前事件运行前 State
58 """
59 _event_postprocessors.add(func)
60 return func
61
62
63 def run_preprocessor(func: T_RunPreProcessor) -> T_RunPreProcessor:
64 """
65 :说明:
66
67 运行预处理。装饰一个函数,使它在每次事件响应器运行前执行。
68
69 :参数:
70
71 运行预处理函数接收四个参数。
72
73 * ``matcher: Matcher``: 当前要运行的事件响应器
74 * ``bot: Bot``: Bot 对象
75 * ``event: Event``: Event 对象
76 * ``state: T_State``: 当前 State
77 """
78 _run_preprocessors.add(func)
79 return func
80
81
82 def run_postprocessor(func: T_RunPostProcessor) -> T_RunPostProcessor:
83 """
84 :说明:
85
86 运行后处理。装饰一个函数,使它在每次事件响应器运行后执行。
87
88 :参数:
89
90 运行后处理函数接收五个参数。
91
92 * ``matcher: Matcher``: 运行完毕的事件响应器
93 * ``exception: Optional[Exception]``: 事件响应器运行错误(如果存在)
94 * ``bot: Bot``: Bot 对象
95 * ``event: Event``: Event 对象
96 * ``state: T_State``: 当前 State
97 """
98 _run_postprocessors.add(func)
99 return func
100
101
102 async def _check_matcher(priority: int, bot: "Bot", event: "Event",
103 state: T_State) -> Iterable[Type[Matcher]]:
104 current_matchers = matchers[priority].copy()
105
106 async def _check(Matcher: Type[Matcher], bot: "Bot", event: "Event",
107 state: T_State) -> Optional[Type[Matcher]]:
108 try:
109 if (not Matcher.expire_time or datetime.now() <= Matcher.expire_time
110 ) and await Matcher.check_perm(
111 bot, event) and await Matcher.check_rule(bot, event, state):
112 return Matcher
113 except Exception as e:
114 logger.opt(colors=True, exception=e).error(
115 f"<r><bg #f8bbd0>Rule check failed for {Matcher}.</bg #f8bbd0></r>"
116 )
117 return None
118
119 async def _check_expire(Matcher: Type[Matcher]) -> Optional[Type[Matcher]]:
120 if Matcher.temp or (Matcher.expire_time and
121 datetime.now() > Matcher.expire_time):
122 return Matcher
123 return None
124
125 checking_tasks = [
126 _check(Matcher, bot, event, state) for Matcher in current_matchers
127 ]
128 checking_expire_tasks = [
129 _check_expire(Matcher) for Matcher in current_matchers
130 ]
131 results = await asyncio.gather(*checking_tasks, return_exceptions=True)
132 expired = await asyncio.gather(*checking_expire_tasks)
133 for expired_matcher in filter(lambda x: x, expired):
134 try:
135 matchers[priority].remove(expired_matcher) # type: ignore
136 except Exception:
137 pass
138 return filter(lambda x: x, results)
139
140
141 async def _run_matcher(Matcher: Type[Matcher], bot: "Bot", event: "Event",
142 state: T_State) -> None:
143 logger.info(f"Event will be handled by {Matcher}")
144
145 matcher = Matcher()
146
147 coros = list(
148 map(lambda x: x(matcher, bot, event, state), _run_preprocessors))
149 if coros:
150 try:
151 await asyncio.gather(*coros)
152 except IgnoredException:
153 logger.opt(colors=True).info(
154 f"Matcher {matcher} running is <b>cancelled</b>")
155 return
156 except Exception as e:
157 logger.opt(colors=True, exception=e).error(
158 "<r><bg #f8bbd0>Error when running RunPreProcessors. "
159 "Running cancelled!</bg #f8bbd0></r>")
160 return
161
162 exception = None
163
164 try:
165 logger.debug(f"Running matcher {matcher}")
166 await matcher.run(bot, event, state)
167 except Exception as e:
168 logger.opt(colors=True, exception=e).error(
169 f"<r><bg #f8bbd0>Running matcher {matcher} failed.</bg #f8bbd0></r>"
170 )
171 exception = e
172
173 coros = list(
174 map(lambda x: x(matcher, exception, bot, event, state),
175 _run_postprocessors))
176 if coros:
177 try:
178 await asyncio.gather(*coros)
179 except Exception as e:
180 logger.opt(colors=True, exception=e).error(
181 "<r><bg #f8bbd0>Error when running RunPostProcessors</bg #f8bbd0></r>"
182 )
183
184 if matcher.block:
185 raise StopPropagation
186 return
187
188
189 async def handle_event(bot: "Bot", event: "Event"):
190 """
191 :说明:
192
193 处理一个事件。调用该函数以实现分发事件。
194
195 :参数:
196
197 * ``bot: Bot``: Bot 对象
198 * ``event: Event``: Event 对象
199
200 :示例:
201
202 .. code-block:: python
203
204 import asyncio
205 asyncio.create_task(handle_event(bot, event))
206 """
207 show_log = True
208 log_msg = f"<m>{bot.type.upper()} {bot.self_id}</m> | "
209 try:
210 log_msg += event.get_log_string()
211 except NoLogException:
212 show_log = False
213 if show_log:
214 logger.opt(colors=True).info(log_msg)
215
216 state = {}
217 coros = list(map(lambda x: x(bot, event, state), _event_preprocessors))
218 if coros:
219 try:
220 logger.debug("Running PreProcessors...")
221 await asyncio.gather(*coros)
222 except IgnoredException:
223 logger.opt(colors=True).info(
224 f"Event {event.get_event_name()} is <b>ignored</b>")
225 return
226 except Exception as e:
227 logger.opt(colors=True, exception=e).error(
228 "<r><bg #f8bbd0>Error when running EventPreProcessors. "
229 "Event ignored!</bg #f8bbd0></r>")
230 return
231
232 # Trie Match
233 _, _ = TrieRule.get_value(bot, event, state)
234
235 break_flag = False
236 for priority in sorted(matchers.keys()):
237 if break_flag:
238 break
239
240 if show_log:
241 logger.debug(f"Checking for matchers in priority {priority}...")
242
243 run_matchers = await _check_matcher(priority, bot, event, state)
244
245 pending_tasks = [
246 _run_matcher(matcher, bot, event, state.copy())
247 for matcher in run_matchers
248 ]
249
250 results = await asyncio.gather(*pending_tasks, return_exceptions=True)
251
252 for result in results:
253 if isinstance(result, StopPropagation):
254 if not break_flag:
255 break_flag = True
256 logger.debug("Stop event propagation")
257
258 coros = list(map(lambda x: x(bot, event, state), _event_postprocessors))
259 if coros:
260 try:
261 logger.debug("Running PostProcessors...")
262 await asyncio.gather(*coros)
263 except Exception as e:
264 logger.opt(colors=True, exception=e).error(
265 "<r><bg #f8bbd0>Error when running EventPostProcessors</bg #f8bbd0></r>"
266 )
267
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nonebot/message.py b/nonebot/message.py
--- a/nonebot/message.py
+++ b/nonebot/message.py
@@ -117,8 +117,7 @@
return None
async def _check_expire(Matcher: Type[Matcher]) -> Optional[Type[Matcher]]:
- if Matcher.temp or (Matcher.expire_time and
- datetime.now() > Matcher.expire_time):
+ if Matcher.expire_time and datetime.now() > Matcher.expire_time:
return Matcher
return None
@@ -128,14 +127,19 @@
checking_expire_tasks = [
_check_expire(Matcher) for Matcher in current_matchers
]
- results = await asyncio.gather(*checking_tasks, return_exceptions=True)
+ results = await asyncio.gather(*checking_tasks)
expired = await asyncio.gather(*checking_expire_tasks)
for expired_matcher in filter(lambda x: x, expired):
try:
matchers[priority].remove(expired_matcher) # type: ignore
except Exception:
pass
- return filter(lambda x: x, results)
+ for temp_matcher in filter(lambda x: x and x.temp, results):
+ try:
+ matchers[priority].remove(temp_matcher) # type: ignore
+ except Exception:
+ pass
+ return filter(lambda x: x, results) # type: ignore
async def _run_matcher(Matcher: Type[Matcher], bot: "Bot", event: "Event",
| {"golden_diff": "diff --git a/nonebot/message.py b/nonebot/message.py\n--- a/nonebot/message.py\n+++ b/nonebot/message.py\n@@ -117,8 +117,7 @@\n return None\n \n async def _check_expire(Matcher: Type[Matcher]) -> Optional[Type[Matcher]]:\n- if Matcher.temp or (Matcher.expire_time and\n- datetime.now() > Matcher.expire_time):\n+ if Matcher.expire_time and datetime.now() > Matcher.expire_time:\n return Matcher\n return None\n \n@@ -128,14 +127,19 @@\n checking_expire_tasks = [\n _check_expire(Matcher) for Matcher in current_matchers\n ]\n- results = await asyncio.gather(*checking_tasks, return_exceptions=True)\n+ results = await asyncio.gather(*checking_tasks)\n expired = await asyncio.gather(*checking_expire_tasks)\n for expired_matcher in filter(lambda x: x, expired):\n try:\n matchers[priority].remove(expired_matcher) # type: ignore\n except Exception:\n pass\n- return filter(lambda x: x, results)\n+ for temp_matcher in filter(lambda x: x and x.temp, results):\n+ try:\n+ matchers[priority].remove(temp_matcher) # type: ignore\n+ except Exception:\n+ pass\n+ return filter(lambda x: x, results) # type: ignore\n \n \n async def _run_matcher(Matcher: Type[Matcher], bot: \"Bot\", event: \"Event\",\n", "issue": "Bug: \u9519\u8bef\u5220\u9664\u672a\u6267\u884c\u8fc7\u7684`temp matcher`\n**\u63cf\u8ff0\u95ee\u9898\uff1a**\r\n\r\n\u5904\u7406\u4e8b\u4ef6\u65f6\u9519\u8bef\u5730\u5220\u9664\u4e86\u672a\u6267\u884c\u8fc7\u7684 `temp matcher`\r\n\r\n**\u73af\u5883\u4fe1\u606f\uff1a**\r\n\r\n - OS: any\r\n - Python Version: any\r\n - Nonebot Version: 2.0.0a8\r\n\n", "before_files": [{"content": "\"\"\"\n\u4e8b\u4ef6\u5904\u7406\n========\n\nNoneBot \u5185\u90e8\u5904\u7406\u5e76\u6309\u4f18\u5148\u7ea7\u5206\u53d1\u4e8b\u4ef6\u7ed9\u6240\u6709\u4e8b\u4ef6\u54cd\u5e94\u5668\uff0c\u63d0\u4f9b\u4e86\u591a\u4e2a\u63d2\u69fd\u4ee5\u8fdb\u884c\u4e8b\u4ef6\u7684\u9884\u5904\u7406\u7b49\u3002\n\"\"\"\n\nimport asyncio\nfrom datetime import datetime\nfrom typing import Set, Type, Optional, Iterable, TYPE_CHECKING\n\nfrom nonebot.log import logger\nfrom nonebot.rule import TrieRule\nfrom nonebot.matcher import matchers, Matcher\nfrom nonebot.exception import IgnoredException, StopPropagation, NoLogException\nfrom nonebot.typing import T_State, T_EventPreProcessor, T_RunPreProcessor, T_EventPostProcessor, T_RunPostProcessor\n\nif TYPE_CHECKING:\n from nonebot.adapters import Bot, Event\n\n_event_preprocessors: Set[T_EventPreProcessor] = set()\n_event_postprocessors: Set[T_EventPostProcessor] = set()\n_run_preprocessors: Set[T_RunPreProcessor] = set()\n_run_postprocessors: Set[T_RunPostProcessor] = set()\n\n\ndef event_preprocessor(func: T_EventPreProcessor) -> T_EventPreProcessor:\n \"\"\"\n :\u8bf4\u660e:\n\n \u4e8b\u4ef6\u9884\u5904\u7406\u3002\u88c5\u9970\u4e00\u4e2a\u51fd\u6570\uff0c\u4f7f\u5b83\u5728\u6bcf\u6b21\u63a5\u6536\u5230\u4e8b\u4ef6\u5e76\u5206\u53d1\u7ed9\u5404\u54cd\u5e94\u5668\u4e4b\u524d\u6267\u884c\u3002\n\n :\u53c2\u6570:\n\n \u4e8b\u4ef6\u9884\u5904\u7406\u51fd\u6570\u63a5\u6536\u4e09\u4e2a\u53c2\u6570\u3002\n\n * ``bot: Bot``: Bot \u5bf9\u8c61\n * ``event: Event``: Event \u5bf9\u8c61\n * ``state: T_State``: \u5f53\u524d State\n \"\"\"\n _event_preprocessors.add(func)\n return func\n\n\ndef event_postprocessor(func: T_EventPostProcessor) -> T_EventPostProcessor:\n \"\"\"\n :\u8bf4\u660e:\n\n 
\u4e8b\u4ef6\u540e\u5904\u7406\u3002\u88c5\u9970\u4e00\u4e2a\u51fd\u6570\uff0c\u4f7f\u5b83\u5728\u6bcf\u6b21\u63a5\u6536\u5230\u4e8b\u4ef6\u5e76\u5206\u53d1\u7ed9\u5404\u54cd\u5e94\u5668\u4e4b\u540e\u6267\u884c\u3002\n\n :\u53c2\u6570:\n\n \u4e8b\u4ef6\u540e\u5904\u7406\u51fd\u6570\u63a5\u6536\u4e09\u4e2a\u53c2\u6570\u3002\n\n * ``bot: Bot``: Bot \u5bf9\u8c61\n * ``event: Event``: Event \u5bf9\u8c61\n * ``state: T_State``: \u5f53\u524d\u4e8b\u4ef6\u8fd0\u884c\u524d State\n \"\"\"\n _event_postprocessors.add(func)\n return func\n\n\ndef run_preprocessor(func: T_RunPreProcessor) -> T_RunPreProcessor:\n \"\"\"\n :\u8bf4\u660e:\n\n \u8fd0\u884c\u9884\u5904\u7406\u3002\u88c5\u9970\u4e00\u4e2a\u51fd\u6570\uff0c\u4f7f\u5b83\u5728\u6bcf\u6b21\u4e8b\u4ef6\u54cd\u5e94\u5668\u8fd0\u884c\u524d\u6267\u884c\u3002\n\n :\u53c2\u6570:\n\n \u8fd0\u884c\u9884\u5904\u7406\u51fd\u6570\u63a5\u6536\u56db\u4e2a\u53c2\u6570\u3002\n\n * ``matcher: Matcher``: \u5f53\u524d\u8981\u8fd0\u884c\u7684\u4e8b\u4ef6\u54cd\u5e94\u5668\n * ``bot: Bot``: Bot \u5bf9\u8c61\n * ``event: Event``: Event \u5bf9\u8c61\n * ``state: T_State``: \u5f53\u524d State\n \"\"\"\n _run_preprocessors.add(func)\n return func\n\n\ndef run_postprocessor(func: T_RunPostProcessor) -> T_RunPostProcessor:\n \"\"\"\n :\u8bf4\u660e:\n\n \u8fd0\u884c\u540e\u5904\u7406\u3002\u88c5\u9970\u4e00\u4e2a\u51fd\u6570\uff0c\u4f7f\u5b83\u5728\u6bcf\u6b21\u4e8b\u4ef6\u54cd\u5e94\u5668\u8fd0\u884c\u540e\u6267\u884c\u3002\n\n :\u53c2\u6570:\n\n \u8fd0\u884c\u540e\u5904\u7406\u51fd\u6570\u63a5\u6536\u4e94\u4e2a\u53c2\u6570\u3002\n\n * ``matcher: Matcher``: \u8fd0\u884c\u5b8c\u6bd5\u7684\u4e8b\u4ef6\u54cd\u5e94\u5668\n * ``exception: Optional[Exception]``: \u4e8b\u4ef6\u54cd\u5e94\u5668\u8fd0\u884c\u9519\u8bef\uff08\u5982\u679c\u5b58\u5728\uff09\n * ``bot: Bot``: Bot \u5bf9\u8c61\n * ``event: Event``: Event \u5bf9\u8c61\n * ``state: T_State``: \u5f53\u524d State\n \"\"\"\n _run_postprocessors.add(func)\n return func\n\n\nasync def _check_matcher(priority: int, bot: \"Bot\", event: \"Event\",\n state: T_State) -> Iterable[Type[Matcher]]:\n current_matchers = matchers[priority].copy()\n\n async def _check(Matcher: Type[Matcher], bot: \"Bot\", event: \"Event\",\n state: T_State) -> Optional[Type[Matcher]]:\n try:\n if (not Matcher.expire_time or datetime.now() <= Matcher.expire_time\n ) and await Matcher.check_perm(\n bot, event) and await Matcher.check_rule(bot, event, state):\n return Matcher\n except Exception as e:\n logger.opt(colors=True, exception=e).error(\n f\"<r><bg #f8bbd0>Rule check failed for {Matcher}.</bg #f8bbd0></r>\"\n )\n return None\n\n async def _check_expire(Matcher: Type[Matcher]) -> Optional[Type[Matcher]]:\n if Matcher.temp or (Matcher.expire_time and\n datetime.now() > Matcher.expire_time):\n return Matcher\n return None\n\n checking_tasks = [\n _check(Matcher, bot, event, state) for Matcher in current_matchers\n ]\n checking_expire_tasks = [\n _check_expire(Matcher) for Matcher in current_matchers\n ]\n results = await asyncio.gather(*checking_tasks, return_exceptions=True)\n expired = await asyncio.gather(*checking_expire_tasks)\n for expired_matcher in filter(lambda x: x, expired):\n try:\n matchers[priority].remove(expired_matcher) # type: ignore\n except Exception:\n pass\n return filter(lambda x: x, results)\n\n\nasync def _run_matcher(Matcher: Type[Matcher], bot: \"Bot\", event: \"Event\",\n state: T_State) -> None:\n logger.info(f\"Event will be handled by {Matcher}\")\n\n matcher = Matcher()\n\n coros = list(\n map(lambda x: x(matcher, 
bot, event, state), _run_preprocessors))\n if coros:\n try:\n await asyncio.gather(*coros)\n except IgnoredException:\n logger.opt(colors=True).info(\n f\"Matcher {matcher} running is <b>cancelled</b>\")\n return\n except Exception as e:\n logger.opt(colors=True, exception=e).error(\n \"<r><bg #f8bbd0>Error when running RunPreProcessors. \"\n \"Running cancelled!</bg #f8bbd0></r>\")\n return\n\n exception = None\n\n try:\n logger.debug(f\"Running matcher {matcher}\")\n await matcher.run(bot, event, state)\n except Exception as e:\n logger.opt(colors=True, exception=e).error(\n f\"<r><bg #f8bbd0>Running matcher {matcher} failed.</bg #f8bbd0></r>\"\n )\n exception = e\n\n coros = list(\n map(lambda x: x(matcher, exception, bot, event, state),\n _run_postprocessors))\n if coros:\n try:\n await asyncio.gather(*coros)\n except Exception as e:\n logger.opt(colors=True, exception=e).error(\n \"<r><bg #f8bbd0>Error when running RunPostProcessors</bg #f8bbd0></r>\"\n )\n\n if matcher.block:\n raise StopPropagation\n return\n\n\nasync def handle_event(bot: \"Bot\", event: \"Event\"):\n \"\"\"\n :\u8bf4\u660e:\n\n \u5904\u7406\u4e00\u4e2a\u4e8b\u4ef6\u3002\u8c03\u7528\u8be5\u51fd\u6570\u4ee5\u5b9e\u73b0\u5206\u53d1\u4e8b\u4ef6\u3002\n\n :\u53c2\u6570:\n\n * ``bot: Bot``: Bot \u5bf9\u8c61\n * ``event: Event``: Event \u5bf9\u8c61\n\n :\u793a\u4f8b:\n\n .. code-block:: python\n\n import asyncio\n asyncio.create_task(handle_event(bot, event))\n \"\"\"\n show_log = True\n log_msg = f\"<m>{bot.type.upper()} {bot.self_id}</m> | \"\n try:\n log_msg += event.get_log_string()\n except NoLogException:\n show_log = False\n if show_log:\n logger.opt(colors=True).info(log_msg)\n\n state = {}\n coros = list(map(lambda x: x(bot, event, state), _event_preprocessors))\n if coros:\n try:\n logger.debug(\"Running PreProcessors...\")\n await asyncio.gather(*coros)\n except IgnoredException:\n logger.opt(colors=True).info(\n f\"Event {event.get_event_name()} is <b>ignored</b>\")\n return\n except Exception as e:\n logger.opt(colors=True, exception=e).error(\n \"<r><bg #f8bbd0>Error when running EventPreProcessors. 
\"\n \"Event ignored!</bg #f8bbd0></r>\")\n return\n\n # Trie Match\n _, _ = TrieRule.get_value(bot, event, state)\n\n break_flag = False\n for priority in sorted(matchers.keys()):\n if break_flag:\n break\n\n if show_log:\n logger.debug(f\"Checking for matchers in priority {priority}...\")\n\n run_matchers = await _check_matcher(priority, bot, event, state)\n\n pending_tasks = [\n _run_matcher(matcher, bot, event, state.copy())\n for matcher in run_matchers\n ]\n\n results = await asyncio.gather(*pending_tasks, return_exceptions=True)\n\n for result in results:\n if isinstance(result, StopPropagation):\n if not break_flag:\n break_flag = True\n logger.debug(\"Stop event propagation\")\n\n coros = list(map(lambda x: x(bot, event, state), _event_postprocessors))\n if coros:\n try:\n logger.debug(\"Running PostProcessors...\")\n await asyncio.gather(*coros)\n except Exception as e:\n logger.opt(colors=True, exception=e).error(\n \"<r><bg #f8bbd0>Error when running EventPostProcessors</bg #f8bbd0></r>\"\n )\n", "path": "nonebot/message.py"}], "after_files": [{"content": "\"\"\"\n\u4e8b\u4ef6\u5904\u7406\n========\n\nNoneBot \u5185\u90e8\u5904\u7406\u5e76\u6309\u4f18\u5148\u7ea7\u5206\u53d1\u4e8b\u4ef6\u7ed9\u6240\u6709\u4e8b\u4ef6\u54cd\u5e94\u5668\uff0c\u63d0\u4f9b\u4e86\u591a\u4e2a\u63d2\u69fd\u4ee5\u8fdb\u884c\u4e8b\u4ef6\u7684\u9884\u5904\u7406\u7b49\u3002\n\"\"\"\n\nimport asyncio\nfrom datetime import datetime\nfrom typing import Set, Type, Optional, Iterable, TYPE_CHECKING\n\nfrom nonebot.log import logger\nfrom nonebot.rule import TrieRule\nfrom nonebot.matcher import matchers, Matcher\nfrom nonebot.exception import IgnoredException, StopPropagation, NoLogException\nfrom nonebot.typing import T_State, T_EventPreProcessor, T_RunPreProcessor, T_EventPostProcessor, T_RunPostProcessor\n\nif TYPE_CHECKING:\n from nonebot.adapters import Bot, Event\n\n_event_preprocessors: Set[T_EventPreProcessor] = set()\n_event_postprocessors: Set[T_EventPostProcessor] = set()\n_run_preprocessors: Set[T_RunPreProcessor] = set()\n_run_postprocessors: Set[T_RunPostProcessor] = set()\n\n\ndef event_preprocessor(func: T_EventPreProcessor) -> T_EventPreProcessor:\n \"\"\"\n :\u8bf4\u660e:\n\n \u4e8b\u4ef6\u9884\u5904\u7406\u3002\u88c5\u9970\u4e00\u4e2a\u51fd\u6570\uff0c\u4f7f\u5b83\u5728\u6bcf\u6b21\u63a5\u6536\u5230\u4e8b\u4ef6\u5e76\u5206\u53d1\u7ed9\u5404\u54cd\u5e94\u5668\u4e4b\u524d\u6267\u884c\u3002\n\n :\u53c2\u6570:\n\n \u4e8b\u4ef6\u9884\u5904\u7406\u51fd\u6570\u63a5\u6536\u4e09\u4e2a\u53c2\u6570\u3002\n\n * ``bot: Bot``: Bot \u5bf9\u8c61\n * ``event: Event``: Event \u5bf9\u8c61\n * ``state: T_State``: \u5f53\u524d State\n \"\"\"\n _event_preprocessors.add(func)\n return func\n\n\ndef event_postprocessor(func: T_EventPostProcessor) -> T_EventPostProcessor:\n \"\"\"\n :\u8bf4\u660e:\n\n \u4e8b\u4ef6\u540e\u5904\u7406\u3002\u88c5\u9970\u4e00\u4e2a\u51fd\u6570\uff0c\u4f7f\u5b83\u5728\u6bcf\u6b21\u63a5\u6536\u5230\u4e8b\u4ef6\u5e76\u5206\u53d1\u7ed9\u5404\u54cd\u5e94\u5668\u4e4b\u540e\u6267\u884c\u3002\n\n :\u53c2\u6570:\n\n \u4e8b\u4ef6\u540e\u5904\u7406\u51fd\u6570\u63a5\u6536\u4e09\u4e2a\u53c2\u6570\u3002\n\n * ``bot: Bot``: Bot \u5bf9\u8c61\n * ``event: Event``: Event \u5bf9\u8c61\n * ``state: T_State``: \u5f53\u524d\u4e8b\u4ef6\u8fd0\u884c\u524d State\n \"\"\"\n _event_postprocessors.add(func)\n return func\n\n\ndef run_preprocessor(func: T_RunPreProcessor) -> T_RunPreProcessor:\n \"\"\"\n :\u8bf4\u660e:\n\n 
\u8fd0\u884c\u9884\u5904\u7406\u3002\u88c5\u9970\u4e00\u4e2a\u51fd\u6570\uff0c\u4f7f\u5b83\u5728\u6bcf\u6b21\u4e8b\u4ef6\u54cd\u5e94\u5668\u8fd0\u884c\u524d\u6267\u884c\u3002\n\n :\u53c2\u6570:\n\n \u8fd0\u884c\u9884\u5904\u7406\u51fd\u6570\u63a5\u6536\u56db\u4e2a\u53c2\u6570\u3002\n\n * ``matcher: Matcher``: \u5f53\u524d\u8981\u8fd0\u884c\u7684\u4e8b\u4ef6\u54cd\u5e94\u5668\n * ``bot: Bot``: Bot \u5bf9\u8c61\n * ``event: Event``: Event \u5bf9\u8c61\n * ``state: T_State``: \u5f53\u524d State\n \"\"\"\n _run_preprocessors.add(func)\n return func\n\n\ndef run_postprocessor(func: T_RunPostProcessor) -> T_RunPostProcessor:\n \"\"\"\n :\u8bf4\u660e:\n\n \u8fd0\u884c\u540e\u5904\u7406\u3002\u88c5\u9970\u4e00\u4e2a\u51fd\u6570\uff0c\u4f7f\u5b83\u5728\u6bcf\u6b21\u4e8b\u4ef6\u54cd\u5e94\u5668\u8fd0\u884c\u540e\u6267\u884c\u3002\n\n :\u53c2\u6570:\n\n \u8fd0\u884c\u540e\u5904\u7406\u51fd\u6570\u63a5\u6536\u4e94\u4e2a\u53c2\u6570\u3002\n\n * ``matcher: Matcher``: \u8fd0\u884c\u5b8c\u6bd5\u7684\u4e8b\u4ef6\u54cd\u5e94\u5668\n * ``exception: Optional[Exception]``: \u4e8b\u4ef6\u54cd\u5e94\u5668\u8fd0\u884c\u9519\u8bef\uff08\u5982\u679c\u5b58\u5728\uff09\n * ``bot: Bot``: Bot \u5bf9\u8c61\n * ``event: Event``: Event \u5bf9\u8c61\n * ``state: T_State``: \u5f53\u524d State\n \"\"\"\n _run_postprocessors.add(func)\n return func\n\n\nasync def _check_matcher(priority: int, bot: \"Bot\", event: \"Event\",\n state: T_State) -> Iterable[Type[Matcher]]:\n current_matchers = matchers[priority].copy()\n\n async def _check(Matcher: Type[Matcher], bot: \"Bot\", event: \"Event\",\n state: T_State) -> Optional[Type[Matcher]]:\n try:\n if (not Matcher.expire_time or datetime.now() <= Matcher.expire_time\n ) and await Matcher.check_perm(\n bot, event) and await Matcher.check_rule(bot, event, state):\n return Matcher\n except Exception as e:\n logger.opt(colors=True, exception=e).error(\n f\"<r><bg #f8bbd0>Rule check failed for {Matcher}.</bg #f8bbd0></r>\"\n )\n return None\n\n async def _check_expire(Matcher: Type[Matcher]) -> Optional[Type[Matcher]]:\n if Matcher.expire_time and datetime.now() > Matcher.expire_time:\n return Matcher\n return None\n\n checking_tasks = [\n _check(Matcher, bot, event, state) for Matcher in current_matchers\n ]\n checking_expire_tasks = [\n _check_expire(Matcher) for Matcher in current_matchers\n ]\n results = await asyncio.gather(*checking_tasks)\n expired = await asyncio.gather(*checking_expire_tasks)\n for expired_matcher in filter(lambda x: x, expired):\n try:\n matchers[priority].remove(expired_matcher) # type: ignore\n except Exception:\n pass\n for temp_matcher in filter(lambda x: x and x.temp, results):\n try:\n matchers[priority].remove(temp_matcher) # type: ignore\n except Exception:\n pass\n return filter(lambda x: x, results) # type: ignore\n\n\nasync def _run_matcher(Matcher: Type[Matcher], bot: \"Bot\", event: \"Event\",\n state: T_State) -> None:\n logger.info(f\"Event will be handled by {Matcher}\")\n\n matcher = Matcher()\n\n coros = list(\n map(lambda x: x(matcher, bot, event, state), _run_preprocessors))\n if coros:\n try:\n await asyncio.gather(*coros)\n except IgnoredException:\n logger.opt(colors=True).info(\n f\"Matcher {matcher} running is <b>cancelled</b>\")\n return\n except Exception as e:\n logger.opt(colors=True, exception=e).error(\n \"<r><bg #f8bbd0>Error when running RunPreProcessors. 
\"\n \"Running cancelled!</bg #f8bbd0></r>\")\n return\n\n exception = None\n\n try:\n logger.debug(f\"Running matcher {matcher}\")\n await matcher.run(bot, event, state)\n except Exception as e:\n logger.opt(colors=True, exception=e).error(\n f\"<r><bg #f8bbd0>Running matcher {matcher} failed.</bg #f8bbd0></r>\"\n )\n exception = e\n\n coros = list(\n map(lambda x: x(matcher, exception, bot, event, state),\n _run_postprocessors))\n if coros:\n try:\n await asyncio.gather(*coros)\n except Exception as e:\n logger.opt(colors=True, exception=e).error(\n \"<r><bg #f8bbd0>Error when running RunPostProcessors</bg #f8bbd0></r>\"\n )\n\n if matcher.block:\n raise StopPropagation\n return\n\n\nasync def handle_event(bot: \"Bot\", event: \"Event\"):\n \"\"\"\n :\u8bf4\u660e:\n\n \u5904\u7406\u4e00\u4e2a\u4e8b\u4ef6\u3002\u8c03\u7528\u8be5\u51fd\u6570\u4ee5\u5b9e\u73b0\u5206\u53d1\u4e8b\u4ef6\u3002\n\n :\u53c2\u6570:\n\n * ``bot: Bot``: Bot \u5bf9\u8c61\n * ``event: Event``: Event \u5bf9\u8c61\n\n :\u793a\u4f8b:\n\n .. code-block:: python\n\n import asyncio\n asyncio.create_task(handle_event(bot, event))\n \"\"\"\n show_log = True\n log_msg = f\"<m>{bot.type.upper()} {bot.self_id}</m> | \"\n try:\n log_msg += event.get_log_string()\n except NoLogException:\n show_log = False\n if show_log:\n logger.opt(colors=True).info(log_msg)\n\n state = {}\n coros = list(map(lambda x: x(bot, event, state), _event_preprocessors))\n if coros:\n try:\n logger.debug(\"Running PreProcessors...\")\n await asyncio.gather(*coros)\n except IgnoredException:\n logger.opt(colors=True).info(\n f\"Event {event.get_event_name()} is <b>ignored</b>\")\n return\n except Exception as e:\n logger.opt(colors=True, exception=e).error(\n \"<r><bg #f8bbd0>Error when running EventPreProcessors. \"\n \"Event ignored!</bg #f8bbd0></r>\")\n return\n\n # Trie Match\n _, _ = TrieRule.get_value(bot, event, state)\n\n break_flag = False\n for priority in sorted(matchers.keys()):\n if break_flag:\n break\n\n if show_log:\n logger.debug(f\"Checking for matchers in priority {priority}...\")\n\n run_matchers = await _check_matcher(priority, bot, event, state)\n\n pending_tasks = [\n _run_matcher(matcher, bot, event, state.copy())\n for matcher in run_matchers\n ]\n\n results = await asyncio.gather(*pending_tasks, return_exceptions=True)\n\n for result in results:\n if isinstance(result, StopPropagation):\n if not break_flag:\n break_flag = True\n logger.debug(\"Stop event propagation\")\n\n coros = list(map(lambda x: x(bot, event, state), _event_postprocessors))\n if coros:\n try:\n logger.debug(\"Running PostProcessors...\")\n await asyncio.gather(*coros)\n except Exception as e:\n logger.opt(colors=True, exception=e).error(\n \"<r><bg #f8bbd0>Error when running EventPostProcessors</bg #f8bbd0></r>\"\n )\n", "path": "nonebot/message.py"}]} | 3,073 | 340 |
gh_patches_debug_1452 | rasdani/github-patches | git_diff | wagtail__wagtail-11660 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wagtail Documentation favicon icon is missing (Not Found)
<!--
Summarise the documentation change you’re suggesting in the Issue title.
-->
### Pertinent section of the Wagtail docs
<!--
Copy the section link here.
-->
https://docs.wagtail.org/en/stable/getting_started/index.html
https://docs.wagtail.org/en/stable/getting_started/tutorial.html
https://docs.wagtail.org/en/stable/reference/index.html
https://docs.wagtail.org/en/stable/reference/pages/index.html
The issue persists in all the pages of documentation.
### Details
<!--
Provide a clear and concise description of what you want to happen.
-->
Wagtail has a nice favicon as per the logo which is displayed fine on this page of the documentation https://docs.wagtail.org/en/stable/
But on all the other pages the favicon is missing and does not show in the Chrome tab (or in any other browser). When I checked the page source I found that favicon.ico is linked via `<link rel="shortcut icon" href="../../_static/favicon.ico" />`, which resolves to https://docs.wagtail.org/en/_static/favicon.ico and is Not Found!
When I checked other resources, for example the CSS or the logo image, I found they are sourced like `src="../_static/img/wagtail-logo-new.svg`, which resolves to https://docs.wagtail.org/en/stable/_static/img/wagtail-logo-new.svg and is correct.
The difference between the favicon going 404 and the logo being available is that the favicon icon source is '../../_static' with an extra `../` which needs to be removed.
<img src="https://img001.prntscr.com/file/img001/zEYpfzNSQHqssOSc2_naxg.png" width="500">
<!--
If you're suggesting a very specific change to the documentation, feel free to directly submit a pull request.
-->
### Working on this
<!--
Do you have thoughts on skills needed?
Are you keen to work on this yourself once the issue has been accepted?
Please let us know here.
-->
It's a very minor fix and I already described the issue above. I could fix it but I am not sure exactly where in the documentation this favicon is coming from.
Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 from wagtail import __version__
4 from wagtail.utils.setup import assets, check_bdist_egg, sdist
5
6 try:
7 from setuptools import find_packages, setup
8 except ImportError:
9 from distutils.core import setup
10
11
12 # Hack to prevent "TypeError: 'NoneType' object is not callable" error
13 # in multiprocessing/util.py _exit_function when setup.py exits
14 # (see http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)
15 try:
16 import multiprocessing # noqa: F401
17 except ImportError:
18 pass
19
20
21 install_requires = [
22 "Django>=4.2,<6.0",
23 "django-modelcluster>=6.2.1,<7.0",
24 "django-permissionedforms>=0.1,<1.0",
25 "django-taggit>=4.0,<5.1",
26 "django-treebeard>=4.5.1,<5.0",
27 "djangorestframework>=3.11.1,<4.0",
28 "django-filter>=23.3,<24",
29 "draftjs_exporter>=2.1.5,<6.0",
30 "Pillow>=9.1.0,<11.0.0",
31 "beautifulsoup4>=4.8,<4.13",
32 "Willow[heif]>=1.8.0,<2",
33 "requests>=2.11.1,<3.0",
34 "l18n>=2018.5",
35 "openpyxl>=3.0.10,<4.0",
36 "anyascii>=0.1.5",
37 "telepath>=0.3.1,<1",
38 "laces>=0.1,<0.2",
39 ]
40
41 # Testing dependencies
42 testing_extras = [
43 # Required for running the tests
44 "python-dateutil>=2.7",
45 "pytz>=2014.7",
46 "Jinja2>=3.0,<3.2",
47 "boto3>=1.28,<2",
48 "freezegun>=0.3.8",
49 "azure-mgmt-cdn>=12.0,<13.0",
50 "azure-mgmt-frontdoor>=1.0,<1.1",
51 "django-pattern-library>=0.7",
52 # For coverage and PEP8 linting
53 "coverage>=3.7.0",
54 "doc8==0.8.1",
55 "ruff==0.1.5",
56 # For enforcing string formatting mechanism in source files
57 "semgrep==1.40.0",
58 # For templates linting
59 "curlylint==0.13.1",
60 # For template indenting
61 "djhtml==3.0.6",
62 # For validating string formats in .po translation files
63 "polib>=1.1,<2.0",
64 # For wagtail.test.utils.wagtail_factories (used for streamfield migration toolkit)
65 "factory-boy>=3.2",
66 # For running tests in parallel
67 "tblib>=2.0,<3.0",
68 ]
69
70 # Documentation dependencies
71 documentation_extras = [
72 "pyenchant>=3.1.1,<4",
73 "sphinxcontrib-spelling>=7,<8",
74 "Sphinx>=1.5.2",
75 "sphinx-autobuild>=0.6.0",
76 "sphinx-wagtail-theme==6.2.0",
77 "myst_parser==2.0.0",
78 "sphinx_copybutton>=0.5,<1.0",
79 ]
80
81 setup(
82 name="wagtail",
83 version=__version__,
84 description="A Django content management system.",
85 author="Wagtail core team + contributors",
86 author_email="[email protected]", # For support queries, please see https://docs.wagtail.org/en/stable/support.html
87 url="https://wagtail.org/",
88 project_urls={
89 "Changelog": "https://github.com/wagtail/wagtail/blob/main/CHANGELOG.txt",
90 "Documentation": "https://docs.wagtail.org",
91 "Source": "https://github.com/wagtail/wagtail",
92 "Tracker": "https://github.com/wagtail/wagtail/issues",
93 },
94 packages=find_packages(),
95 include_package_data=True,
96 license="BSD",
97 long_description="Wagtail is an open source content management \
98 system built on Django, with a strong community and commercial support. \
99 It’s focused on user experience, and offers precise control for \
100 designers and developers.\n\n\
101 For more details, see https://wagtail.org, https://docs.wagtail.org and \
102 https://github.com/wagtail/wagtail/.",
103 classifiers=[
104 "Development Status :: 5 - Production/Stable",
105 "Environment :: Web Environment",
106 "Intended Audience :: Developers",
107 "License :: OSI Approved :: BSD License",
108 "Operating System :: OS Independent",
109 "Programming Language :: Python",
110 "Programming Language :: Python :: 3",
111 "Programming Language :: Python :: 3.8",
112 "Programming Language :: Python :: 3.9",
113 "Programming Language :: Python :: 3.10",
114 "Programming Language :: Python :: 3.11",
115 "Programming Language :: Python :: 3.12",
116 "Framework :: Django",
117 "Framework :: Django :: 4.2",
118 "Framework :: Django :: 5.0",
119 "Framework :: Wagtail",
120 "Topic :: Internet :: WWW/HTTP :: Site Management",
121 ],
122 python_requires=">=3.8",
123 install_requires=install_requires,
124 extras_require={"testing": testing_extras, "docs": documentation_extras},
125 entry_points="""
126 [console_scripts]
127 wagtail=wagtail.bin.wagtail:main
128 """,
129 zip_safe=False,
130 cmdclass={
131 "sdist": sdist,
132 "bdist_egg": check_bdist_egg,
133 "assets": assets,
134 },
135 )
136
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -73,7 +73,7 @@
"sphinxcontrib-spelling>=7,<8",
"Sphinx>=1.5.2",
"sphinx-autobuild>=0.6.0",
- "sphinx-wagtail-theme==6.2.0",
+ "sphinx-wagtail-theme==6.3.0",
"myst_parser==2.0.0",
"sphinx_copybutton>=0.5,<1.0",
]
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -73,7 +73,7 @@\n \"sphinxcontrib-spelling>=7,<8\",\n \"Sphinx>=1.5.2\",\n \"sphinx-autobuild>=0.6.0\",\n- \"sphinx-wagtail-theme==6.2.0\",\n+ \"sphinx-wagtail-theme==6.3.0\",\n \"myst_parser==2.0.0\",\n \"sphinx_copybutton>=0.5,<1.0\",\n ]\n", "issue": "Wagtail Documentation favicon icon is missing (Not Found)\n<!--\r\n Summarise the documentation change you\u2019re suggesting in the Issue title.\r\n-->\r\n\r\n### Pertinent section of the Wagtail docs\r\n\r\n<!--\r\n Copy the section link here.\r\n-->\r\nhttps://docs.wagtail.org/en/stable/getting_started/index.html\r\nhttps://docs.wagtail.org/en/stable/getting_started/tutorial.html\r\nhttps://docs.wagtail.org/en/stable/reference/index.html\r\nhttps://docs.wagtail.org/en/stable/reference/pages/index.html\r\n\r\nThe issue persists in all the pages of documentation. \r\n\r\n### Details\r\n\r\n<!--\r\n Provide a clear and concise description of what you want to happen.\r\n-->\r\n\r\nWagtail has a nice favicon as per the logo which is displayed fine on this page of the documentation https://docs.wagtail.org/en/stable/\r\nBut on all the other pages the favicon is missing and not showing on the tab of chrome or any other browser tabs. When I checked the page source I found the favicon.ico is linked via `<link rel=\"shortcut icon\" href=\"../../_static/favicon.ico\" />` and this is going to https://docs.wagtail.org/en/_static/favicon.ico which is Not Found! \r\nWhen I checked other sources for example CSS or logo image I found that is sourced like `src=\"../_static/img/wagtail-logo-new.svg` and takes to https://docs.wagtail.org/en/stable/_static/img/wagtail-logo-new.svg which is correct. \r\n\r\nThe difference between the favicon going 404 and the logo being available is that the favicon icon source is '../../_static' with an extra `../` which needs to be removed. \r\n\r\n<img src=\"https://img001.prntscr.com/file/img001/zEYpfzNSQHqssOSc2_naxg.png\" width=\"500\">\r\n\r\n<!--\r\n If you're suggesting a very specific change to the documentation, feel free to directly submit a pull request.\r\n-->\r\n\r\n### Working on this\r\n\r\n<!--\r\n Do you have thoughts on skills needed?\r\n Are you keen to work on this yourself once the issue has been accepted?\r\n Please let us know here.\r\n-->\r\nIt's a very minor fix and I already described the issue above. I could fix it but I am not sure exactly where in the documentation this favicon is coming from.\r\n\r\nAnyone can contribute to this. 
View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you\u2019re ready to start.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom wagtail import __version__\nfrom wagtail.utils.setup import assets, check_bdist_egg, sdist\n\ntry:\n from setuptools import find_packages, setup\nexcept ImportError:\n from distutils.core import setup\n\n\n# Hack to prevent \"TypeError: 'NoneType' object is not callable\" error\n# in multiprocessing/util.py _exit_function when setup.py exits\n# (see http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)\ntry:\n import multiprocessing # noqa: F401\nexcept ImportError:\n pass\n\n\ninstall_requires = [\n \"Django>=4.2,<6.0\",\n \"django-modelcluster>=6.2.1,<7.0\",\n \"django-permissionedforms>=0.1,<1.0\",\n \"django-taggit>=4.0,<5.1\",\n \"django-treebeard>=4.5.1,<5.0\",\n \"djangorestframework>=3.11.1,<4.0\",\n \"django-filter>=23.3,<24\",\n \"draftjs_exporter>=2.1.5,<6.0\",\n \"Pillow>=9.1.0,<11.0.0\",\n \"beautifulsoup4>=4.8,<4.13\",\n \"Willow[heif]>=1.8.0,<2\",\n \"requests>=2.11.1,<3.0\",\n \"l18n>=2018.5\",\n \"openpyxl>=3.0.10,<4.0\",\n \"anyascii>=0.1.5\",\n \"telepath>=0.3.1,<1\",\n \"laces>=0.1,<0.2\",\n]\n\n# Testing dependencies\ntesting_extras = [\n # Required for running the tests\n \"python-dateutil>=2.7\",\n \"pytz>=2014.7\",\n \"Jinja2>=3.0,<3.2\",\n \"boto3>=1.28,<2\",\n \"freezegun>=0.3.8\",\n \"azure-mgmt-cdn>=12.0,<13.0\",\n \"azure-mgmt-frontdoor>=1.0,<1.1\",\n \"django-pattern-library>=0.7\",\n # For coverage and PEP8 linting\n \"coverage>=3.7.0\",\n \"doc8==0.8.1\",\n \"ruff==0.1.5\",\n # For enforcing string formatting mechanism in source files\n \"semgrep==1.40.0\",\n # For templates linting\n \"curlylint==0.13.1\",\n # For template indenting\n \"djhtml==3.0.6\",\n # For validating string formats in .po translation files\n \"polib>=1.1,<2.0\",\n # For wagtail.test.utils.wagtail_factories (used for streamfield migration toolkit)\n \"factory-boy>=3.2\",\n # For running tests in parallel\n \"tblib>=2.0,<3.0\",\n]\n\n# Documentation dependencies\ndocumentation_extras = [\n \"pyenchant>=3.1.1,<4\",\n \"sphinxcontrib-spelling>=7,<8\",\n \"Sphinx>=1.5.2\",\n \"sphinx-autobuild>=0.6.0\",\n \"sphinx-wagtail-theme==6.2.0\",\n \"myst_parser==2.0.0\",\n \"sphinx_copybutton>=0.5,<1.0\",\n]\n\nsetup(\n name=\"wagtail\",\n version=__version__,\n description=\"A Django content management system.\",\n author=\"Wagtail core team + contributors\",\n author_email=\"[email protected]\", # For support queries, please see https://docs.wagtail.org/en/stable/support.html\n url=\"https://wagtail.org/\",\n project_urls={\n \"Changelog\": \"https://github.com/wagtail/wagtail/blob/main/CHANGELOG.txt\",\n \"Documentation\": \"https://docs.wagtail.org\",\n \"Source\": \"https://github.com/wagtail/wagtail\",\n \"Tracker\": \"https://github.com/wagtail/wagtail/issues\",\n },\n packages=find_packages(),\n include_package_data=True,\n license=\"BSD\",\n long_description=\"Wagtail is an open source content management \\\nsystem built on Django, with a strong community and commercial support. 
\\\nIt\u2019s focused on user experience, and offers precise control for \\\ndesigners and developers.\\n\\n\\\nFor more details, see https://wagtail.org, https://docs.wagtail.org and \\\nhttps://github.com/wagtail/wagtail/.\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Programming Language :: Python :: 3.12\",\n \"Framework :: Django\",\n \"Framework :: Django :: 4.2\",\n \"Framework :: Django :: 5.0\",\n \"Framework :: Wagtail\",\n \"Topic :: Internet :: WWW/HTTP :: Site Management\",\n ],\n python_requires=\">=3.8\",\n install_requires=install_requires,\n extras_require={\"testing\": testing_extras, \"docs\": documentation_extras},\n entry_points=\"\"\"\n [console_scripts]\n wagtail=wagtail.bin.wagtail:main\n \"\"\",\n zip_safe=False,\n cmdclass={\n \"sdist\": sdist,\n \"bdist_egg\": check_bdist_egg,\n \"assets\": assets,\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nfrom wagtail import __version__\nfrom wagtail.utils.setup import assets, check_bdist_egg, sdist\n\ntry:\n from setuptools import find_packages, setup\nexcept ImportError:\n from distutils.core import setup\n\n\n# Hack to prevent \"TypeError: 'NoneType' object is not callable\" error\n# in multiprocessing/util.py _exit_function when setup.py exits\n# (see http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)\ntry:\n import multiprocessing # noqa: F401\nexcept ImportError:\n pass\n\n\ninstall_requires = [\n \"Django>=4.2,<6.0\",\n \"django-modelcluster>=6.2.1,<7.0\",\n \"django-permissionedforms>=0.1,<1.0\",\n \"django-taggit>=4.0,<5.1\",\n \"django-treebeard>=4.5.1,<5.0\",\n \"djangorestframework>=3.11.1,<4.0\",\n \"django-filter>=23.3,<24\",\n \"draftjs_exporter>=2.1.5,<6.0\",\n \"Pillow>=9.1.0,<11.0.0\",\n \"beautifulsoup4>=4.8,<4.13\",\n \"Willow[heif]>=1.8.0,<2\",\n \"requests>=2.11.1,<3.0\",\n \"l18n>=2018.5\",\n \"openpyxl>=3.0.10,<4.0\",\n \"anyascii>=0.1.5\",\n \"telepath>=0.3.1,<1\",\n \"laces>=0.1,<0.2\",\n]\n\n# Testing dependencies\ntesting_extras = [\n # Required for running the tests\n \"python-dateutil>=2.7\",\n \"pytz>=2014.7\",\n \"Jinja2>=3.0,<3.2\",\n \"boto3>=1.28,<2\",\n \"freezegun>=0.3.8\",\n \"azure-mgmt-cdn>=12.0,<13.0\",\n \"azure-mgmt-frontdoor>=1.0,<1.1\",\n \"django-pattern-library>=0.7\",\n # For coverage and PEP8 linting\n \"coverage>=3.7.0\",\n \"doc8==0.8.1\",\n \"ruff==0.1.5\",\n # For enforcing string formatting mechanism in source files\n \"semgrep==1.40.0\",\n # For templates linting\n \"curlylint==0.13.1\",\n # For template indenting\n \"djhtml==3.0.6\",\n # For validating string formats in .po translation files\n \"polib>=1.1,<2.0\",\n # For wagtail.test.utils.wagtail_factories (used for streamfield migration toolkit)\n \"factory-boy>=3.2\",\n # For running tests in parallel\n \"tblib>=2.0,<3.0\",\n]\n\n# Documentation dependencies\ndocumentation_extras = [\n \"pyenchant>=3.1.1,<4\",\n \"sphinxcontrib-spelling>=7,<8\",\n \"Sphinx>=1.5.2\",\n \"sphinx-autobuild>=0.6.0\",\n \"sphinx-wagtail-theme==6.3.0\",\n \"myst_parser==2.0.0\",\n \"sphinx_copybutton>=0.5,<1.0\",\n]\n\nsetup(\n 
name=\"wagtail\",\n version=__version__,\n description=\"A Django content management system.\",\n author=\"Wagtail core team + contributors\",\n author_email=\"[email protected]\", # For support queries, please see https://docs.wagtail.org/en/stable/support.html\n url=\"https://wagtail.org/\",\n project_urls={\n \"Changelog\": \"https://github.com/wagtail/wagtail/blob/main/CHANGELOG.txt\",\n \"Documentation\": \"https://docs.wagtail.org\",\n \"Source\": \"https://github.com/wagtail/wagtail\",\n \"Tracker\": \"https://github.com/wagtail/wagtail/issues\",\n },\n packages=find_packages(),\n include_package_data=True,\n license=\"BSD\",\n long_description=\"Wagtail is an open source content management \\\nsystem built on Django, with a strong community and commercial support. \\\nIt\u2019s focused on user experience, and offers precise control for \\\ndesigners and developers.\\n\\n\\\nFor more details, see https://wagtail.org, https://docs.wagtail.org and \\\nhttps://github.com/wagtail/wagtail/.\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Programming Language :: Python :: 3.12\",\n \"Framework :: Django\",\n \"Framework :: Django :: 4.2\",\n \"Framework :: Django :: 5.0\",\n \"Framework :: Wagtail\",\n \"Topic :: Internet :: WWW/HTTP :: Site Management\",\n ],\n python_requires=\">=3.8\",\n install_requires=install_requires,\n extras_require={\"testing\": testing_extras, \"docs\": documentation_extras},\n entry_points=\"\"\"\n [console_scripts]\n wagtail=wagtail.bin.wagtail:main\n \"\"\",\n zip_safe=False,\n cmdclass={\n \"sdist\": sdist,\n \"bdist_egg\": check_bdist_egg,\n \"assets\": assets,\n },\n)\n", "path": "setup.py"}]} | 2,424 | 129 |
gh_patches_debug_2792 | rasdani/github-patches | git_diff | docker__docker-py-3257 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Breaks with requests 2.32.0: Not supported URL scheme http+docker
With requests 2.32.0 (released about an hour ago as I write this), the docker library as called by [tox-docker](https://github.com/tox-dev/tox-docker) fails with the following exception:
```
Traceback (most recent call last):
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/requests/adapters.py", line 532, in send
conn = self._get_connection(request, verify, proxies=proxies, cert=cert)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/requests/adapters.py", line 400, in _get_connection
conn = self.poolmanager.connection_from_host(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/urllib3/poolmanager.py", line 304, in connection_from_host
return self.connection_from_context(request_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/urllib3/poolmanager.py", line 326, in connection_from_context
raise URLSchemeUnknown(scheme)
urllib3.exceptions.URLSchemeUnknown: Not supported URL scheme http+docker
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/docker/api/client.py", line 214, in _retrieve_server_version
return self.version(api_version=False)["ApiVersion"]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/docker/api/daemon.py", line 181, in version
return self._result(self._get(url), json=True)
^^^^^^^^^^^^^^
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/docker/utils/decorators.py", line 46, in inner
return f(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/docker/api/client.py", line 237, in _get
return self.get(url, **self._set_request_timeout(kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/requests/adapters.py", line 534, in send
raise InvalidURL(e, request=request)
requests.exceptions.InvalidURL: Not supported URL scheme http+docker
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/tox/session/cmd/run/single.py", line 48, in _evaluate
code, outcomes = run_commands(tox_env, no_test)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/tox/session/cmd/run/single.py", line 79, in run_commands
MANAGER.tox_before_run_commands(tox_env)
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/tox/plugin/manager.py", line 88, in tox_before_run_commands
self.manager.hook.tox_before_run_commands(tox_env=tox_env)
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/pluggy/_hooks.py", line 513, in __call__
return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/pluggy/_manager.py", line 120, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/pluggy/_callers.py", line 139, in _multicall
raise exception.with_traceback(exception.__traceback__)
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/pluggy/_callers.py", line 103, in _multicall
res = hook_impl.function(*args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/tox_docker/tox4/plugin.py", line 73, in tox_before_run_commands
docker_build_or_pull(container_config, log)
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/tox_docker/plugin.py", line 57, in docker_build_or_pull
docker_pull(container_config, log)
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/tox_docker/plugin.py", line 65, in docker_pull
docker = docker_module.from_env(version="auto")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/docker/client.py", line 96, in from_env
return cls(
^^^^
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/docker/client.py", line 45, in __init__
self.api = APIClient(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/docker/api/client.py", line 197, in __init__
self._version = self._retrieve_server_version()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/docker/api/client.py", line 221, in _retrieve_server_version
raise DockerException(
docker.errors.DockerException: Error while fetching server API version: Not supported URL scheme http+docker
```
Reverting to requests 2.31.0 without any other changes fixes the problem.
--- END ISSUE ---
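The traceback pins down where the behaviour changed: from requests 2.32.x the adapter no longer funnels connection lookup through the scheme-aware `get_connection()` path that docker-py's mounted `http+docker` transport relies on, so urllib3's pool manager rejects the scheme outright. A minimal sketch of an adapter-level shim that routes the new hook back to the old lookup is shown below; the hook name is the public one introduced in requests 2.32.2 (2.32.0 still used a private `_get_connection`, as the traceback shows), so treat this as an illustration of the approach rather than a drop-in fix for every 2.32.x release.

```python
import requests.adapters


class SchemeAwareHTTPAdapter(requests.adapters.HTTPAdapter):
    """Illustrative adapter keeping custom URL schemes usable on requests >= 2.32.2."""

    def get_connection_with_tls_context(self, request, verify, proxies=None, cert=None):
        # Fall back to the scheme-aware lookup used before requests 2.32 so that
        # mounted transports such as http+docker are still honoured.
        return self.get_connection(request.url, proxies)
```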
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/transport/basehttpadapter.py`
Content:
```
1 import requests.adapters
2
3
4 class BaseHTTPAdapter(requests.adapters.HTTPAdapter):
5 def close(self):
6 super().close()
7 if hasattr(self, 'pools'):
8 self.pools.clear()
9
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docker/transport/basehttpadapter.py b/docker/transport/basehttpadapter.py
--- a/docker/transport/basehttpadapter.py
+++ b/docker/transport/basehttpadapter.py
@@ -6,3 +6,8 @@
super().close()
if hasattr(self, 'pools'):
self.pools.clear()
+
+ # Fix for requests 2.32.2+:
+ # https://github.com/psf/requests/commit/c98e4d133ef29c46a9b68cd783087218a8075e05
+ def get_connection_with_tls_context(self, request, verify, proxies=None, cert=None):
+ return self.get_connection(request.url, proxies)
| {"golden_diff": "diff --git a/docker/transport/basehttpadapter.py b/docker/transport/basehttpadapter.py\n--- a/docker/transport/basehttpadapter.py\n+++ b/docker/transport/basehttpadapter.py\n@@ -6,3 +6,8 @@\n super().close()\n if hasattr(self, 'pools'):\n self.pools.clear()\n+\n+ # Fix for requests 2.32.2+:\n+ # https://github.com/psf/requests/commit/c98e4d133ef29c46a9b68cd783087218a8075e05\n+ def get_connection_with_tls_context(self, request, verify, proxies=None, cert=None):\n+ return self.get_connection(request.url, proxies)\n", "issue": "Breaks with requests 2.32.0: Not supported URL scheme http+docker\nWith requests 2.32.0 (released about an hour ago as I write this), the docker library as called by [tox-docker](https://github.com/tox-dev/tox-docker) fails with the following exception:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/requests/adapters.py\", line 532, in send\r\n conn = self._get_connection(request, verify, proxies=proxies, cert=cert)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/requests/adapters.py\", line 400, in _get_connection\r\n conn = self.poolmanager.connection_from_host(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/urllib3/poolmanager.py\", line 304, in connection_from_host\r\n return self.connection_from_context(request_context)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/urllib3/poolmanager.py\", line 326, in connection_from_context\r\n raise URLSchemeUnknown(scheme)\r\nurllib3.exceptions.URLSchemeUnknown: Not supported URL scheme http+docker\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/docker/api/client.py\", line 214, in _retrieve_server_version\r\n return self.version(api_version=False)[\"ApiVersion\"]\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/docker/api/daemon.py\", line 181, in version\r\n return self._result(self._get(url), json=True)\r\n ^^^^^^^^^^^^^^\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/docker/utils/decorators.py\", line 46, in inner\r\n return f(self, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/docker/api/client.py\", line 237, in _get\r\n return self.get(url, **self._set_request_timeout(kwargs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/requests/sessions.py\", line 602, in get\r\n return self.request(\"GET\", url, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/requests/sessions.py\", line 589, in request\r\n resp = self.send(prep, **send_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/requests/sessions.py\", line 703, in send\r\n r = adapter.send(request, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/requests/adapters.py\", line 534, in send\r\n raise InvalidURL(e, request=request)\r\nrequests.exceptions.InvalidURL: 
Not supported URL scheme http+docker\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/tox/session/cmd/run/single.py\", line 48, in _evaluate\r\n code, outcomes = run_commands(tox_env, no_test)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/tox/session/cmd/run/single.py\", line 79, in run_commands\r\n MANAGER.tox_before_run_commands(tox_env)\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/tox/plugin/manager.py\", line 88, in tox_before_run_commands\r\n self.manager.hook.tox_before_run_commands(tox_env=tox_env)\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/pluggy/_hooks.py\", line 513, in __call__\r\n return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/pluggy/_manager.py\", line 120, in _hookexec\r\n return self._inner_hookexec(hook_name, methods, kwargs, firstresult)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/pluggy/_callers.py\", line 139, in _multicall\r\n raise exception.with_traceback(exception.__traceback__)\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/pluggy/_callers.py\", line 103, in _multicall\r\n res = hook_impl.function(*args)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/tox_docker/tox4/plugin.py\", line 73, in tox_before_run_commands\r\n docker_build_or_pull(container_config, log)\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/tox_docker/plugin.py\", line 57, in docker_build_or_pull\r\n docker_pull(container_config, log)\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/tox_docker/plugin.py\", line 65, in docker_pull\r\n docker = docker_module.from_env(version=\"auto\")\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/docker/client.py\", line 96, in from_env\r\n return cls(\r\n ^^^^\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/docker/client.py\", line 45, in __init__\r\n self.api = APIClient(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/docker/api/client.py\", line 197, in __init__\r\n self._version = self._retrieve_server_version()\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/eagle/dvl/venvs/gafaelfawr/lib/python3.12/site-packages/docker/api/client.py\", line 221, in _retrieve_server_version\r\n raise DockerException(\r\ndocker.errors.DockerException: Error while fetching server API version: Not supported URL scheme http+docker\r\n```\r\n\r\nReverting to requests 2.31.0 without any other changes fixes the problem.\n", "before_files": [{"content": "import requests.adapters\n\n\nclass BaseHTTPAdapter(requests.adapters.HTTPAdapter):\n def close(self):\n super().close()\n if hasattr(self, 'pools'):\n self.pools.clear()\n", "path": "docker/transport/basehttpadapter.py"}], "after_files": [{"content": "import requests.adapters\n\n\nclass BaseHTTPAdapter(requests.adapters.HTTPAdapter):\n def close(self):\n super().close()\n if hasattr(self, 'pools'):\n 
self.pools.clear()\n\n # Fix for requests 2.32.2+:\n # https://github.com/psf/requests/commit/c98e4d133ef29c46a9b68cd783087218a8075e05\n def get_connection_with_tls_context(self, request, verify, proxies=None, cert=None):\n return self.get_connection(request.url, proxies)\n", "path": "docker/transport/basehttpadapter.py"}]} | 2,080 | 170 |
gh_patches_debug_32124 | rasdani/github-patches | git_diff | translate__pootle-5451 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove SubfieldBase from MultiStringField
In Django 1.9 SubfieldBase is deprecated, and it is removed in 1.10
Related Stackoverflow - http://stackoverflow.com/questions/35166085/how-to-deal-with-subfieldbase-has-been-deprecated-use-field-from-db-value-inst
https://docs.djangoproject.com/en/1.9/ref/models/fields/#field-api-reference
afaict we can just safely remove - it seems it just needs to have to/from db methods
--- END ISSUE ---
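For context, the "to/from db methods" mentioned in the issue are the hooks Django expects from a custom field once `SubfieldBase` is gone: `to_python()` for deserialisation and cleaning, and `from_db_value()` for values coming back from the database (plus a small descriptor wired up in `contribute_to_class()` if coercion on every attribute assignment is still wanted). A minimal sketch of such a field follows, using the Django 1.9/1.10-era signature and a made-up field name rather than the Pootle code.

```python
from django.db import models


class CoercedTextField(models.Field):
    """Illustrative custom field that works without models.SubfieldBase."""

    def get_internal_type(self):
        return "TextField"

    def to_python(self, value):
        # Called during deserialisation and by full_clean(); coerce to str here.
        return "" if value is None else str(value)

    def from_db_value(self, value, expression, connection, context):
        # Called for values loaded from the database; this is what SubfieldBase
        # used to arrange implicitly. (Django 1.9/1.10 still passes `context`.)
        return self.to_python(value)
```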
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/pootle_store/fields.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 """Fields required for handling translation files"""
10
11 import logging
12 import os
13
14 from translate.misc.multistring import multistring
15
16 from django.db import models
17 from django.db.models.fields.files import FieldFile, FileField
18
19 from pootle.core.utils.multistring import (parse_multistring,
20 unparse_multistring)
21
22
23 # # # # # # # # # String # # # # # # # # # # # # # # #
24
25
26 def to_db(value):
27 """Flatten the given value (string, list of plurals or multistring) into
28 the database string representation.
29 """
30 if value is None:
31 return None
32
33 return unparse_multistring(value)
34
35
36 def to_python(value):
37 """Reconstruct a multistring from the database string representation."""
38 if not value:
39 return multistring("", encoding="UTF-8")
40 elif isinstance(value, multistring):
41 return value
42 elif isinstance(value, basestring):
43 return parse_multistring(value)
44 elif isinstance(value, dict):
45 return multistring([val for __, val in sorted(value.items())],
46 encoding="UTF-8")
47 else:
48 return multistring(value, encoding="UTF-8")
49
50
51 class MultiStringField(models.Field):
52 description = \
53 "a field imitating translate.misc.multistring used for plurals"
54 __metaclass__ = models.SubfieldBase
55
56 def __init__(self, *args, **kwargs):
57 super(MultiStringField, self).__init__(*args, **kwargs)
58
59 def get_internal_type(self):
60 return "TextField"
61
62 def to_python(self, value):
63 return to_python(value)
64
65 def get_prep_value(self, value):
66 return to_db(value)
67
68 def get_prep_lookup(self, lookup_type, value):
69 if (lookup_type in ('exact', 'iexact') or
70 not isinstance(value, basestring)):
71 value = self.get_prep_value(value)
72 return super(MultiStringField, self).get_prep_lookup(lookup_type,
73 value)
74
75
76 # # # # # # # # # File # # # # # # # # # # # # # # # #
77
78
79 class StoreTuple(object):
80 """Encapsulates toolkit stores in the in memory cache, needed
81 since LRUCachingDict is based on a weakref.WeakValueDictionary
82 which cannot reference normal tuples
83 """
84
85 def __init__(self, store, mod_info, realpath):
86 self.store = store
87 self.mod_info = mod_info
88 self.realpath = realpath
89
90
91 class TranslationStoreFieldFile(FieldFile):
92 """FieldFile is the file-like object of a FileField, that is found in a
93 TranslationStoreField.
94 """
95
96 from translate.misc.lru import LRUCachingDict
97 from django.conf import settings
98
99 _store_cache = LRUCachingDict(settings.PARSE_POOL_SIZE,
100 settings.PARSE_POOL_CULL_FREQUENCY)
101
102 def getpomtime(self):
103 file_stat = os.stat(self.realpath)
104 return file_stat.st_mtime, file_stat.st_size
105
106 @property
107 def filename(self):
108 return os.path.basename(self.name)
109
110 def _get_realpath(self):
111 """Return realpath resolving symlinks if necessary."""
112 if not hasattr(self, "_realpath"):
113 # Django's db.models.fields.files.FieldFile raises ValueError if
114 # if the file field has no name - and tests "if self" to check
115 if self:
116 self._realpath = os.path.realpath(self.path)
117 else:
118 self._realpath = ''
119 return self._realpath
120
121 @property
122 def realpath(self):
123 """Get real path from cache before attempting to check for symlinks."""
124 if not hasattr(self, "_store_tuple"):
125 return self._get_realpath()
126 else:
127 return self._store_tuple.realpath
128
129 @property
130 def store(self):
131 """Get translation store from dictionary cache, populate if store not
132 already cached.
133 """
134 self._update_store_cache()
135 return self._store_tuple.store
136
137 def _update_store_cache(self):
138 """Add translation store to dictionary cache, replace old cached
139 version if needed.
140 """
141 if self.exists():
142 mod_info = self.getpomtime()
143 else:
144 mod_info = 0
145 if (not hasattr(self, "_store_tuple") or
146 self._store_tuple.mod_info != mod_info):
147 try:
148 self._store_tuple = self._store_cache[self.path]
149 if self._store_tuple.mod_info != mod_info:
150 # if file is modified act as if it doesn't exist in cache
151 raise KeyError
152 except KeyError:
153 logging.debug(u"Cache miss for %s", self.path)
154 from translate.storage import factory
155
156 fileclass = self.instance.syncer.file_class
157 classes = {
158 str(self.instance.filetype.extension): fileclass,
159 str(self.instance.filetype.template_extension): fileclass}
160 store_obj = factory.getobject(self.path,
161 ignore=self.field.ignore,
162 classes=classes)
163 self._store_tuple = StoreTuple(store_obj, mod_info,
164 self.realpath)
165 self._store_cache[self.path] = self._store_tuple
166
167 def _touch_store_cache(self):
168 """Update stored mod_info without reparsing file."""
169 if hasattr(self, "_store_tuple"):
170 mod_info = self.getpomtime()
171 if self._store_tuple.mod_info != mod_info:
172 self._store_tuple.mod_info = mod_info
173 else:
174 # FIXME: do we really need that?
175 self._update_store_cache()
176
177 def _delete_store_cache(self):
178 """Remove translation store from cache."""
179 try:
180 del self._store_cache[self.path]
181 except KeyError:
182 pass
183
184 try:
185 del self._store_tuple
186 except AttributeError:
187 pass
188
189 def exists(self):
190 return os.path.exists(self.realpath)
191
192 def savestore(self):
193 """Saves to temporary file then moves over original file. This way we
194 avoid the need for locking.
195 """
196 import shutil
197 from pootle.core.utils import ptempfile as tempfile
198 tmpfile, tmpfilename = tempfile.mkstemp(suffix=self.filename)
199 os.close(tmpfile)
200 self.store.savefile(tmpfilename)
201 shutil.move(tmpfilename, self.realpath)
202 self._touch_store_cache()
203
204 def save(self, name, content, save=True):
205 # FIXME: implement save to tmp file then move instead of directly
206 # saving
207 super(TranslationStoreFieldFile, self).save(name, content, save)
208 self._delete_store_cache()
209
210 def delete(self, save=True):
211 self._delete_store_cache()
212 if save:
213 super(TranslationStoreFieldFile, self).delete(save)
214
215
216 class TranslationStoreField(FileField):
217 """This is the field class to represent a FileField in a model that
218 represents a translation store.
219 """
220
221 attr_class = TranslationStoreFieldFile
222
223 def __init__(self, ignore=None, **kwargs):
224 """ignore: postfix to be stripped from filename when trying to
225 determine file format for parsing, useful for .pending files
226 """
227 self.ignore = ignore
228 super(TranslationStoreField, self).__init__(**kwargs)
229
230 def deconstruct(self):
231 name, path, args, kwargs = super(TranslationStoreField,
232 self).deconstruct()
233 if self.ignore is not None:
234 kwargs['ignore'] = self.ignore
235 return name, path, args, kwargs
236
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pootle/apps/pootle_store/fields.py b/pootle/apps/pootle_store/fields.py
--- a/pootle/apps/pootle_store/fields.py
+++ b/pootle/apps/pootle_store/fields.py
@@ -48,10 +48,30 @@
return multistring(value, encoding="UTF-8")
+class CastOnAssignDescriptor(object):
+ """
+ A property descriptor which ensures that `field.to_python()` is called on
+ _every_ assignment to the field. This used to be provided by the
+ `django.db.models.subclassing.Creator` class, which in turn was used by the
+ deprecated-in-Django-1.10 `SubfieldBase` class, hence the reimplementation
+ here.
+ """
+
+ def __init__(self, field):
+ self.field = field
+
+ def __get__(self, obj, type=None):
+ if obj is None:
+ return self
+ return obj.__dict__[self.field.name]
+
+ def __set__(self, obj, value):
+ obj.__dict__[self.field.name] = self.field.to_python(value)
+
+
class MultiStringField(models.Field):
description = \
"a field imitating translate.misc.multistring used for plurals"
- __metaclass__ = models.SubfieldBase
def __init__(self, *args, **kwargs):
super(MultiStringField, self).__init__(*args, **kwargs)
@@ -62,6 +82,9 @@
def to_python(self, value):
return to_python(value)
+ def from_db_value(self, value, expression, connection, context):
+ return to_python(value)
+
def get_prep_value(self, value):
return to_db(value)
@@ -72,6 +95,10 @@
return super(MultiStringField, self).get_prep_lookup(lookup_type,
value)
+ def contribute_to_class(self, cls, name):
+ super(MultiStringField, self).contribute_to_class(cls, name)
+ setattr(cls, name, CastOnAssignDescriptor(self))
+
# # # # # # # # # File # # # # # # # # # # # # # # # #
| {"golden_diff": "diff --git a/pootle/apps/pootle_store/fields.py b/pootle/apps/pootle_store/fields.py\n--- a/pootle/apps/pootle_store/fields.py\n+++ b/pootle/apps/pootle_store/fields.py\n@@ -48,10 +48,30 @@\n return multistring(value, encoding=\"UTF-8\")\n \n \n+class CastOnAssignDescriptor(object):\n+ \"\"\"\n+ A property descriptor which ensures that `field.to_python()` is called on\n+ _every_ assignment to the field. This used to be provided by the\n+ `django.db.models.subclassing.Creator` class, which in turn was used by the\n+ deprecated-in-Django-1.10 `SubfieldBase` class, hence the reimplementation\n+ here.\n+ \"\"\"\n+\n+ def __init__(self, field):\n+ self.field = field\n+\n+ def __get__(self, obj, type=None):\n+ if obj is None:\n+ return self\n+ return obj.__dict__[self.field.name]\n+\n+ def __set__(self, obj, value):\n+ obj.__dict__[self.field.name] = self.field.to_python(value)\n+\n+\n class MultiStringField(models.Field):\n description = \\\n \"a field imitating translate.misc.multistring used for plurals\"\n- __metaclass__ = models.SubfieldBase\n \n def __init__(self, *args, **kwargs):\n super(MultiStringField, self).__init__(*args, **kwargs)\n@@ -62,6 +82,9 @@\n def to_python(self, value):\n return to_python(value)\n \n+ def from_db_value(self, value, expression, connection, context):\n+ return to_python(value)\n+\n def get_prep_value(self, value):\n return to_db(value)\n \n@@ -72,6 +95,10 @@\n return super(MultiStringField, self).get_prep_lookup(lookup_type,\n value)\n \n+ def contribute_to_class(self, cls, name):\n+ super(MultiStringField, self).contribute_to_class(cls, name)\n+ setattr(cls, name, CastOnAssignDescriptor(self))\n+\n \n # # # # # # # # # File # # # # # # # # # # # # # # # #\n", "issue": "Remove SubfieldBase from MultiStringField\nIn django 1.9 SubFieldBase is deprecated, and removed in 1.10\n\nRelated Stackoverflow - http://stackoverflow.com/questions/35166085/how-to-deal-with-subfieldbase-has-been-deprecated-use-field-from-db-value-inst\n\nhttps://docs.djangoproject.com/en/1.9/ref/models/fields/#field-api-reference\n\nafaict we can just safely remove - it seems it just needs to have to/from db methods\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\n\"\"\"Fields required for handling translation files\"\"\"\n\nimport logging\nimport os\n\nfrom translate.misc.multistring import multistring\n\nfrom django.db import models\nfrom django.db.models.fields.files import FieldFile, FileField\n\nfrom pootle.core.utils.multistring import (parse_multistring,\n unparse_multistring)\n\n\n# # # # # # # # # String # # # # # # # # # # # # # # #\n\n\ndef to_db(value):\n \"\"\"Flatten the given value (string, list of plurals or multistring) into\n the database string representation.\n \"\"\"\n if value is None:\n return None\n\n return unparse_multistring(value)\n\n\ndef to_python(value):\n \"\"\"Reconstruct a multistring from the database string representation.\"\"\"\n if not value:\n return multistring(\"\", encoding=\"UTF-8\")\n elif isinstance(value, multistring):\n return value\n elif isinstance(value, basestring):\n return parse_multistring(value)\n elif isinstance(value, dict):\n return multistring([val for __, val in sorted(value.items())],\n encoding=\"UTF-8\")\n else:\n return multistring(value, encoding=\"UTF-8\")\n\n\nclass MultiStringField(models.Field):\n description = \\\n \"a field imitating translate.misc.multistring used for plurals\"\n __metaclass__ = models.SubfieldBase\n\n def __init__(self, *args, **kwargs):\n super(MultiStringField, self).__init__(*args, **kwargs)\n\n def get_internal_type(self):\n return \"TextField\"\n\n def to_python(self, value):\n return to_python(value)\n\n def get_prep_value(self, value):\n return to_db(value)\n\n def get_prep_lookup(self, lookup_type, value):\n if (lookup_type in ('exact', 'iexact') or\n not isinstance(value, basestring)):\n value = self.get_prep_value(value)\n return super(MultiStringField, self).get_prep_lookup(lookup_type,\n value)\n\n\n# # # # # # # # # File # # # # # # # # # # # # # # # #\n\n\nclass StoreTuple(object):\n \"\"\"Encapsulates toolkit stores in the in memory cache, needed\n since LRUCachingDict is based on a weakref.WeakValueDictionary\n which cannot reference normal tuples\n \"\"\"\n\n def __init__(self, store, mod_info, realpath):\n self.store = store\n self.mod_info = mod_info\n self.realpath = realpath\n\n\nclass TranslationStoreFieldFile(FieldFile):\n \"\"\"FieldFile is the file-like object of a FileField, that is found in a\n TranslationStoreField.\n \"\"\"\n\n from translate.misc.lru import LRUCachingDict\n from django.conf import settings\n\n _store_cache = LRUCachingDict(settings.PARSE_POOL_SIZE,\n settings.PARSE_POOL_CULL_FREQUENCY)\n\n def getpomtime(self):\n file_stat = os.stat(self.realpath)\n return file_stat.st_mtime, file_stat.st_size\n\n @property\n def filename(self):\n return os.path.basename(self.name)\n\n def _get_realpath(self):\n \"\"\"Return realpath resolving symlinks if necessary.\"\"\"\n if not hasattr(self, \"_realpath\"):\n # Django's db.models.fields.files.FieldFile raises ValueError if\n # if the file field has no name - and tests \"if self\" to check\n if self:\n self._realpath = os.path.realpath(self.path)\n else:\n self._realpath = ''\n return self._realpath\n\n @property\n def realpath(self):\n \"\"\"Get real path from cache before attempting to check for symlinks.\"\"\"\n if not hasattr(self, \"_store_tuple\"):\n return self._get_realpath()\n else:\n return self._store_tuple.realpath\n\n @property\n def store(self):\n \"\"\"Get translation store from dictionary cache, populate if store not\n already cached.\n \"\"\"\n 
self._update_store_cache()\n return self._store_tuple.store\n\n def _update_store_cache(self):\n \"\"\"Add translation store to dictionary cache, replace old cached\n version if needed.\n \"\"\"\n if self.exists():\n mod_info = self.getpomtime()\n else:\n mod_info = 0\n if (not hasattr(self, \"_store_tuple\") or\n self._store_tuple.mod_info != mod_info):\n try:\n self._store_tuple = self._store_cache[self.path]\n if self._store_tuple.mod_info != mod_info:\n # if file is modified act as if it doesn't exist in cache\n raise KeyError\n except KeyError:\n logging.debug(u\"Cache miss for %s\", self.path)\n from translate.storage import factory\n\n fileclass = self.instance.syncer.file_class\n classes = {\n str(self.instance.filetype.extension): fileclass,\n str(self.instance.filetype.template_extension): fileclass}\n store_obj = factory.getobject(self.path,\n ignore=self.field.ignore,\n classes=classes)\n self._store_tuple = StoreTuple(store_obj, mod_info,\n self.realpath)\n self._store_cache[self.path] = self._store_tuple\n\n def _touch_store_cache(self):\n \"\"\"Update stored mod_info without reparsing file.\"\"\"\n if hasattr(self, \"_store_tuple\"):\n mod_info = self.getpomtime()\n if self._store_tuple.mod_info != mod_info:\n self._store_tuple.mod_info = mod_info\n else:\n # FIXME: do we really need that?\n self._update_store_cache()\n\n def _delete_store_cache(self):\n \"\"\"Remove translation store from cache.\"\"\"\n try:\n del self._store_cache[self.path]\n except KeyError:\n pass\n\n try:\n del self._store_tuple\n except AttributeError:\n pass\n\n def exists(self):\n return os.path.exists(self.realpath)\n\n def savestore(self):\n \"\"\"Saves to temporary file then moves over original file. This way we\n avoid the need for locking.\n \"\"\"\n import shutil\n from pootle.core.utils import ptempfile as tempfile\n tmpfile, tmpfilename = tempfile.mkstemp(suffix=self.filename)\n os.close(tmpfile)\n self.store.savefile(tmpfilename)\n shutil.move(tmpfilename, self.realpath)\n self._touch_store_cache()\n\n def save(self, name, content, save=True):\n # FIXME: implement save to tmp file then move instead of directly\n # saving\n super(TranslationStoreFieldFile, self).save(name, content, save)\n self._delete_store_cache()\n\n def delete(self, save=True):\n self._delete_store_cache()\n if save:\n super(TranslationStoreFieldFile, self).delete(save)\n\n\nclass TranslationStoreField(FileField):\n \"\"\"This is the field class to represent a FileField in a model that\n represents a translation store.\n \"\"\"\n\n attr_class = TranslationStoreFieldFile\n\n def __init__(self, ignore=None, **kwargs):\n \"\"\"ignore: postfix to be stripped from filename when trying to\n determine file format for parsing, useful for .pending files\n \"\"\"\n self.ignore = ignore\n super(TranslationStoreField, self).__init__(**kwargs)\n\n def deconstruct(self):\n name, path, args, kwargs = super(TranslationStoreField,\n self).deconstruct()\n if self.ignore is not None:\n kwargs['ignore'] = self.ignore\n return name, path, args, kwargs\n", "path": "pootle/apps/pootle_store/fields.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\n\"\"\"Fields required for handling translation files\"\"\"\n\nimport logging\nimport os\n\nfrom translate.misc.multistring import multistring\n\nfrom django.db import models\nfrom django.db.models.fields.files import FieldFile, FileField\n\nfrom pootle.core.utils.multistring import (parse_multistring,\n unparse_multistring)\n\n\n# # # # # # # # # String # # # # # # # # # # # # # # #\n\n\ndef to_db(value):\n \"\"\"Flatten the given value (string, list of plurals or multistring) into\n the database string representation.\n \"\"\"\n if value is None:\n return None\n\n return unparse_multistring(value)\n\n\ndef to_python(value):\n \"\"\"Reconstruct a multistring from the database string representation.\"\"\"\n if not value:\n return multistring(\"\", encoding=\"UTF-8\")\n elif isinstance(value, multistring):\n return value\n elif isinstance(value, basestring):\n return parse_multistring(value)\n elif isinstance(value, dict):\n return multistring([val for __, val in sorted(value.items())],\n encoding=\"UTF-8\")\n else:\n return multistring(value, encoding=\"UTF-8\")\n\n\nclass CastOnAssignDescriptor(object):\n \"\"\"\n A property descriptor which ensures that `field.to_python()` is called on\n _every_ assignment to the field. This used to be provided by the\n `django.db.models.subclassing.Creator` class, which in turn was used by the\n deprecated-in-Django-1.10 `SubfieldBase` class, hence the reimplementation\n here.\n \"\"\"\n\n def __init__(self, field):\n self.field = field\n\n def __get__(self, obj, type=None):\n if obj is None:\n return self\n return obj.__dict__[self.field.name]\n\n def __set__(self, obj, value):\n obj.__dict__[self.field.name] = self.field.to_python(value)\n\n\nclass MultiStringField(models.Field):\n description = \\\n \"a field imitating translate.misc.multistring used for plurals\"\n\n def __init__(self, *args, **kwargs):\n super(MultiStringField, self).__init__(*args, **kwargs)\n\n def get_internal_type(self):\n return \"TextField\"\n\n def to_python(self, value):\n return to_python(value)\n\n def from_db_value(self, value, expression, connection, context):\n return to_python(value)\n\n def get_prep_value(self, value):\n return to_db(value)\n\n def get_prep_lookup(self, lookup_type, value):\n if (lookup_type in ('exact', 'iexact') or\n not isinstance(value, basestring)):\n value = self.get_prep_value(value)\n return super(MultiStringField, self).get_prep_lookup(lookup_type,\n value)\n\n def contribute_to_class(self, cls, name):\n super(MultiStringField, self).contribute_to_class(cls, name)\n setattr(cls, name, CastOnAssignDescriptor(self))\n\n\n# # # # # # # # # File # # # # # # # # # # # # # # # #\n\n\nclass StoreTuple(object):\n \"\"\"Encapsulates toolkit stores in the in memory cache, needed\n since LRUCachingDict is based on a weakref.WeakValueDictionary\n which cannot reference normal tuples\n \"\"\"\n\n def __init__(self, store, mod_info, realpath):\n self.store = store\n self.mod_info = mod_info\n self.realpath = realpath\n\n\nclass TranslationStoreFieldFile(FieldFile):\n \"\"\"FieldFile is the file-like object of a FileField, that is found in a\n TranslationStoreField.\n \"\"\"\n\n from translate.misc.lru import LRUCachingDict\n from django.conf import settings\n\n _store_cache = LRUCachingDict(settings.PARSE_POOL_SIZE,\n settings.PARSE_POOL_CULL_FREQUENCY)\n\n def getpomtime(self):\n file_stat = os.stat(self.realpath)\n return 
file_stat.st_mtime, file_stat.st_size\n\n @property\n def filename(self):\n return os.path.basename(self.name)\n\n def _get_realpath(self):\n \"\"\"Return realpath resolving symlinks if necessary.\"\"\"\n if not hasattr(self, \"_realpath\"):\n # Django's db.models.fields.files.FieldFile raises ValueError if\n # if the file field has no name - and tests \"if self\" to check\n if self:\n self._realpath = os.path.realpath(self.path)\n else:\n self._realpath = ''\n return self._realpath\n\n @property\n def realpath(self):\n \"\"\"Get real path from cache before attempting to check for symlinks.\"\"\"\n if not hasattr(self, \"_store_tuple\"):\n return self._get_realpath()\n else:\n return self._store_tuple.realpath\n\n @property\n def store(self):\n \"\"\"Get translation store from dictionary cache, populate if store not\n already cached.\n \"\"\"\n self._update_store_cache()\n return self._store_tuple.store\n\n def _update_store_cache(self):\n \"\"\"Add translation store to dictionary cache, replace old cached\n version if needed.\n \"\"\"\n if self.exists():\n mod_info = self.getpomtime()\n else:\n mod_info = 0\n if (not hasattr(self, \"_store_tuple\") or\n self._store_tuple.mod_info != mod_info):\n try:\n self._store_tuple = self._store_cache[self.path]\n if self._store_tuple.mod_info != mod_info:\n # if file is modified act as if it doesn't exist in cache\n raise KeyError\n except KeyError:\n logging.debug(u\"Cache miss for %s\", self.path)\n from translate.storage import factory\n\n fileclass = self.instance.syncer.file_class\n classes = {\n str(self.instance.filetype.extension): fileclass,\n str(self.instance.filetype.template_extension): fileclass}\n store_obj = factory.getobject(self.path,\n ignore=self.field.ignore,\n classes=classes)\n self._store_tuple = StoreTuple(store_obj, mod_info,\n self.realpath)\n self._store_cache[self.path] = self._store_tuple\n\n def _touch_store_cache(self):\n \"\"\"Update stored mod_info without reparsing file.\"\"\"\n if hasattr(self, \"_store_tuple\"):\n mod_info = self.getpomtime()\n if self._store_tuple.mod_info != mod_info:\n self._store_tuple.mod_info = mod_info\n else:\n # FIXME: do we really need that?\n self._update_store_cache()\n\n def _delete_store_cache(self):\n \"\"\"Remove translation store from cache.\"\"\"\n try:\n del self._store_cache[self.path]\n except KeyError:\n pass\n\n try:\n del self._store_tuple\n except AttributeError:\n pass\n\n def exists(self):\n return os.path.exists(self.realpath)\n\n def savestore(self):\n \"\"\"Saves to temporary file then moves over original file. 
This way we\n avoid the need for locking.\n \"\"\"\n import shutil\n from pootle.core.utils import ptempfile as tempfile\n tmpfile, tmpfilename = tempfile.mkstemp(suffix=self.filename)\n os.close(tmpfile)\n self.store.savefile(tmpfilename)\n shutil.move(tmpfilename, self.realpath)\n self._touch_store_cache()\n\n def save(self, name, content, save=True):\n # FIXME: implement save to tmp file then move instead of directly\n # saving\n super(TranslationStoreFieldFile, self).save(name, content, save)\n self._delete_store_cache()\n\n def delete(self, save=True):\n self._delete_store_cache()\n if save:\n super(TranslationStoreFieldFile, self).delete(save)\n\n\nclass TranslationStoreField(FileField):\n \"\"\"This is the field class to represent a FileField in a model that\n represents a translation store.\n \"\"\"\n\n attr_class = TranslationStoreFieldFile\n\n def __init__(self, ignore=None, **kwargs):\n \"\"\"ignore: postfix to be stripped from filename when trying to\n determine file format for parsing, useful for .pending files\n \"\"\"\n self.ignore = ignore\n super(TranslationStoreField, self).__init__(**kwargs)\n\n def deconstruct(self):\n name, path, args, kwargs = super(TranslationStoreField,\n self).deconstruct()\n if self.ignore is not None:\n kwargs['ignore'] = self.ignore\n return name, path, args, kwargs\n", "path": "pootle/apps/pootle_store/fields.py"}]} | 2,653 | 507 |
gh_patches_debug_14953 | rasdani/github-patches | git_diff | python-poetry__poetry-1909 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Setting credentials through env. variables is not working
<!-- Checked checkbox should look like this: [x] -->
- [ x ] I am on the [latest](https://github.com/python-poetry/poetry/releases/latest) Poetry version.
- [ x ] I have searched the [issues](https://github.com/python-poetry/poetry/issues) of this repo and believe that this is not a duplicate.
- [ x ] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).
<!--
Once those are done, if you're able to fill in the following list with your information,
it'd be very helpful to whoever handles the issue.
-->
- **OS version and name**: *nix
- **Poetry version**: Poetry version 1.0.0
## Issue
<!-- Now feel free to write your issue, but please be descriptive! Thanks again 🙌 ❤️ -->
I'm failing to use env. variables to set custom PyPI credentials.

My pyproject.toml contains private PyPI sources like this:
```toml
[[tool.poetry.source]]
url = "https://XXXXX/nexus/repository/pypi-central/simple"
name = "nexus"
```
I'm running this script:
```sh
export POETRY_HTTP_BASIC_NEXUS_USERNAME=****
export POETRY_HTTP_BASIC_NEXUS_PASSWORD=****
poetry install
```
and it fails with:
```
[EnvCommandError]
Command ['/opt/.cache/pypoetry/virtualenvs/YYYY-4zvP7SOo-py3.8/bin/pip', 'install', '--no-deps', '--index-url', 'https://XXXXX/nexus/repository/pypi-central/simple', '--extra-index-url', 'https://pypi.org/', 'six==1.12.0'] errored with the following return code 2, and output:
Looking in indexes: https://RESOLVED-XXXXX/nexus/repository/pypi-central/simple, https://****:****@XXXXX/nexus/repository/epd-pypi/simple, https://pypi.org/
Collecting six==1.12.0
....
File "/opt/.cache/pypoetry/virtualenvs/YYYY-4zvP7SOo-py3.8/lib/python3.8/site-packages/pip/_internal/download.py", line 386, in handle_401
username, password, save = self._prompt_for_password(parsed.netloc)
File "/opt/.cache/pypoetry/virtualenvs/YYYY-4zvP7SOo-py3.8/lib/python3.8/site-packages/pip/_internal/download.py", line 358, in _prompt_for_password
username = ask_input("User for %s: " % netloc)
File "/opt/.cache/pypoetry/virtualenvs/YYYY-4zvP7SOo-py3.8/lib/python3.8/site-packages/pip/_internal/utils/misc.py", line 281, in ask_input
return input(message)
EOFError: EOF when reading a line
User for XXXXX:
```
I investigated the code and it seems that credentials are never acquired separately from config, but always as a pair. That means the code never asks for `http-basic.nexus.password` and `http-basic.nexus.username`, but for `http-basic.nexus`, and the value is then used as a dict (search for `password_manager.get_http_auth` usage). I could not find a single test case, so I wrote one, feel free to use it:
```diff
diff --git a/tests/config/test_config.py b/tests/config/test_config.py
index 07373ad..72ad236 100644
--- a/tests/config/test_config.py
+++ b/tests/config/test_config.py
@@ -14,3 +14,13 @@ def test_config_get_from_environment_variable(config, environ):
os.environ["POETRY_VIRTUALENVS_CREATE"] = "false"
assert not config.get("virtualenvs.create")
+
+def test_basic_http_credentials_through_env(config, environ):
+ assert config.get("http-basic.test_repo") is None
+
+ os.environ["POETRY_HTTP_BASIC_TEST_REPO_USERNAME"] = "foo"
+ os.environ["POETRY_HTTP_BASIC_TEST_REPO_PASSWORD"] = "bar"
+ credentials = config.get("http-basic.test-repo")
+ assert credentials is not None
+ assert credentials["username"] == "foo"
+ assert credentials["password"] == "bar"
```
--- END ISSUE ---
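The crux of the report is the shape of the lookup: Poetry can resolve a leaf setting such as `http-basic.nexus.username` from an environment variable, but `get_http_auth()` only asks for the parent key `http-basic.nexus` and expects a dict, which no single environment variable can supply. A rough sketch of the name mapping involved (a hypothetical helper for illustration, not Poetry's actual code):

```python
def poetry_env_var_for(setting: str) -> str:
    """Sketch of how a dotted config key maps onto a POETRY_* variable."""
    return "POETRY_" + setting.replace("-", "_").replace(".", "_").upper()


# The leaf keys can be satisfied from the environment...
assert poetry_env_var_for("http-basic.nexus.username") == "POETRY_HTTP_BASIC_NEXUS_USERNAME"
assert poetry_env_var_for("http-basic.nexus.password") == "POETRY_HTTP_BASIC_NEXUS_PASSWORD"
# ...but the parent key "http-basic.nexus" has no dict-valued equivalent, which is
# why get_http_auth() comes back empty and pip falls back to prompting for a user.
```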
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `poetry/utils/password_manager.py`
Content:
```
1 import logging
2
3
4 logger = logging.getLogger(__name__)
5
6
7 class PasswordManagerError(Exception):
8
9 pass
10
11
12 class KeyRingError(Exception):
13
14 pass
15
16
17 class KeyRing:
18 def __init__(self, namespace):
19 self._namespace = namespace
20 self._is_available = True
21
22 self._check()
23
24 def is_available(self):
25 return self._is_available
26
27 def get_password(self, name, username):
28 if not self.is_available():
29 return
30
31 import keyring
32 import keyring.errors
33
34 name = self.get_entry_name(name)
35
36 try:
37 return keyring.get_password(name, username)
38 except (RuntimeError, keyring.errors.KeyringError):
39 raise KeyRingError(
40 "Unable to retrieve the password for {} from the key ring".format(name)
41 )
42
43 def set_password(self, name, username, password):
44 if not self.is_available():
45 return
46
47 import keyring
48 import keyring.errors
49
50 name = self.get_entry_name(name)
51
52 try:
53 keyring.set_password(name, username, password)
54 except (RuntimeError, keyring.errors.KeyringError) as e:
55 raise KeyRingError(
56 "Unable to store the password for {} in the key ring: {}".format(
57 name, str(e)
58 )
59 )
60
61 def delete_password(self, name, username):
62 if not self.is_available():
63 return
64
65 import keyring
66 import keyring.errors
67
68 name = self.get_entry_name(name)
69
70 try:
71 keyring.delete_password(name, username)
72 except (RuntimeError, keyring.errors.KeyringError):
73 raise KeyRingError(
74 "Unable to delete the password for {} from the key ring".format(name)
75 )
76
77 def get_entry_name(self, name):
78 return "{}-{}".format(self._namespace, name)
79
80 def _check(self):
81 try:
82 import keyring
83 except Exception as e:
84 logger.debug("An error occurred while importing keyring: {}".format(str(e)))
85 self._is_available = False
86
87 return
88
89 backend = keyring.get_keyring()
90 name = backend.name.split(" ")[0]
91 if name == "fail":
92 logger.debug("No suitable keyring backend found")
93 self._is_available = False
94 elif "plaintext" in backend.name.lower():
95 logger.debug("Only a plaintext keyring backend is available. Not using it.")
96 self._is_available = False
97 elif name == "chainer":
98 try:
99 import keyring.backend
100
101 backends = keyring.backend.get_all_keyring()
102
103 self._is_available = any(
104 [
105 b.name.split(" ")[0] not in ["chainer", "fail"]
106 and "plaintext" not in b.name.lower()
107 for b in backends
108 ]
109 )
110 except Exception:
111 self._is_available = False
112
113 if not self._is_available:
114 logger.warning("No suitable keyring backends were found")
115
116
117 class PasswordManager:
118 def __init__(self, config):
119 self._config = config
120 self._keyring = None
121
122 @property
123 def keyring(self):
124 if self._keyring is None:
125 self._keyring = KeyRing("poetry-repository")
126 if not self._keyring.is_available():
127 logger.warning(
128 "Using a plaintext file to store and retrieve credentials"
129 )
130
131 return self._keyring
132
133 def set_pypi_token(self, name, token):
134 if not self.keyring.is_available():
135 self._config.auth_config_source.add_property(
136 "pypi-token.{}".format(name), token
137 )
138 else:
139 self.keyring.set_password(name, "__token__", token)
140
141 def get_pypi_token(self, name):
142 if not self.keyring.is_available():
143 return self._config.get("pypi-token.{}".format(name))
144
145 return self.keyring.get_password(name, "__token__")
146
147 def delete_pypi_token(self, name):
148 if not self.keyring.is_available():
149 return self._config.auth_config_source.remove_property(
150 "pypi-token.{}".format(name)
151 )
152
153 self.keyring.delete_password(name, "__token__")
154
155 def get_http_auth(self, name):
156 auth = self._config.get("http-basic.{}".format(name))
157 if not auth:
158 return None
159
160 username, password = auth["username"], auth.get("password")
161 if password is None:
162 password = self.keyring.get_password(name, username)
163
164 return {
165 "username": username,
166 "password": password,
167 }
168
169 def set_http_password(self, name, username, password):
170 auth = {"username": username}
171
172 if not self.keyring.is_available():
173 auth["password"] = password
174 else:
175 self.keyring.set_password(name, username, password)
176
177 self._config.auth_config_source.add_property("http-basic.{}".format(name), auth)
178
179 def delete_http_password(self, name):
180 auth = self.get_http_auth(name)
181 if not auth or "username" not in auth:
182 return
183
184 try:
185 self.keyring.delete_password(name, auth["username"])
186 except KeyRingError:
187 pass
188
189 self._config.auth_config_source.remove_property("http-basic.{}".format(name))
190
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/poetry/utils/password_manager.py b/poetry/utils/password_manager.py
--- a/poetry/utils/password_manager.py
+++ b/poetry/utils/password_manager.py
@@ -155,11 +155,14 @@
def get_http_auth(self, name):
auth = self._config.get("http-basic.{}".format(name))
if not auth:
- return None
-
- username, password = auth["username"], auth.get("password")
- if password is None:
- password = self.keyring.get_password(name, username)
+ username = self._config.get("http-basic.{}.username".format(name))
+ password = self._config.get("http-basic.{}.password".format(name))
+ if not username and not password:
+ return None
+ else:
+ username, password = auth["username"], auth.get("password")
+ if password is None:
+ password = self.keyring.get_password(name, username)
return {
"username": username,
| {"golden_diff": "diff --git a/poetry/utils/password_manager.py b/poetry/utils/password_manager.py\n--- a/poetry/utils/password_manager.py\n+++ b/poetry/utils/password_manager.py\n@@ -155,11 +155,14 @@\n def get_http_auth(self, name):\n auth = self._config.get(\"http-basic.{}\".format(name))\n if not auth:\n- return None\n-\n- username, password = auth[\"username\"], auth.get(\"password\")\n- if password is None:\n- password = self.keyring.get_password(name, username)\n+ username = self._config.get(\"http-basic.{}.username\".format(name))\n+ password = self._config.get(\"http-basic.{}.password\".format(name))\n+ if not username and not password:\n+ return None\n+ else:\n+ username, password = auth[\"username\"], auth.get(\"password\")\n+ if password is None:\n+ password = self.keyring.get_password(name, username)\n \n return {\n \"username\": username,\n", "issue": "Setting credentials through env. variable are not working\n<!-- Checked checkbox should look like this: [x] -->\r\n- [ x ] I am on the [latest](https://github.com/python-poetry/poetry/releases/latest) Poetry version.\r\n- [ x ] I have searched the [issues](https://github.com/python-poetry/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [ x ] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).\r\n\r\n<!--\r\n Once those are done, if you're able to fill in the following list with your information,\r\n it'd be very helpful to whoever handles the issue.\r\n-->\r\n\r\n- **OS version and name**: *nix\r\n- **Poetry version**: Poetry version 1.0.0\r\n\r\n## Issue\r\n<!-- Now feel free to write your issue, but please be descriptive! Thanks again \ud83d\ude4c \u2764\ufe0f -->\r\nI'm failing to use env. variables to set custom pypi credentials. \r\n\r\nMy pyproject.toml contains private pypi's like this:\r\n```toml\r\n[[tool.poetry.source]]\r\nurl = \"https://XXXXX/nexus/repository/pypi-central/simple\"\r\nname = \"nexus\"\r\n```\r\nI'm running this script:\r\n```sh\r\nexport POETRY_HTTP_BASIC_NEXUS_USERNAME=****\r\nexport POETRY_HTTP_BASIC_NEXUS_PASSWORD=****\r\npoetry install\r\n```\r\nand it fails with:\r\n```\r\n[EnvCommandError]\r\nCommand ['/opt/.cache/pypoetry/virtualenvs/YYYY-4zvP7SOo-py3.8/bin/pip', 'install', '--no-deps', '--index-url', 'https://XXXXX/nexus/repository/pypi-central/simple', '--extra-index-url', 'https://pypi.org/', 'six==1.12.0'] errored with the following return code 2, and output: \r\nLooking in indexes: https://RESOLVED-XXXXX/nexus/repository/pypi-central/simple, https://****:****@XXXXX/nexus/repository/epd-pypi/simple, https://pypi.org/\r\nCollecting six==1.12.0\r\n\r\n....\r\n\r\nFile \"/opt/.cache/pypoetry/virtualenvs/YYYY-4zvP7SOo-py3.8/lib/python3.8/site-packages/pip/_internal/download.py\", line 386, in handle_401\r\n username, password, save = self._prompt_for_password(parsed.netloc)\r\n File \"/opt/.cache/pypoetry/virtualenvs/YYYY-4zvP7SOo-py3.8/lib/python3.8/site-packages/pip/_internal/download.py\", line 358, in _prompt_for_password\r\n username = ask_input(\"User for %s: \" % netloc)\r\n File \"/opt/.cache/pypoetry/virtualenvs/YYYY-4zvP7SOo-py3.8/lib/python3.8/site-packages/pip/_internal/utils/misc.py\", line 281, in ask_input\r\n return input(message)\r\nEOFError: EOF when reading a line\r\nUser for XXXXX: \r\n```\r\n\r\nI investigated the code and it seems that credentials are never acquired separately from config, but always as a pair. 
That means that code never ask for `http-basic.nexus.password` and `http-basic.nexus.username`, but for `http-basic.nexus` then the value is used as a dict (search for `password_manager.get_http_auth` usage). I could not find single test case, so I wrote one, fill free to use it:\r\n```diff\r\ndiff --git a/tests/config/test_config.py b/tests/config/test_config.py\r\nindex 07373ad..72ad236 100644\r\n--- a/tests/config/test_config.py\r\n+++ b/tests/config/test_config.py\r\n@@ -14,3 +14,13 @@ def test_config_get_from_environment_variable(config, environ):\r\n\r\n os.environ[\"POETRY_VIRTUALENVS_CREATE\"] = \"false\"\r\n assert not config.get(\"virtualenvs.create\")\r\n+\r\n+def test_basic_http_credentials_through_env(config, environ):\r\n+ assert config.get(\"http-basic.test_repo\") is None\r\n+\r\n+ os.environ[\"POETRY_HTTP_BASIC_TEST_REPO_USERNAME\"] = \"foo\"\r\n+ os.environ[\"POETRY_HTTP_BASIC_TEST_REPO_PASSWORD\"] = \"bar\"\r\n+ credentials = config.get(\"http-basic.test-repo\")\r\n+ assert credentials is not None\r\n+ assert credentials[\"username\"] == \"foo\"\r\n+ assert credentials[\"password\"] == \"bar\"\r\n```\n", "before_files": [{"content": "import logging\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass PasswordManagerError(Exception):\n\n pass\n\n\nclass KeyRingError(Exception):\n\n pass\n\n\nclass KeyRing:\n def __init__(self, namespace):\n self._namespace = namespace\n self._is_available = True\n\n self._check()\n\n def is_available(self):\n return self._is_available\n\n def get_password(self, name, username):\n if not self.is_available():\n return\n\n import keyring\n import keyring.errors\n\n name = self.get_entry_name(name)\n\n try:\n return keyring.get_password(name, username)\n except (RuntimeError, keyring.errors.KeyringError):\n raise KeyRingError(\n \"Unable to retrieve the password for {} from the key ring\".format(name)\n )\n\n def set_password(self, name, username, password):\n if not self.is_available():\n return\n\n import keyring\n import keyring.errors\n\n name = self.get_entry_name(name)\n\n try:\n keyring.set_password(name, username, password)\n except (RuntimeError, keyring.errors.KeyringError) as e:\n raise KeyRingError(\n \"Unable to store the password for {} in the key ring: {}\".format(\n name, str(e)\n )\n )\n\n def delete_password(self, name, username):\n if not self.is_available():\n return\n\n import keyring\n import keyring.errors\n\n name = self.get_entry_name(name)\n\n try:\n keyring.delete_password(name, username)\n except (RuntimeError, keyring.errors.KeyringError):\n raise KeyRingError(\n \"Unable to delete the password for {} from the key ring\".format(name)\n )\n\n def get_entry_name(self, name):\n return \"{}-{}\".format(self._namespace, name)\n\n def _check(self):\n try:\n import keyring\n except Exception as e:\n logger.debug(\"An error occurred while importing keyring: {}\".format(str(e)))\n self._is_available = False\n\n return\n\n backend = keyring.get_keyring()\n name = backend.name.split(\" \")[0]\n if name == \"fail\":\n logger.debug(\"No suitable keyring backend found\")\n self._is_available = False\n elif \"plaintext\" in backend.name.lower():\n logger.debug(\"Only a plaintext keyring backend is available. 
Not using it.\")\n self._is_available = False\n elif name == \"chainer\":\n try:\n import keyring.backend\n\n backends = keyring.backend.get_all_keyring()\n\n self._is_available = any(\n [\n b.name.split(\" \")[0] not in [\"chainer\", \"fail\"]\n and \"plaintext\" not in b.name.lower()\n for b in backends\n ]\n )\n except Exception:\n self._is_available = False\n\n if not self._is_available:\n logger.warning(\"No suitable keyring backends were found\")\n\n\nclass PasswordManager:\n def __init__(self, config):\n self._config = config\n self._keyring = None\n\n @property\n def keyring(self):\n if self._keyring is None:\n self._keyring = KeyRing(\"poetry-repository\")\n if not self._keyring.is_available():\n logger.warning(\n \"Using a plaintext file to store and retrieve credentials\"\n )\n\n return self._keyring\n\n def set_pypi_token(self, name, token):\n if not self.keyring.is_available():\n self._config.auth_config_source.add_property(\n \"pypi-token.{}\".format(name), token\n )\n else:\n self.keyring.set_password(name, \"__token__\", token)\n\n def get_pypi_token(self, name):\n if not self.keyring.is_available():\n return self._config.get(\"pypi-token.{}\".format(name))\n\n return self.keyring.get_password(name, \"__token__\")\n\n def delete_pypi_token(self, name):\n if not self.keyring.is_available():\n return self._config.auth_config_source.remove_property(\n \"pypi-token.{}\".format(name)\n )\n\n self.keyring.delete_password(name, \"__token__\")\n\n def get_http_auth(self, name):\n auth = self._config.get(\"http-basic.{}\".format(name))\n if not auth:\n return None\n\n username, password = auth[\"username\"], auth.get(\"password\")\n if password is None:\n password = self.keyring.get_password(name, username)\n\n return {\n \"username\": username,\n \"password\": password,\n }\n\n def set_http_password(self, name, username, password):\n auth = {\"username\": username}\n\n if not self.keyring.is_available():\n auth[\"password\"] = password\n else:\n self.keyring.set_password(name, username, password)\n\n self._config.auth_config_source.add_property(\"http-basic.{}\".format(name), auth)\n\n def delete_http_password(self, name):\n auth = self.get_http_auth(name)\n if not auth or \"username\" not in auth:\n return\n\n try:\n self.keyring.delete_password(name, auth[\"username\"])\n except KeyRingError:\n pass\n\n self._config.auth_config_source.remove_property(\"http-basic.{}\".format(name))\n", "path": "poetry/utils/password_manager.py"}], "after_files": [{"content": "import logging\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass PasswordManagerError(Exception):\n\n pass\n\n\nclass KeyRingError(Exception):\n\n pass\n\n\nclass KeyRing:\n def __init__(self, namespace):\n self._namespace = namespace\n self._is_available = True\n\n self._check()\n\n def is_available(self):\n return self._is_available\n\n def get_password(self, name, username):\n if not self.is_available():\n return\n\n import keyring\n import keyring.errors\n\n name = self.get_entry_name(name)\n\n try:\n return keyring.get_password(name, username)\n except (RuntimeError, keyring.errors.KeyringError):\n raise KeyRingError(\n \"Unable to retrieve the password for {} from the key ring\".format(name)\n )\n\n def set_password(self, name, username, password):\n if not self.is_available():\n return\n\n import keyring\n import keyring.errors\n\n name = self.get_entry_name(name)\n\n try:\n keyring.set_password(name, username, password)\n except (RuntimeError, keyring.errors.KeyringError) as e:\n raise KeyRingError(\n \"Unable to 
store the password for {} in the key ring: {}\".format(\n name, str(e)\n )\n )\n\n def delete_password(self, name, username):\n if not self.is_available():\n return\n\n import keyring\n import keyring.errors\n\n name = self.get_entry_name(name)\n\n try:\n keyring.delete_password(name, username)\n except (RuntimeError, keyring.errors.KeyringError):\n raise KeyRingError(\n \"Unable to delete the password for {} from the key ring\".format(name)\n )\n\n def get_entry_name(self, name):\n return \"{}-{}\".format(self._namespace, name)\n\n def _check(self):\n try:\n import keyring\n except Exception as e:\n logger.debug(\"An error occurred while importing keyring: {}\".format(str(e)))\n self._is_available = False\n\n return\n\n backend = keyring.get_keyring()\n name = backend.name.split(\" \")[0]\n if name == \"fail\":\n logger.debug(\"No suitable keyring backend found\")\n self._is_available = False\n elif \"plaintext\" in backend.name.lower():\n logger.debug(\"Only a plaintext keyring backend is available. Not using it.\")\n self._is_available = False\n elif name == \"chainer\":\n try:\n import keyring.backend\n\n backends = keyring.backend.get_all_keyring()\n\n self._is_available = any(\n [\n b.name.split(\" \")[0] not in [\"chainer\", \"fail\"]\n and \"plaintext\" not in b.name.lower()\n for b in backends\n ]\n )\n except Exception:\n self._is_available = False\n\n if not self._is_available:\n logger.warning(\"No suitable keyring backends were found\")\n\n\nclass PasswordManager:\n def __init__(self, config):\n self._config = config\n self._keyring = None\n\n @property\n def keyring(self):\n if self._keyring is None:\n self._keyring = KeyRing(\"poetry-repository\")\n if not self._keyring.is_available():\n logger.warning(\n \"Using a plaintext file to store and retrieve credentials\"\n )\n\n return self._keyring\n\n def set_pypi_token(self, name, token):\n if not self.keyring.is_available():\n self._config.auth_config_source.add_property(\n \"pypi-token.{}\".format(name), token\n )\n else:\n self.keyring.set_password(name, \"__token__\", token)\n\n def get_pypi_token(self, name):\n if not self.keyring.is_available():\n return self._config.get(\"pypi-token.{}\".format(name))\n\n return self.keyring.get_password(name, \"__token__\")\n\n def delete_pypi_token(self, name):\n if not self.keyring.is_available():\n return self._config.auth_config_source.remove_property(\n \"pypi-token.{}\".format(name)\n )\n\n self.keyring.delete_password(name, \"__token__\")\n\n def get_http_auth(self, name):\n auth = self._config.get(\"http-basic.{}\".format(name))\n if not auth:\n username = self._config.get(\"http-basic.{}.username\".format(name))\n password = self._config.get(\"http-basic.{}.password\".format(name))\n if not username and not password:\n return None\n else:\n username, password = auth[\"username\"], auth.get(\"password\")\n if password is None:\n password = self.keyring.get_password(name, username)\n\n return {\n \"username\": username,\n \"password\": password,\n }\n\n def set_http_password(self, name, username, password):\n auth = {\"username\": username}\n\n if not self.keyring.is_available():\n auth[\"password\"] = password\n else:\n self.keyring.set_password(name, username, password)\n\n self._config.auth_config_source.add_property(\"http-basic.{}\".format(name), auth)\n\n def delete_http_password(self, name):\n auth = self.get_http_auth(name)\n if not auth or \"username\" not in auth:\n return\n\n try:\n self.keyring.delete_password(name, auth[\"username\"])\n except KeyRingError:\n 
pass\n\n self._config.auth_config_source.remove_property(\"http-basic.{}\".format(name))\n", "path": "poetry/utils/password_manager.py"}]} | 2,865 | 225 |
gh_patches_debug_24821 | rasdani/github-patches | git_diff | nilearn__nilearn-3077 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
nilearn.plotting overrides the matplotlib backend, causing troubles on remote SSH development
I am working on a remote settings, accessing a machine (drago/margaret for INRIA context) through a PyCharm Remote Interpreter.
I have been struggling to display nilearn figures in the scientific view of PyCharm, leading to a painful process of saving the image => rsync figures dir => visualize figures once there are on personal laptop.
I did a simple experiment drawing first a simple `plot(list(range(10)))` which is displayed and then a nilearn figure which does not show up (script at the end of the issue).
I found out the reason for this, which is that in `nilearn.plotting.__init__`, the matplotlib backend is set to `agg` if not in `[inline, agg]`, but the backend used initially (which is working for me) is `module://backend_interagg`.
Setting the backend to the initial value after importing `nilearn.datasets` fixes the problem.
I don't know exactly how those matplotlib backends work, if the list of accepted backends in `nilearn.datasets.__init__` could be extended or if we could have a more robust list of valid matplotlib backend. Also, I feel that nilearn shouldn't override matplotlib backend silently?
<!--Please fill in the following information, to the best of your ability.-->
Nilearn version: 0.7.1
### Expected behavior
Nilearn does not override matplotlib backend silently.
### Actual behavior
Nilearn sets backend to "agg" if not in ["inline", "agg"], causing trouble with remote SSH development.
### Steps and code to reproduce bug
Run the following script through SSH remote interpreter
```python
import matplotlib
initial_backend = matplotlib.get_backend().lower()
print(initial_backend)
import matplotlib.pyplot as plt
import numpy as np
arr = np.zeros((100, 100))
plt.figure()
plt.plot(list(range(10)))
plt.show()
# >> The show is displayed in PyCharm
from nilearn import datasets, plotting
print(matplotlib.get_backend().lower())
# >> Backend has switched to "agg"
sample_brain_map = datasets.fetch_neurovault_motor_task().images[0]
plotting.plot_stat_map(
sample_brain_map,
threshold=3,
title="Before setting back the backend",
)
plt.show()
# >> Does not show up
matplotlib.use(initial_backend)
plotting.plot_stat_map(
sample_brain_map,
threshold=3,
title="After setting back the backend",
)
plt.show()
# >> Shows up
```
--- END ISSUE ---
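A small sketch of the workaround described in the issue, assuming the backend picked by the environment is usable and only needs to be restored after the nilearn import:

```python
import matplotlib

# Remember the backend chosen by the environment (e.g. PyCharm's interagg).
initial_backend = matplotlib.get_backend()

from nilearn import plotting  # noqa: E402  (may silently switch the backend to "agg")

# Restore the original backend so figures show up again.
matplotlib.use(initial_backend)
```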
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nilearn/plotting/__init__.py`
Content:
```
1 """
2 Plotting code for nilearn
3 """
4 # Original Authors: Chris Filo Gorgolewski, Gael Varoquaux
5 import os
6 import sys
7 import importlib
8
9
10 ###############################################################################
11 # Make sure that we don't get DISPLAY problems when running without X on
12 # unices
13 def _set_mpl_backend():
14 # We are doing local imports here to avoid polluting our namespace
15 try:
16 import matplotlib
17 except ImportError:
18 if importlib.util.find_spec("pytest") is not None:
19 from .._utils.testing import skip_if_running_tests
20 # No need to fail when running tests
21 skip_if_running_tests('matplotlib not installed')
22 raise
23 else:
24 from ..version import (_import_module_with_version_check,
25 OPTIONAL_MATPLOTLIB_MIN_VERSION)
26 # When matplotlib was successfully imported we need to check
27 # that the version is greater that the minimum required one
28 _import_module_with_version_check('matplotlib',
29 OPTIONAL_MATPLOTLIB_MIN_VERSION)
30 current_backend = matplotlib.get_backend().lower()
31
32 if 'inline' in current_backend or 'nbagg' in current_backend:
33 return
34 # Set the backend to a non-interactive one for unices without X
35 # (see gh-2560)
36 if (sys.platform not in ('darwin', 'win32') and
37 'DISPLAY' not in os.environ):
38 matplotlib.use('Agg')
39
40
41 _set_mpl_backend()
42
43 ###############################################################################
44 from . import cm
45 from .img_plotting import (
46 plot_img, plot_anat, plot_epi, plot_roi, plot_stat_map,
47 plot_glass_brain, plot_connectome, plot_connectome_strength,
48 plot_markers, plot_prob_atlas, plot_carpet, plot_img_comparison, show)
49 from .find_cuts import find_xyz_cut_coords, find_cut_slices, \
50 find_parcellation_cut_coords, find_probabilistic_atlas_cut_coords
51 from .matrix_plotting import (plot_matrix, plot_contrast_matrix,
52 plot_design_matrix, plot_event)
53 from .html_surface import view_surf, view_img_on_surf
54 from .html_stat_map import view_img
55 from .html_connectome import view_connectome, view_markers
56 from .surf_plotting import (plot_surf, plot_surf_stat_map, plot_surf_roi,
57 plot_img_on_surf, plot_surf_contours)
58
59 __all__ = ['cm', 'plot_img', 'plot_anat', 'plot_epi',
60 'plot_roi', 'plot_stat_map', 'plot_glass_brain',
61 'plot_markers', 'plot_connectome', 'plot_prob_atlas',
62 'find_xyz_cut_coords', 'find_cut_slices',
63 'plot_img_comparison',
64 'show', 'plot_matrix',
65 'plot_design_matrix', 'plot_contrast_matrix', 'plot_event',
66 'view_surf', 'view_img_on_surf',
67 'view_img', 'view_connectome', 'view_markers',
68 'find_parcellation_cut_coords',
69 'find_probabilistic_atlas_cut_coords',
70 'plot_surf', 'plot_surf_stat_map', 'plot_surf_roi',
71 'plot_img_on_surf', 'plot_connectome_strength', 'plot_carpet',
72 'plot_surf_contours']
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nilearn/plotting/__init__.py b/nilearn/plotting/__init__.py
--- a/nilearn/plotting/__init__.py
+++ b/nilearn/plotting/__init__.py
@@ -2,9 +2,8 @@
Plotting code for nilearn
"""
# Original Authors: Chris Filo Gorgolewski, Gael Varoquaux
-import os
-import sys
import importlib
+import warnings
###############################################################################
@@ -29,13 +28,17 @@
OPTIONAL_MATPLOTLIB_MIN_VERSION)
current_backend = matplotlib.get_backend().lower()
- if 'inline' in current_backend or 'nbagg' in current_backend:
- return
- # Set the backend to a non-interactive one for unices without X
- # (see gh-2560)
- if (sys.platform not in ('darwin', 'win32') and
- 'DISPLAY' not in os.environ):
- matplotlib.use('Agg')
+ try:
+ # Making sure the current backend is usable by matplotlib
+ matplotlib.use(current_backend)
+ except Exception:
+ # If not, switching to default agg backend
+ matplotlib.use("Agg")
+ new_backend = matplotlib.get_backend().lower()
+
+ if new_backend != current_backend:
+ # Matplotlib backend has been changed, let's warn the user
+ warnings.warn(f"Backend changed to {new_backend}...")
_set_mpl_backend()
| {"golden_diff": "diff --git a/nilearn/plotting/__init__.py b/nilearn/plotting/__init__.py\n--- a/nilearn/plotting/__init__.py\n+++ b/nilearn/plotting/__init__.py\n@@ -2,9 +2,8 @@\n Plotting code for nilearn\n \"\"\"\n # Original Authors: Chris Filo Gorgolewski, Gael Varoquaux\n-import os\n-import sys\n import importlib\n+import warnings\n \n \n ###############################################################################\n@@ -29,13 +28,17 @@\n OPTIONAL_MATPLOTLIB_MIN_VERSION)\n current_backend = matplotlib.get_backend().lower()\n \n- if 'inline' in current_backend or 'nbagg' in current_backend:\n- return\n- # Set the backend to a non-interactive one for unices without X\n- # (see gh-2560)\n- if (sys.platform not in ('darwin', 'win32') and\n- 'DISPLAY' not in os.environ):\n- matplotlib.use('Agg')\n+ try:\n+ # Making sure the current backend is usable by matplotlib\n+ matplotlib.use(current_backend)\n+ except Exception:\n+ # If not, switching to default agg backend\n+ matplotlib.use(\"Agg\")\n+ new_backend = matplotlib.get_backend().lower()\n+\n+ if new_backend != current_backend:\n+ # Matplotlib backend has been changed, let's warn the user\n+ warnings.warn(f\"Backend changed to {new_backend}...\")\n \n \n _set_mpl_backend()\n", "issue": "nilearn.plotting overrides the matplotlib backend, causing troubles on remote SSH development\nI am working on a remote settings, accessing a machine (drago/margaret for INRIA context) through a PyCharm Remote Interpreter.\r\nI have been struggling to display nilearn figures in the scientific view of PyCharm, leading to a painful process of saving the image => rsync figures dir => visualize figures once there are on personal laptop.\r\n\r\nI did a simple experiment drawing first a simple `plot(list(range(10)))` which is displayed and then a nilearn figure which does not show up (script at the end of the issue).\r\n\r\nI found out the reason for this, which is that in `nilearn.plotting.__init__`, the matplotlib backend is set to `agg` if not in `[inline, agg]`, but the backend used initially (which is working for me) is `module://backend_interagg`.\r\n\r\nSetting the backend to the initial value after importing `nilearn.datasets` fixes the problem.\r\n\r\nI don't know exactly how those matplotlib backends work, if the list of accepted backends in `nilearn.datasets.__init__` could be extended or if we could have a more robust list of valid matplotlib backend. Also, I feel that nilearn shouldn't override matplotlib backend silently? 
\r\n\r\n<!--Please fill in the following information, to the best of your ability.-->\r\nNilearn version: 0.7.1\r\n\r\n### Expected behavior\r\n\r\nNilearn does not override matplotlib backend silently.\r\n\r\n### Actual behavior\r\n\r\nNilearn sets backend to \"agg\" if not in [\"inline\", \"agg\"], causing trouble with remote SSH development.\r\n\r\n### Steps and code to reproduce bug\r\n\r\nRun the following script through SSH remote interpreter\r\n\r\n```python\r\nimport matplotlib\r\ninitial_backend = matplotlib.get_backend().lower()\r\nprint(initial_backend)\r\n\r\nimport matplotlib.pyplot as plt\r\nimport numpy as np\r\n\r\narr = np.zeros((100, 100))\r\nplt.figure()\r\nplt.plot(list(range(10)))\r\nplt.show()\r\n# >> The show is displayed in PyCharm\r\n\r\nfrom nilearn import datasets, plotting\r\nprint(matplotlib.get_backend().lower())\r\n# >> Backend has switched to \"agg\"\r\n\r\nsample_brain_map = datasets.fetch_neurovault_motor_task().images[0]\r\nplotting.plot_stat_map(\r\n sample_brain_map,\r\n threshold=3,\r\n title=\"Before setting back the backend\",\r\n)\r\n\r\nplt.show()\r\n# >> Does not show up\r\n\r\nmatplotlib.use(initial_backend)\r\nplotting.plot_stat_map(\r\n sample_brain_map,\r\n threshold=3,\r\n title=\"After setting back the backend\",\r\n)\r\n\r\nplt.show()\r\n# >> Shows up\r\n\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\nPlotting code for nilearn\n\"\"\"\n# Original Authors: Chris Filo Gorgolewski, Gael Varoquaux\nimport os\nimport sys\nimport importlib\n\n\n###############################################################################\n# Make sure that we don't get DISPLAY problems when running without X on\n# unices\ndef _set_mpl_backend():\n # We are doing local imports here to avoid polluting our namespace\n try:\n import matplotlib\n except ImportError:\n if importlib.util.find_spec(\"pytest\") is not None:\n from .._utils.testing import skip_if_running_tests\n # No need to fail when running tests\n skip_if_running_tests('matplotlib not installed')\n raise\n else:\n from ..version import (_import_module_with_version_check,\n OPTIONAL_MATPLOTLIB_MIN_VERSION)\n # When matplotlib was successfully imported we need to check\n # that the version is greater that the minimum required one\n _import_module_with_version_check('matplotlib',\n OPTIONAL_MATPLOTLIB_MIN_VERSION)\n current_backend = matplotlib.get_backend().lower()\n\n if 'inline' in current_backend or 'nbagg' in current_backend:\n return\n # Set the backend to a non-interactive one for unices without X\n # (see gh-2560)\n if (sys.platform not in ('darwin', 'win32') and\n 'DISPLAY' not in os.environ):\n matplotlib.use('Agg')\n\n\n_set_mpl_backend()\n\n###############################################################################\nfrom . 
import cm\nfrom .img_plotting import (\n plot_img, plot_anat, plot_epi, plot_roi, plot_stat_map,\n plot_glass_brain, plot_connectome, plot_connectome_strength,\n plot_markers, plot_prob_atlas, plot_carpet, plot_img_comparison, show)\nfrom .find_cuts import find_xyz_cut_coords, find_cut_slices, \\\n find_parcellation_cut_coords, find_probabilistic_atlas_cut_coords\nfrom .matrix_plotting import (plot_matrix, plot_contrast_matrix,\n plot_design_matrix, plot_event)\nfrom .html_surface import view_surf, view_img_on_surf\nfrom .html_stat_map import view_img\nfrom .html_connectome import view_connectome, view_markers\nfrom .surf_plotting import (plot_surf, plot_surf_stat_map, plot_surf_roi,\n plot_img_on_surf, plot_surf_contours)\n\n__all__ = ['cm', 'plot_img', 'plot_anat', 'plot_epi',\n 'plot_roi', 'plot_stat_map', 'plot_glass_brain',\n 'plot_markers', 'plot_connectome', 'plot_prob_atlas',\n 'find_xyz_cut_coords', 'find_cut_slices',\n 'plot_img_comparison',\n 'show', 'plot_matrix',\n 'plot_design_matrix', 'plot_contrast_matrix', 'plot_event',\n 'view_surf', 'view_img_on_surf',\n 'view_img', 'view_connectome', 'view_markers',\n 'find_parcellation_cut_coords',\n 'find_probabilistic_atlas_cut_coords',\n 'plot_surf', 'plot_surf_stat_map', 'plot_surf_roi',\n 'plot_img_on_surf', 'plot_connectome_strength', 'plot_carpet',\n 'plot_surf_contours']\n", "path": "nilearn/plotting/__init__.py"}], "after_files": [{"content": "\"\"\"\nPlotting code for nilearn\n\"\"\"\n# Original Authors: Chris Filo Gorgolewski, Gael Varoquaux\nimport importlib\nimport warnings\n\n\n###############################################################################\n# Make sure that we don't get DISPLAY problems when running without X on\n# unices\ndef _set_mpl_backend():\n # We are doing local imports here to avoid polluting our namespace\n try:\n import matplotlib\n except ImportError:\n if importlib.util.find_spec(\"pytest\") is not None:\n from .._utils.testing import skip_if_running_tests\n # No need to fail when running tests\n skip_if_running_tests('matplotlib not installed')\n raise\n else:\n from ..version import (_import_module_with_version_check,\n OPTIONAL_MATPLOTLIB_MIN_VERSION)\n # When matplotlib was successfully imported we need to check\n # that the version is greater that the minimum required one\n _import_module_with_version_check('matplotlib',\n OPTIONAL_MATPLOTLIB_MIN_VERSION)\n current_backend = matplotlib.get_backend().lower()\n\n try:\n # Making sure the current backend is usable by matplotlib\n matplotlib.use(current_backend)\n except Exception:\n # If not, switching to default agg backend\n matplotlib.use(\"Agg\")\n new_backend = matplotlib.get_backend().lower()\n\n if new_backend != current_backend:\n # Matplotlib backend has been changed, let's warn the user\n warnings.warn(f\"Backend changed to {new_backend}...\")\n\n\n_set_mpl_backend()\n\n###############################################################################\nfrom . 
import cm\nfrom .img_plotting import (\n plot_img, plot_anat, plot_epi, plot_roi, plot_stat_map,\n plot_glass_brain, plot_connectome, plot_connectome_strength,\n plot_markers, plot_prob_atlas, plot_carpet, plot_img_comparison, show)\nfrom .find_cuts import find_xyz_cut_coords, find_cut_slices, \\\n find_parcellation_cut_coords, find_probabilistic_atlas_cut_coords\nfrom .matrix_plotting import (plot_matrix, plot_contrast_matrix,\n plot_design_matrix, plot_event)\nfrom .html_surface import view_surf, view_img_on_surf\nfrom .html_stat_map import view_img\nfrom .html_connectome import view_connectome, view_markers\nfrom .surf_plotting import (plot_surf, plot_surf_stat_map, plot_surf_roi,\n plot_img_on_surf, plot_surf_contours)\n\n__all__ = ['cm', 'plot_img', 'plot_anat', 'plot_epi',\n 'plot_roi', 'plot_stat_map', 'plot_glass_brain',\n 'plot_markers', 'plot_connectome', 'plot_prob_atlas',\n 'find_xyz_cut_coords', 'find_cut_slices',\n 'plot_img_comparison',\n 'show', 'plot_matrix',\n 'plot_design_matrix', 'plot_contrast_matrix', 'plot_event',\n 'view_surf', 'view_img_on_surf',\n 'view_img', 'view_connectome', 'view_markers',\n 'find_parcellation_cut_coords',\n 'find_probabilistic_atlas_cut_coords',\n 'plot_surf', 'plot_surf_stat_map', 'plot_surf_roi',\n 'plot_img_on_surf', 'plot_connectome_strength', 'plot_carpet',\n 'plot_surf_contours']\n", "path": "nilearn/plotting/__init__.py"}]} | 1,642 | 337 |
gh_patches_debug_130 | rasdani/github-patches | git_diff | svthalia__concrexit-1750 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Event registration member serializer should be read only
### Describe the bug
https://github.com/svthalia/concrexit/blob/4ab37961f50e398cc52422cdc1df66f6ab8ff2ee/website/events/api/v2/serializers/event_registration.py#L34 This serializer should be read-only
### How to reproduce
https://staging.thalia.nu/api/v2/events/150/registrations/ shows that you can POST to update the member profile, which should not be the case.
### Expected behaviour
Be read only
--- END ISSUE ---
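The gist of the expected change, sketched against the serializer shown below: mark the nested member serializer as read-only so a POST cannot write through it (a sketch, not the final patch):

```python
from rest_framework import serializers

from events.models import EventRegistration
from members.api.v2.serializers.member import MemberSerializer


class EventRegistrationSerializer(serializers.ModelSerializer):
    class Meta:
        model = EventRegistration
        fields = ("pk", "member", "name")

    # read_only=True keeps the nested member profile out of writable input.
    member = MemberSerializer(detailed=False, read_only=True)
```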
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/events/api/v2/serializers/event_registration.py`
Content:
```
1 from rest_framework import serializers
2
3 from events.models import EventRegistration
4 from members.api.v2.serializers.member import MemberSerializer
5
6
7 class EventRegistrationSerializer(serializers.ModelSerializer):
8 """Serializer for event registrations."""
9
10 def __init__(self, *args, **kwargs):
11 # Don't pass the 'fields' arg up to the superclass
12 fields = kwargs.pop("fields", {"pk", "member", "name"})
13
14 # Instantiate the superclass normally
15 super().__init__(*args, **kwargs)
16
17 allowed = set(fields)
18 existing = set(self.fields.keys())
19 for field_name in existing - allowed:
20 self.fields.pop(field_name)
21
22 class Meta:
23 model = EventRegistration
24 fields = (
25 "pk",
26 "present",
27 "queue_position",
28 "date",
29 "payment",
30 "member",
31 "name",
32 )
33
34 member = MemberSerializer(detailed=False)
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/events/api/v2/serializers/event_registration.py b/website/events/api/v2/serializers/event_registration.py
--- a/website/events/api/v2/serializers/event_registration.py
+++ b/website/events/api/v2/serializers/event_registration.py
@@ -31,4 +31,4 @@
"name",
)
- member = MemberSerializer(detailed=False)
+ member = MemberSerializer(detailed=False, read_only=True)
| {"golden_diff": "diff --git a/website/events/api/v2/serializers/event_registration.py b/website/events/api/v2/serializers/event_registration.py\n--- a/website/events/api/v2/serializers/event_registration.py\n+++ b/website/events/api/v2/serializers/event_registration.py\n@@ -31,4 +31,4 @@\n \"name\",\n )\n \n- member = MemberSerializer(detailed=False)\n+ member = MemberSerializer(detailed=False, read_only=True)\n", "issue": "Event registration member serializer should be read only\n### Describe the bug\r\nhttps://github.com/svthalia/concrexit/blob/4ab37961f50e398cc52422cdc1df66f6ab8ff2ee/website/events/api/v2/serializers/event_registration.py#L34 This serializer should be read-only\r\n\r\n### How to reproduce\r\nhttps://staging.thalia.nu/api/v2/events/150/registrations/ shows that you can POST to update the member profile, that should not be the case\r\n\r\n### Expected behaviour\r\nBe read only\r\n\n", "before_files": [{"content": "from rest_framework import serializers\n\nfrom events.models import EventRegistration\nfrom members.api.v2.serializers.member import MemberSerializer\n\n\nclass EventRegistrationSerializer(serializers.ModelSerializer):\n \"\"\"Serializer for event registrations.\"\"\"\n\n def __init__(self, *args, **kwargs):\n # Don't pass the 'fields' arg up to the superclass\n fields = kwargs.pop(\"fields\", {\"pk\", \"member\", \"name\"})\n\n # Instantiate the superclass normally\n super().__init__(*args, **kwargs)\n\n allowed = set(fields)\n existing = set(self.fields.keys())\n for field_name in existing - allowed:\n self.fields.pop(field_name)\n\n class Meta:\n model = EventRegistration\n fields = (\n \"pk\",\n \"present\",\n \"queue_position\",\n \"date\",\n \"payment\",\n \"member\",\n \"name\",\n )\n\n member = MemberSerializer(detailed=False)\n", "path": "website/events/api/v2/serializers/event_registration.py"}], "after_files": [{"content": "from rest_framework import serializers\n\nfrom events.models import EventRegistration\nfrom members.api.v2.serializers.member import MemberSerializer\n\n\nclass EventRegistrationSerializer(serializers.ModelSerializer):\n \"\"\"Serializer for event registrations.\"\"\"\n\n def __init__(self, *args, **kwargs):\n # Don't pass the 'fields' arg up to the superclass\n fields = kwargs.pop(\"fields\", {\"pk\", \"member\", \"name\"})\n\n # Instantiate the superclass normally\n super().__init__(*args, **kwargs)\n\n allowed = set(fields)\n existing = set(self.fields.keys())\n for field_name in existing - allowed:\n self.fields.pop(field_name)\n\n class Meta:\n model = EventRegistration\n fields = (\n \"pk\",\n \"present\",\n \"queue_position\",\n \"date\",\n \"payment\",\n \"member\",\n \"name\",\n )\n\n member = MemberSerializer(detailed=False, read_only=True)\n", "path": "website/events/api/v2/serializers/event_registration.py"}]} | 653 | 103 |
gh_patches_debug_32911 | rasdani/github-patches | git_diff | openstates__openstates-scrapers-2745 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CA not running with new image
It seems like CA isn't running right now with the switch to Alpine; we need to investigate.
--- END ISSUE ---
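One way to narrow this down, assuming the Alpine-based image pulled in a newer SQLAlchemy that is stricter about ambiguous relationship joins, is to force mapper configuration up front; a relationship SQLAlchemy can no longer resolve raises immediately (a diagnostic sketch, assuming the models module imports outside a scrape run):

```python
from sqlalchemy.orm import configure_mappers

from openstates.ca import models  # noqa: F401  (registers all mapped classes)

# Resolve every relationship() now instead of lazily; an ambiguous join
# condition (e.g. overlapping ForeignKey declarations) surfaces here.
configure_mappers()
```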
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openstates/ca/models.py`
Content:
```
1 from sqlalchemy import (Column, Integer, String, ForeignKey,
2 DateTime, Numeric, UnicodeText)
3 from sqlalchemy.sql import and_
4 from sqlalchemy.orm import backref, relation
5 from sqlalchemy.ext.declarative import declarative_base
6
7 from lxml import etree
8
9 Base = declarative_base()
10
11
12 class CABill(Base):
13 __tablename__ = "bill_tbl"
14
15 bill_id = Column(String(20), primary_key=True)
16 session_year = Column(String(8))
17 session_num = Column(String(2))
18 measure_type = Column(String(4))
19 measure_num = Column(Integer)
20 measure_state = Column(String(40))
21 chapter_year = Column(String(4))
22 chapter_type = Column(String(10))
23 chapter_session_num = Column(String(2))
24 chapter_num = Column(String(10))
25 latest_bill_version_id = Column(String(30))
26 active_flg = Column(String(1))
27 trans_uid = Column(String(30))
28 trans_update = Column(DateTime)
29 current_location = Column(String(200))
30 current_secondary_loc = Column(String(60))
31 current_house = Column(String(60))
32 current_status = Column(String(60))
33
34 actions = relation('CABillAction', backref=backref('bill'),
35 order_by="CABillAction.bill_history_id")
36
37 versions = relation('CABillVersion', backref=backref('bill'),
38 order_by='desc(CABillVersion.version_num)')
39
40 votes = relation('CAVoteSummary', backref=backref('bill'),
41 order_by='CAVoteSummary.vote_date_time')
42
43 @property
44 def short_bill_id(self):
45 return "%s%d" % (self.measure_type, self.measure_num)
46
47
48 class CABillVersion(Base):
49 __tablename__ = "bill_version_tbl"
50
51 bill_version_id = Column(String(30), primary_key=True)
52 bill_id = Column(String(19), ForeignKey(CABill.bill_id))
53 version_num = Column(Integer)
54 bill_version_action_date = Column(DateTime)
55 bill_version_action = Column(String(100))
56 request_num = Column(String(10))
57 subject = Column(String(1000))
58 vote_required = Column(String(100))
59 appropriation = Column(String(3))
60 fiscal_committee = Column(String(3))
61 local_program = Column(String(3))
62 substantive_changes = Column(String(3))
63 urgency = Column(String(3))
64 taxlevy = Column(String(3))
65 bill_xml = Column(UnicodeText)
66 active_flg = Column(String(1))
67 trans_uid = Column(String(30))
68 trans_update = Column(DateTime)
69
70 @property
71 def xml(self):
72 if '_xml' not in self.__dict__:
73 self._xml = etree.fromstring(self.bill_xml.encode('utf-8'),
74 etree.XMLParser(recover=True))
75 return self._xml
76
77 @property
78 def title(self):
79 text = self.xml.xpath("string(//*[local-name() = 'Title'])") or ''
80 return text.strip()
81
82 @property
83 def short_title(self):
84 text = self.xml.xpath("string(//*[local-name() = 'Subject'])") or ''
85 return text.strip()
86
87
88 class CABillVersionAuthor(Base):
89 __tablename__ = "bill_version_authors_tbl"
90
91 # Note: the primary_keys here are a lie - the actual table has no pk
92 # but SQLAlchemy seems to demand one. Furthermore, I get strange
93 # exceptions when trying to use bill_version_id as part of a
94 # composite primary key.
95
96 bill_version_id = Column(String(30),
97 ForeignKey(CABillVersion.bill_version_id))
98 type = Column(String(15))
99 house = Column(String(100))
100 name = Column(String(100), primary_key=True)
101 contribution = Column(String(100))
102 committee_members = Column(String(2000))
103 active_flg = Column(String(1))
104 trans_uid = Column(String(30))
105 trans_update = Column(DateTime, primary_key=True)
106 primary_author_flg = Column(String(1))
107
108 version = relation(CABillVersion, backref=backref('authors'))
109
110
111 class CABillAction(Base):
112 __tablename__ = "bill_history_tbl"
113
114 bill_id = Column(String(20), ForeignKey(CABill.bill_id))
115 bill_history_id = Column(Numeric, primary_key=True)
116 action_date = Column(DateTime)
117 action = Column(String(2000))
118 trans_uid = Column(String(20))
119 trans_update_dt = Column(DateTime)
120 action_sequence = Column(Integer)
121 action_code = Column(String(5))
122 action_status = Column(String(60))
123 primary_location = Column(String(60))
124 secondary_location = Column(String(60))
125 ternary_location = Column(String(60))
126 end_status = Column(String(60))
127
128 @property
129 def actor(self):
130 # TODO: replace committee codes w/ names
131
132 if not self.primary_location:
133 return None
134
135 actor = self.primary_location
136
137 if self.secondary_location:
138 actor += " (%s" % self.secondary_location
139
140 if self.ternary_location:
141 actor += " %s" % self.ternary_location
142
143 actor += ")"
144
145 return actor
146
147
148 class CALegislator(Base):
149 __tablename__ = 'legislator_tbl'
150
151 district = Column(String(5), primary_key=True)
152 session_year = Column(String(8), primary_key=True)
153 legislator_name = Column(String(30), primary_key=True)
154 house_type = Column(String(1), primary_key=True)
155 author_name = Column(String(200))
156 first_name = Column(String(30))
157 last_name = Column(String(30))
158 middle_initial = Column(String(1))
159 name_suffix = Column(String(12))
160 name_title = Column(String(34))
161 web_name_title = Column(String(34))
162 party = Column(String(4))
163 active_flg = Column(String(1))
164 trans_uid = Column(String(30))
165 trans_update = Column(DateTime)
166
167
168 class CAMotion(Base):
169 __tablename__ = "bill_motion_tbl"
170
171 motion_id = Column(Integer, primary_key=True)
172 motion_text = Column(String(250))
173 trans_uid = Column(String(30))
174 trans_update = Column(DateTime)
175
176
177 class CALocation(Base):
178 __tablename__ = "location_code_tbl"
179
180 session_year = Column(String(8), primary_key=True)
181 location_code = Column(String(6), primary_key=True)
182 location_type = Column(String(1), primary_key=True)
183 consent_calendar_code = Column(String(2), primary_key=True)
184 description = Column(String(60))
185 long_description = Column(String(200))
186 active_flg = Column(String(1))
187 trans_uid = Column(String(30))
188 trans_update = Column(DateTime)
189
190
191 class CAVoteSummary(Base):
192 __tablename__ = "bill_summary_vote_tbl"
193
194 bill_id = Column(String(20), ForeignKey(CABill.bill_id), primary_key=True)
195 location_code = Column(String(6), ForeignKey(CALocation.location_code), primary_key=True)
196 vote_date_time = Column(DateTime, primary_key=True)
197 vote_date_seq = Column(Integer, primary_key=True)
198 motion_id = Column(Integer, ForeignKey(CAMotion.motion_id), primary_key=True)
199 ayes = Column(Integer)
200 noes = Column(Integer)
201 abstain = Column(Integer)
202 vote_result = Column(String(6))
203 trans_uid = Column(String(30))
204 trans_update = Column(DateTime, primary_key=True)
205
206 motion = relation(CAMotion)
207 location = relation(CALocation)
208
209 @property
210 def threshold(self):
211 # This may not always be true...
212 if self.location_code != "AFLOOR" and self.location_code != "SFLOOR":
213 return '1/2'
214
215 # Get the associated bill version (probably?)
216 version = next(filter(lambda v: v.bill_version_action_date <= self.vote_date_time,
217 self.bill.versions))
218
219 if version.vote_required == 'Majority':
220 return '1/2'
221 else:
222 return '2/3'
223
224
225 class CAVoteDetail(Base):
226 __tablename__ = "bill_detail_vote_tbl"
227
228 bill_id = Column(String(20), ForeignKey(CABill.bill_id),
229 ForeignKey(CAVoteSummary.bill_id), primary_key=True)
230 location_code = Column(String(6), ForeignKey(CAVoteSummary.location_code),
231 primary_key=True)
232 legislator_name = Column(String(50), primary_key=True)
233 vote_date_time = Column(DateTime, ForeignKey(CAVoteSummary.vote_date_time),
234 primary_key=True)
235 vote_date_seq = Column(Integer, ForeignKey(CAVoteSummary.vote_date_seq),
236 primary_key=True)
237 vote_code = Column(String(5), primary_key=True)
238 motion_id = Column(Integer, ForeignKey(CAVoteSummary.motion_id),
239 primary_key=True)
240 trans_uid = Column(String(30), primary_key=True)
241 trans_update = Column(DateTime, primary_key=True)
242
243 bill = relation(CABill, backref=backref('detail_votes'))
244 summary = relation(
245 CAVoteSummary,
246 primaryjoin=and_(CAVoteSummary.bill_id == bill_id,
247 CAVoteSummary.location_code == location_code,
248 CAVoteSummary.vote_date_time == vote_date_time,
249 CAVoteSummary.vote_date_seq == vote_date_seq,
250 CAVoteSummary.motion_id == motion_id),
251 backref=backref('votes'))
252
253
254 class CACommitteeHearing(Base):
255 __tablename__ = "committee_hearing_tbl"
256
257 bill_id = Column(String(20), ForeignKey(CABill.bill_id),
258 ForeignKey(CAVoteSummary.bill_id), primary_key=True)
259 committee_type = Column(String(2), primary_key=True)
260 committee_nr = Column(Integer, primary_key=True)
261 hearing_date = Column(DateTime, primary_key=True)
262 location_code = Column(String(6), primary_key=True)
263 trans_uid = Column(String(30), primary_key=True)
264 trans_update_date = Column(DateTime, primary_key=True)
265
266 bill = relation(CABill, backref=backref('committee_hearings'))
267
```
--- END FILES ---
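The most suspicious spot in the file above is `CAVoteDetail.bill_id`, which carries two competing `ForeignKey` declarations; SQLAlchemy will typically refuse to guess a join condition in that situation. A sketch of the usual remedy, an explicit `primaryjoin` (the `foreign()` marker inside the string is resolved by SQLAlchemy itself), shown as a post-hoc assignment purely for brevity:

```python
from sqlalchemy.orm import backref, relation

# Sketch only: spell out the join instead of letting SQLAlchemy infer it
# from the overlapping ForeignKey declarations on CAVoteDetail.bill_id.
CAVoteDetail.bill = relation(
    CABill,
    primaryjoin="CABill.bill_id == foreign(CAVoteDetail.bill_id)",
    backref=backref("detail_votes"),
)
```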
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openstates/ca/models.py b/openstates/ca/models.py
--- a/openstates/ca/models.py
+++ b/openstates/ca/models.py
@@ -1,7 +1,7 @@
from sqlalchemy import (Column, Integer, String, ForeignKey,
DateTime, Numeric, UnicodeText)
from sqlalchemy.sql import and_
-from sqlalchemy.orm import backref, relation
+from sqlalchemy.orm import backref, relation, foreign
from sqlalchemy.ext.declarative import declarative_base
from lxml import etree
@@ -225,8 +225,7 @@
class CAVoteDetail(Base):
__tablename__ = "bill_detail_vote_tbl"
- bill_id = Column(String(20), ForeignKey(CABill.bill_id),
- ForeignKey(CAVoteSummary.bill_id), primary_key=True)
+ bill_id = Column(String(20), ForeignKey(CABill.bill_id), primary_key=True)
location_code = Column(String(6), ForeignKey(CAVoteSummary.location_code),
primary_key=True)
legislator_name = Column(String(50), primary_key=True)
@@ -240,10 +239,12 @@
trans_uid = Column(String(30), primary_key=True)
trans_update = Column(DateTime, primary_key=True)
- bill = relation(CABill, backref=backref('detail_votes'))
+ bill = relation(CABill,
+ primaryjoin="CABill.bill_id == foreign(CAVoteDetail.bill_id)",
+ backref=backref('detail_votes'))
summary = relation(
CAVoteSummary,
- primaryjoin=and_(CAVoteSummary.bill_id == bill_id,
+ primaryjoin=and_(CAVoteSummary.bill_id == foreign(bill_id),
CAVoteSummary.location_code == location_code,
CAVoteSummary.vote_date_time == vote_date_time,
CAVoteSummary.vote_date_seq == vote_date_seq,
| {"golden_diff": "diff --git a/openstates/ca/models.py b/openstates/ca/models.py\n--- a/openstates/ca/models.py\n+++ b/openstates/ca/models.py\n@@ -1,7 +1,7 @@\n from sqlalchemy import (Column, Integer, String, ForeignKey,\n DateTime, Numeric, UnicodeText)\n from sqlalchemy.sql import and_\n-from sqlalchemy.orm import backref, relation\n+from sqlalchemy.orm import backref, relation, foreign\n from sqlalchemy.ext.declarative import declarative_base\n \n from lxml import etree\n@@ -225,8 +225,7 @@\n class CAVoteDetail(Base):\n __tablename__ = \"bill_detail_vote_tbl\"\n \n- bill_id = Column(String(20), ForeignKey(CABill.bill_id),\n- ForeignKey(CAVoteSummary.bill_id), primary_key=True)\n+ bill_id = Column(String(20), ForeignKey(CABill.bill_id), primary_key=True)\n location_code = Column(String(6), ForeignKey(CAVoteSummary.location_code),\n primary_key=True)\n legislator_name = Column(String(50), primary_key=True)\n@@ -240,10 +239,12 @@\n trans_uid = Column(String(30), primary_key=True)\n trans_update = Column(DateTime, primary_key=True)\n \n- bill = relation(CABill, backref=backref('detail_votes'))\n+ bill = relation(CABill,\n+ primaryjoin=\"CABill.bill_id == foreign(CAVoteDetail.bill_id)\",\n+ backref=backref('detail_votes'))\n summary = relation(\n CAVoteSummary,\n- primaryjoin=and_(CAVoteSummary.bill_id == bill_id,\n+ primaryjoin=and_(CAVoteSummary.bill_id == foreign(bill_id),\n CAVoteSummary.location_code == location_code,\n CAVoteSummary.vote_date_time == vote_date_time,\n CAVoteSummary.vote_date_seq == vote_date_seq,\n", "issue": "CA not running with new image\nit seems like CA isn't running right now w/ the switch to Alpine, need to investigate\n", "before_files": [{"content": "from sqlalchemy import (Column, Integer, String, ForeignKey,\n DateTime, Numeric, UnicodeText)\nfrom sqlalchemy.sql import and_\nfrom sqlalchemy.orm import backref, relation\nfrom sqlalchemy.ext.declarative import declarative_base\n\nfrom lxml import etree\n\nBase = declarative_base()\n\n\nclass CABill(Base):\n __tablename__ = \"bill_tbl\"\n\n bill_id = Column(String(20), primary_key=True)\n session_year = Column(String(8))\n session_num = Column(String(2))\n measure_type = Column(String(4))\n measure_num = Column(Integer)\n measure_state = Column(String(40))\n chapter_year = Column(String(4))\n chapter_type = Column(String(10))\n chapter_session_num = Column(String(2))\n chapter_num = Column(String(10))\n latest_bill_version_id = Column(String(30))\n active_flg = Column(String(1))\n trans_uid = Column(String(30))\n trans_update = Column(DateTime)\n current_location = Column(String(200))\n current_secondary_loc = Column(String(60))\n current_house = Column(String(60))\n current_status = Column(String(60))\n\n actions = relation('CABillAction', backref=backref('bill'),\n order_by=\"CABillAction.bill_history_id\")\n\n versions = relation('CABillVersion', backref=backref('bill'),\n order_by='desc(CABillVersion.version_num)')\n\n votes = relation('CAVoteSummary', backref=backref('bill'),\n order_by='CAVoteSummary.vote_date_time')\n\n @property\n def short_bill_id(self):\n return \"%s%d\" % (self.measure_type, self.measure_num)\n\n\nclass CABillVersion(Base):\n __tablename__ = \"bill_version_tbl\"\n\n bill_version_id = Column(String(30), primary_key=True)\n bill_id = Column(String(19), ForeignKey(CABill.bill_id))\n version_num = Column(Integer)\n bill_version_action_date = Column(DateTime)\n bill_version_action = Column(String(100))\n request_num = Column(String(10))\n subject = Column(String(1000))\n vote_required = 
Column(String(100))\n appropriation = Column(String(3))\n fiscal_committee = Column(String(3))\n local_program = Column(String(3))\n substantive_changes = Column(String(3))\n urgency = Column(String(3))\n taxlevy = Column(String(3))\n bill_xml = Column(UnicodeText)\n active_flg = Column(String(1))\n trans_uid = Column(String(30))\n trans_update = Column(DateTime)\n\n @property\n def xml(self):\n if '_xml' not in self.__dict__:\n self._xml = etree.fromstring(self.bill_xml.encode('utf-8'),\n etree.XMLParser(recover=True))\n return self._xml\n\n @property\n def title(self):\n text = self.xml.xpath(\"string(//*[local-name() = 'Title'])\") or ''\n return text.strip()\n\n @property\n def short_title(self):\n text = self.xml.xpath(\"string(//*[local-name() = 'Subject'])\") or ''\n return text.strip()\n\n\nclass CABillVersionAuthor(Base):\n __tablename__ = \"bill_version_authors_tbl\"\n\n # Note: the primary_keys here are a lie - the actual table has no pk\n # but SQLAlchemy seems to demand one. Furthermore, I get strange\n # exceptions when trying to use bill_version_id as part of a\n # composite primary key.\n\n bill_version_id = Column(String(30),\n ForeignKey(CABillVersion.bill_version_id))\n type = Column(String(15))\n house = Column(String(100))\n name = Column(String(100), primary_key=True)\n contribution = Column(String(100))\n committee_members = Column(String(2000))\n active_flg = Column(String(1))\n trans_uid = Column(String(30))\n trans_update = Column(DateTime, primary_key=True)\n primary_author_flg = Column(String(1))\n\n version = relation(CABillVersion, backref=backref('authors'))\n\n\nclass CABillAction(Base):\n __tablename__ = \"bill_history_tbl\"\n\n bill_id = Column(String(20), ForeignKey(CABill.bill_id))\n bill_history_id = Column(Numeric, primary_key=True)\n action_date = Column(DateTime)\n action = Column(String(2000))\n trans_uid = Column(String(20))\n trans_update_dt = Column(DateTime)\n action_sequence = Column(Integer)\n action_code = Column(String(5))\n action_status = Column(String(60))\n primary_location = Column(String(60))\n secondary_location = Column(String(60))\n ternary_location = Column(String(60))\n end_status = Column(String(60))\n\n @property\n def actor(self):\n # TODO: replace committee codes w/ names\n\n if not self.primary_location:\n return None\n\n actor = self.primary_location\n\n if self.secondary_location:\n actor += \" (%s\" % self.secondary_location\n\n if self.ternary_location:\n actor += \" %s\" % self.ternary_location\n\n actor += \")\"\n\n return actor\n\n\nclass CALegislator(Base):\n __tablename__ = 'legislator_tbl'\n\n district = Column(String(5), primary_key=True)\n session_year = Column(String(8), primary_key=True)\n legislator_name = Column(String(30), primary_key=True)\n house_type = Column(String(1), primary_key=True)\n author_name = Column(String(200))\n first_name = Column(String(30))\n last_name = Column(String(30))\n middle_initial = Column(String(1))\n name_suffix = Column(String(12))\n name_title = Column(String(34))\n web_name_title = Column(String(34))\n party = Column(String(4))\n active_flg = Column(String(1))\n trans_uid = Column(String(30))\n trans_update = Column(DateTime)\n\n\nclass CAMotion(Base):\n __tablename__ = \"bill_motion_tbl\"\n\n motion_id = Column(Integer, primary_key=True)\n motion_text = Column(String(250))\n trans_uid = Column(String(30))\n trans_update = Column(DateTime)\n\n\nclass CALocation(Base):\n __tablename__ = \"location_code_tbl\"\n\n session_year = Column(String(8), primary_key=True)\n location_code 
= Column(String(6), primary_key=True)\n location_type = Column(String(1), primary_key=True)\n consent_calendar_code = Column(String(2), primary_key=True)\n description = Column(String(60))\n long_description = Column(String(200))\n active_flg = Column(String(1))\n trans_uid = Column(String(30))\n trans_update = Column(DateTime)\n\n\nclass CAVoteSummary(Base):\n __tablename__ = \"bill_summary_vote_tbl\"\n\n bill_id = Column(String(20), ForeignKey(CABill.bill_id), primary_key=True)\n location_code = Column(String(6), ForeignKey(CALocation.location_code), primary_key=True)\n vote_date_time = Column(DateTime, primary_key=True)\n vote_date_seq = Column(Integer, primary_key=True)\n motion_id = Column(Integer, ForeignKey(CAMotion.motion_id), primary_key=True)\n ayes = Column(Integer)\n noes = Column(Integer)\n abstain = Column(Integer)\n vote_result = Column(String(6))\n trans_uid = Column(String(30))\n trans_update = Column(DateTime, primary_key=True)\n\n motion = relation(CAMotion)\n location = relation(CALocation)\n\n @property\n def threshold(self):\n # This may not always be true...\n if self.location_code != \"AFLOOR\" and self.location_code != \"SFLOOR\":\n return '1/2'\n\n # Get the associated bill version (probably?)\n version = next(filter(lambda v: v.bill_version_action_date <= self.vote_date_time,\n self.bill.versions))\n\n if version.vote_required == 'Majority':\n return '1/2'\n else:\n return '2/3'\n\n\nclass CAVoteDetail(Base):\n __tablename__ = \"bill_detail_vote_tbl\"\n\n bill_id = Column(String(20), ForeignKey(CABill.bill_id),\n ForeignKey(CAVoteSummary.bill_id), primary_key=True)\n location_code = Column(String(6), ForeignKey(CAVoteSummary.location_code),\n primary_key=True)\n legislator_name = Column(String(50), primary_key=True)\n vote_date_time = Column(DateTime, ForeignKey(CAVoteSummary.vote_date_time),\n primary_key=True)\n vote_date_seq = Column(Integer, ForeignKey(CAVoteSummary.vote_date_seq),\n primary_key=True)\n vote_code = Column(String(5), primary_key=True)\n motion_id = Column(Integer, ForeignKey(CAVoteSummary.motion_id),\n primary_key=True)\n trans_uid = Column(String(30), primary_key=True)\n trans_update = Column(DateTime, primary_key=True)\n\n bill = relation(CABill, backref=backref('detail_votes'))\n summary = relation(\n CAVoteSummary,\n primaryjoin=and_(CAVoteSummary.bill_id == bill_id,\n CAVoteSummary.location_code == location_code,\n CAVoteSummary.vote_date_time == vote_date_time,\n CAVoteSummary.vote_date_seq == vote_date_seq,\n CAVoteSummary.motion_id == motion_id),\n backref=backref('votes'))\n\n\nclass CACommitteeHearing(Base):\n __tablename__ = \"committee_hearing_tbl\"\n\n bill_id = Column(String(20), ForeignKey(CABill.bill_id),\n ForeignKey(CAVoteSummary.bill_id), primary_key=True)\n committee_type = Column(String(2), primary_key=True)\n committee_nr = Column(Integer, primary_key=True)\n hearing_date = Column(DateTime, primary_key=True)\n location_code = Column(String(6), primary_key=True)\n trans_uid = Column(String(30), primary_key=True)\n trans_update_date = Column(DateTime, primary_key=True)\n\n bill = relation(CABill, backref=backref('committee_hearings'))\n", "path": "openstates/ca/models.py"}], "after_files": [{"content": "from sqlalchemy import (Column, Integer, String, ForeignKey,\n DateTime, Numeric, UnicodeText)\nfrom sqlalchemy.sql import and_\nfrom sqlalchemy.orm import backref, relation, foreign\nfrom sqlalchemy.ext.declarative import declarative_base\n\nfrom lxml import etree\n\nBase = declarative_base()\n\n\nclass CABill(Base):\n 
__tablename__ = \"bill_tbl\"\n\n bill_id = Column(String(20), primary_key=True)\n session_year = Column(String(8))\n session_num = Column(String(2))\n measure_type = Column(String(4))\n measure_num = Column(Integer)\n measure_state = Column(String(40))\n chapter_year = Column(String(4))\n chapter_type = Column(String(10))\n chapter_session_num = Column(String(2))\n chapter_num = Column(String(10))\n latest_bill_version_id = Column(String(30))\n active_flg = Column(String(1))\n trans_uid = Column(String(30))\n trans_update = Column(DateTime)\n current_location = Column(String(200))\n current_secondary_loc = Column(String(60))\n current_house = Column(String(60))\n current_status = Column(String(60))\n\n actions = relation('CABillAction', backref=backref('bill'),\n order_by=\"CABillAction.bill_history_id\")\n\n versions = relation('CABillVersion', backref=backref('bill'),\n order_by='desc(CABillVersion.version_num)')\n\n votes = relation('CAVoteSummary', backref=backref('bill'),\n order_by='CAVoteSummary.vote_date_time')\n\n @property\n def short_bill_id(self):\n return \"%s%d\" % (self.measure_type, self.measure_num)\n\n\nclass CABillVersion(Base):\n __tablename__ = \"bill_version_tbl\"\n\n bill_version_id = Column(String(30), primary_key=True)\n bill_id = Column(String(19), ForeignKey(CABill.bill_id))\n version_num = Column(Integer)\n bill_version_action_date = Column(DateTime)\n bill_version_action = Column(String(100))\n request_num = Column(String(10))\n subject = Column(String(1000))\n vote_required = Column(String(100))\n appropriation = Column(String(3))\n fiscal_committee = Column(String(3))\n local_program = Column(String(3))\n substantive_changes = Column(String(3))\n urgency = Column(String(3))\n taxlevy = Column(String(3))\n bill_xml = Column(UnicodeText)\n active_flg = Column(String(1))\n trans_uid = Column(String(30))\n trans_update = Column(DateTime)\n\n @property\n def xml(self):\n if '_xml' not in self.__dict__:\n self._xml = etree.fromstring(self.bill_xml.encode('utf-8'),\n etree.XMLParser(recover=True))\n return self._xml\n\n @property\n def title(self):\n text = self.xml.xpath(\"string(//*[local-name() = 'Title'])\") or ''\n return text.strip()\n\n @property\n def short_title(self):\n text = self.xml.xpath(\"string(//*[local-name() = 'Subject'])\") or ''\n return text.strip()\n\n\nclass CABillVersionAuthor(Base):\n __tablename__ = \"bill_version_authors_tbl\"\n\n # Note: the primary_keys here are a lie - the actual table has no pk\n # but SQLAlchemy seems to demand one. 
Furthermore, I get strange\n # exceptions when trying to use bill_version_id as part of a\n # composite primary key.\n\n bill_version_id = Column(String(30),\n ForeignKey(CABillVersion.bill_version_id))\n type = Column(String(15))\n house = Column(String(100))\n name = Column(String(100), primary_key=True)\n contribution = Column(String(100))\n committee_members = Column(String(2000))\n active_flg = Column(String(1))\n trans_uid = Column(String(30))\n trans_update = Column(DateTime, primary_key=True)\n primary_author_flg = Column(String(1))\n\n version = relation(CABillVersion, backref=backref('authors'))\n\n\nclass CABillAction(Base):\n __tablename__ = \"bill_history_tbl\"\n\n bill_id = Column(String(20), ForeignKey(CABill.bill_id))\n bill_history_id = Column(Numeric, primary_key=True)\n action_date = Column(DateTime)\n action = Column(String(2000))\n trans_uid = Column(String(20))\n trans_update_dt = Column(DateTime)\n action_sequence = Column(Integer)\n action_code = Column(String(5))\n action_status = Column(String(60))\n primary_location = Column(String(60))\n secondary_location = Column(String(60))\n ternary_location = Column(String(60))\n end_status = Column(String(60))\n\n @property\n def actor(self):\n # TODO: replace committee codes w/ names\n\n if not self.primary_location:\n return None\n\n actor = self.primary_location\n\n if self.secondary_location:\n actor += \" (%s\" % self.secondary_location\n\n if self.ternary_location:\n actor += \" %s\" % self.ternary_location\n\n actor += \")\"\n\n return actor\n\n\nclass CALegislator(Base):\n __tablename__ = 'legislator_tbl'\n\n district = Column(String(5), primary_key=True)\n session_year = Column(String(8), primary_key=True)\n legislator_name = Column(String(30), primary_key=True)\n house_type = Column(String(1), primary_key=True)\n author_name = Column(String(200))\n first_name = Column(String(30))\n last_name = Column(String(30))\n middle_initial = Column(String(1))\n name_suffix = Column(String(12))\n name_title = Column(String(34))\n web_name_title = Column(String(34))\n party = Column(String(4))\n active_flg = Column(String(1))\n trans_uid = Column(String(30))\n trans_update = Column(DateTime)\n\n\nclass CAMotion(Base):\n __tablename__ = \"bill_motion_tbl\"\n\n motion_id = Column(Integer, primary_key=True)\n motion_text = Column(String(250))\n trans_uid = Column(String(30))\n trans_update = Column(DateTime)\n\n\nclass CALocation(Base):\n __tablename__ = \"location_code_tbl\"\n\n session_year = Column(String(8), primary_key=True)\n location_code = Column(String(6), primary_key=True)\n location_type = Column(String(1), primary_key=True)\n consent_calendar_code = Column(String(2), primary_key=True)\n description = Column(String(60))\n long_description = Column(String(200))\n active_flg = Column(String(1))\n trans_uid = Column(String(30))\n trans_update = Column(DateTime)\n\n\nclass CAVoteSummary(Base):\n __tablename__ = \"bill_summary_vote_tbl\"\n\n bill_id = Column(String(20), ForeignKey(CABill.bill_id), primary_key=True)\n location_code = Column(String(6), ForeignKey(CALocation.location_code), primary_key=True)\n vote_date_time = Column(DateTime, primary_key=True)\n vote_date_seq = Column(Integer, primary_key=True)\n motion_id = Column(Integer, ForeignKey(CAMotion.motion_id), primary_key=True)\n ayes = Column(Integer)\n noes = Column(Integer)\n abstain = Column(Integer)\n vote_result = Column(String(6))\n trans_uid = Column(String(30))\n trans_update = Column(DateTime, primary_key=True)\n\n motion = relation(CAMotion)\n location 
= relation(CALocation)\n\n @property\n def threshold(self):\n # This may not always be true...\n if self.location_code != \"AFLOOR\" and self.location_code != \"SFLOOR\":\n return '1/2'\n\n # Get the associated bill version (probably?)\n version = next(filter(lambda v: v.bill_version_action_date <= self.vote_date_time,\n self.bill.versions))\n\n if version.vote_required == 'Majority':\n return '1/2'\n else:\n return '2/3'\n\n\nclass CAVoteDetail(Base):\n __tablename__ = \"bill_detail_vote_tbl\"\n\n bill_id = Column(String(20), ForeignKey(CABill.bill_id), primary_key=True)\n location_code = Column(String(6), ForeignKey(CAVoteSummary.location_code),\n primary_key=True)\n legislator_name = Column(String(50), primary_key=True)\n vote_date_time = Column(DateTime, ForeignKey(CAVoteSummary.vote_date_time),\n primary_key=True)\n vote_date_seq = Column(Integer, ForeignKey(CAVoteSummary.vote_date_seq),\n primary_key=True)\n vote_code = Column(String(5), primary_key=True)\n motion_id = Column(Integer, ForeignKey(CAVoteSummary.motion_id),\n primary_key=True)\n trans_uid = Column(String(30), primary_key=True)\n trans_update = Column(DateTime, primary_key=True)\n\n bill = relation(CABill,\n primaryjoin=\"CABill.bill_id == foreign(CAVoteDetail.bill_id)\",\n backref=backref('detail_votes'))\n summary = relation(\n CAVoteSummary,\n primaryjoin=and_(CAVoteSummary.bill_id == foreign(bill_id),\n CAVoteSummary.location_code == location_code,\n CAVoteSummary.vote_date_time == vote_date_time,\n CAVoteSummary.vote_date_seq == vote_date_seq,\n CAVoteSummary.motion_id == motion_id),\n backref=backref('votes'))\n\n\nclass CACommitteeHearing(Base):\n __tablename__ = \"committee_hearing_tbl\"\n\n bill_id = Column(String(20), ForeignKey(CABill.bill_id),\n ForeignKey(CAVoteSummary.bill_id), primary_key=True)\n committee_type = Column(String(2), primary_key=True)\n committee_nr = Column(Integer, primary_key=True)\n hearing_date = Column(DateTime, primary_key=True)\n location_code = Column(String(6), primary_key=True)\n trans_uid = Column(String(30), primary_key=True)\n trans_update_date = Column(DateTime, primary_key=True)\n\n bill = relation(CABill, backref=backref('committee_hearings'))\n", "path": "openstates/ca/models.py"}]} | 3,195 | 408 |
gh_patches_debug_30897 | rasdani/github-patches | git_diff | encode__starlette-186 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Session middleware is highly insecure
The session middleware does not include any timestamp component, which means that if a session cookie is ever leaked, it can be reused at any time in the future.
Can we have a timestamp component added to the session, so that the session middleware can reject such sessions after a reasonable period of time (e.g. 24 hours)?
--- END ISSUE ---
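
Illustrative aside (not part of the original issue): the timestamp check the reporter asks for is exactly what `itsdangerous.TimestampSigner` provides; its `unsign(..., max_age=...)` rejects values older than the given age, and the golden diff below adopts the same primitive. A minimal sketch:

```python
# Minimal sketch of an age-checked session cookie using itsdangerous'
# TimestampSigner (the same primitive the patch below switches to).
import itsdangerous
from itsdangerous.exc import BadTimeSignature, SignatureExpired

signer = itsdangerous.TimestampSigner("secret-key")
cookie = signer.sign(b"session-payload")  # a signed timestamp is embedded in the value

try:
    payload = signer.unsign(cookie, max_age=24 * 60 * 60)  # reject after 24 hours
except (BadTimeSignature, SignatureExpired):
    payload = None  # treat an expired or tampered cookie as no session
```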
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlette/middleware/sessions.py`
Content:
```
1 import functools
2 import json
3 from base64 import b64decode, b64encode
4
5 import itsdangerous
6
7 from starlette.datastructures import MutableHeaders
8 from starlette.requests import Request
9 from starlette.types import ASGIApp, ASGIInstance, Message, Receive, Scope, Send
10
11
12 class SessionMiddleware:
13 def __init__(
14 self, app: ASGIApp, secret_key: str, session_cookie: str = "session"
15 ) -> None:
16 self.app = app
17 self.signer = itsdangerous.Signer(secret_key)
18 self.session_cookie = session_cookie
19
20 def __call__(self, scope: Scope) -> ASGIInstance:
21 if scope["type"] in ("http", "websocket"):
22 request = Request(scope)
23 if self.session_cookie in request.cookies:
24 data = request.cookies[self.session_cookie].encode("utf-8")
25 data = self.signer.unsign(data)
26 scope["session"] = json.loads(b64decode(data))
27 else:
28 scope["session"] = {}
29 return functools.partial(self.asgi, scope=scope)
30 return self.app(scope) # pragma: no cover
31
32 async def asgi(self, receive: Receive, send: Send, scope: Scope) -> None:
33 was_empty_session = not scope["session"]
34 inner = self.app(scope)
35
36 async def sender(message: Message) -> None:
37 if message["type"] == "http.response.start":
38 if scope["session"]:
39 # We have session data to persist.
40 data = b64encode(json.dumps(scope["session"]).encode("utf-8"))
41 data = self.signer.sign(data)
42 headers = MutableHeaders(scope=message)
43 header_value = "%s=%s" % (self.session_cookie, data.decode("utf-8"))
44 headers.append("Set-Cookie", header_value)
45 elif not was_empty_session:
46 # The session has been cleared.
47 headers = MutableHeaders(scope=message)
48 header_value = "%s=%s" % (
49 self.session_cookie,
50 "null; path=/; expires=Thu, 01 Jan 1970 00:00:00 GMT",
51 )
52 headers.append("Set-Cookie", header_value)
53 await send(message)
54
55 await inner(receive, sender)
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlette/middleware/sessions.py b/starlette/middleware/sessions.py
--- a/starlette/middleware/sessions.py
+++ b/starlette/middleware/sessions.py
@@ -3,6 +3,7 @@
from base64 import b64decode, b64encode
import itsdangerous
+from itsdangerous.exc import BadTimeSignature, SignatureExpired
from starlette.datastructures import MutableHeaders
from starlette.requests import Request
@@ -11,19 +12,27 @@
class SessionMiddleware:
def __init__(
- self, app: ASGIApp, secret_key: str, session_cookie: str = "session"
+ self,
+ app: ASGIApp,
+ secret_key: str,
+ session_cookie: str = "session",
+ max_age: int = 14 * 24 * 60 * 60, # 14 days, in seconds
) -> None:
self.app = app
- self.signer = itsdangerous.Signer(secret_key)
+ self.signer = itsdangerous.TimestampSigner(secret_key)
self.session_cookie = session_cookie
+ self.max_age = max_age
def __call__(self, scope: Scope) -> ASGIInstance:
if scope["type"] in ("http", "websocket"):
request = Request(scope)
if self.session_cookie in request.cookies:
data = request.cookies[self.session_cookie].encode("utf-8")
- data = self.signer.unsign(data)
- scope["session"] = json.loads(b64decode(data))
+ try:
+ data = self.signer.unsign(data, max_age=self.max_age)
+ scope["session"] = json.loads(b64decode(data))
+ except (BadTimeSignature, SignatureExpired):
+ scope["session"] = {}
else:
scope["session"] = {}
return functools.partial(self.asgi, scope=scope)
| {"golden_diff": "diff --git a/starlette/middleware/sessions.py b/starlette/middleware/sessions.py\n--- a/starlette/middleware/sessions.py\n+++ b/starlette/middleware/sessions.py\n@@ -3,6 +3,7 @@\n from base64 import b64decode, b64encode\n \n import itsdangerous\n+from itsdangerous.exc import BadTimeSignature, SignatureExpired\n \n from starlette.datastructures import MutableHeaders\n from starlette.requests import Request\n@@ -11,19 +12,27 @@\n \n class SessionMiddleware:\n def __init__(\n- self, app: ASGIApp, secret_key: str, session_cookie: str = \"session\"\n+ self,\n+ app: ASGIApp,\n+ secret_key: str,\n+ session_cookie: str = \"session\",\n+ max_age: int = 14 * 24 * 60 * 60, # 14 days, in seconds\n ) -> None:\n self.app = app\n- self.signer = itsdangerous.Signer(secret_key)\n+ self.signer = itsdangerous.TimestampSigner(secret_key)\n self.session_cookie = session_cookie\n+ self.max_age = max_age\n \n def __call__(self, scope: Scope) -> ASGIInstance:\n if scope[\"type\"] in (\"http\", \"websocket\"):\n request = Request(scope)\n if self.session_cookie in request.cookies:\n data = request.cookies[self.session_cookie].encode(\"utf-8\")\n- data = self.signer.unsign(data)\n- scope[\"session\"] = json.loads(b64decode(data))\n+ try:\n+ data = self.signer.unsign(data, max_age=self.max_age)\n+ scope[\"session\"] = json.loads(b64decode(data))\n+ except (BadTimeSignature, SignatureExpired):\n+ scope[\"session\"] = {}\n else:\n scope[\"session\"] = {}\n return functools.partial(self.asgi, scope=scope)\n", "issue": "Session middleware is highly insecure\nThe session middleware does not have any timestamp component to it, which means that if a session happens to be somehow leaked, it can be reused any time in the future.\r\n\r\nCan we have a timestamp component added to the session, so that the session middleware can reject such sessions after a reasonable period of time (e.g. 
24 hours)?\n", "before_files": [{"content": "import functools\nimport json\nfrom base64 import b64decode, b64encode\n\nimport itsdangerous\n\nfrom starlette.datastructures import MutableHeaders\nfrom starlette.requests import Request\nfrom starlette.types import ASGIApp, ASGIInstance, Message, Receive, Scope, Send\n\n\nclass SessionMiddleware:\n def __init__(\n self, app: ASGIApp, secret_key: str, session_cookie: str = \"session\"\n ) -> None:\n self.app = app\n self.signer = itsdangerous.Signer(secret_key)\n self.session_cookie = session_cookie\n\n def __call__(self, scope: Scope) -> ASGIInstance:\n if scope[\"type\"] in (\"http\", \"websocket\"):\n request = Request(scope)\n if self.session_cookie in request.cookies:\n data = request.cookies[self.session_cookie].encode(\"utf-8\")\n data = self.signer.unsign(data)\n scope[\"session\"] = json.loads(b64decode(data))\n else:\n scope[\"session\"] = {}\n return functools.partial(self.asgi, scope=scope)\n return self.app(scope) # pragma: no cover\n\n async def asgi(self, receive: Receive, send: Send, scope: Scope) -> None:\n was_empty_session = not scope[\"session\"]\n inner = self.app(scope)\n\n async def sender(message: Message) -> None:\n if message[\"type\"] == \"http.response.start\":\n if scope[\"session\"]:\n # We have session data to persist.\n data = b64encode(json.dumps(scope[\"session\"]).encode(\"utf-8\"))\n data = self.signer.sign(data)\n headers = MutableHeaders(scope=message)\n header_value = \"%s=%s\" % (self.session_cookie, data.decode(\"utf-8\"))\n headers.append(\"Set-Cookie\", header_value)\n elif not was_empty_session:\n # The session has been cleared.\n headers = MutableHeaders(scope=message)\n header_value = \"%s=%s\" % (\n self.session_cookie,\n \"null; path=/; expires=Thu, 01 Jan 1970 00:00:00 GMT\",\n )\n headers.append(\"Set-Cookie\", header_value)\n await send(message)\n\n await inner(receive, sender)\n", "path": "starlette/middleware/sessions.py"}], "after_files": [{"content": "import functools\nimport json\nfrom base64 import b64decode, b64encode\n\nimport itsdangerous\nfrom itsdangerous.exc import BadTimeSignature, SignatureExpired\n\nfrom starlette.datastructures import MutableHeaders\nfrom starlette.requests import Request\nfrom starlette.types import ASGIApp, ASGIInstance, Message, Receive, Scope, Send\n\n\nclass SessionMiddleware:\n def __init__(\n self,\n app: ASGIApp,\n secret_key: str,\n session_cookie: str = \"session\",\n max_age: int = 14 * 24 * 60 * 60, # 14 days, in seconds\n ) -> None:\n self.app = app\n self.signer = itsdangerous.TimestampSigner(secret_key)\n self.session_cookie = session_cookie\n self.max_age = max_age\n\n def __call__(self, scope: Scope) -> ASGIInstance:\n if scope[\"type\"] in (\"http\", \"websocket\"):\n request = Request(scope)\n if self.session_cookie in request.cookies:\n data = request.cookies[self.session_cookie].encode(\"utf-8\")\n try:\n data = self.signer.unsign(data, max_age=self.max_age)\n scope[\"session\"] = json.loads(b64decode(data))\n except (BadTimeSignature, SignatureExpired):\n scope[\"session\"] = {}\n else:\n scope[\"session\"] = {}\n return functools.partial(self.asgi, scope=scope)\n return self.app(scope) # pragma: no cover\n\n async def asgi(self, receive: Receive, send: Send, scope: Scope) -> None:\n was_empty_session = not scope[\"session\"]\n inner = self.app(scope)\n\n async def sender(message: Message) -> None:\n if message[\"type\"] == \"http.response.start\":\n if scope[\"session\"]:\n # We have session data to persist.\n data = 
b64encode(json.dumps(scope[\"session\"]).encode(\"utf-8\"))\n data = self.signer.sign(data)\n headers = MutableHeaders(scope=message)\n header_value = \"%s=%s\" % (self.session_cookie, data.decode(\"utf-8\"))\n headers.append(\"Set-Cookie\", header_value)\n elif not was_empty_session:\n # The session has been cleared.\n headers = MutableHeaders(scope=message)\n header_value = \"%s=%s\" % (\n self.session_cookie,\n \"null; path=/; expires=Thu, 01 Jan 1970 00:00:00 GMT\",\n )\n headers.append(\"Set-Cookie\", header_value)\n await send(message)\n\n await inner(receive, sender)\n", "path": "starlette/middleware/sessions.py"}]} | 933 | 428 |
gh_patches_debug_21051 | rasdani/github-patches | git_diff | geopandas__geopandas-512 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BUG/COMPAT: missing values handling / fillna no longer working with shapely 1.6
See https://github.com/Toblerity/Shapely/issues/510 for a more detailed exploration of the problem.
Temporarily pinned our CI to an older shapely version (but kept one build with the latest shapely in the allowed-failures section) in https://github.com/geopandas/geopandas/pull/508.
--- END ISSUE ---
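
Illustrative aside (not part of the original issue): the breakage involves the empty `Point()` used as the default fill value for missing geometries; the golden diff below swaps it for an empty `BaseGeometry()`. A hedged sketch of the equivalent explicit call:

```python
# Sketch only: fill missing geometries with an empty BaseGeometry rather than Point(),
# mirroring the default the patch below introduces.
from shapely.geometry import Point
from shapely.geometry.base import BaseGeometry
from geopandas import GeoSeries

s = GeoSeries([Point(0, 0), None])
filled = s.fillna(value=BaseGeometry())
```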
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geopandas/geoseries.py`
Content:
```
1 from functools import partial
2 import json
3 from warnings import warn
4
5 import numpy as np
6 from pandas import Series, DataFrame
7
8 import pyproj
9 from shapely.geometry import shape, Polygon, Point
10 from shapely.geometry.collection import GeometryCollection
11 from shapely.geometry.base import BaseGeometry
12 from shapely.ops import transform
13
14 from geopandas.plotting import plot_series
15 from geopandas.base import GeoPandasBase, _series_unary_op, _CoordinateIndexer
16
17
18 def _is_empty(x):
19 try:
20 return x.is_empty
21 except:
22 return False
23
24
25 class GeoSeries(GeoPandasBase, Series):
26 """A Series object designed to store shapely geometry objects."""
27 _metadata = ['name', 'crs']
28
29 def __new__(cls, *args, **kwargs):
30 kwargs.pop('crs', None)
31 arr = Series.__new__(cls)
32 if type(arr) is GeoSeries:
33 return arr
34 else:
35 return arr.view(GeoSeries)
36
37 def __init__(self, *args, **kwargs):
38 # fix problem for scalar geometries passed
39 if len(args) == 1 and isinstance(args[0], BaseGeometry):
40 args = ([args[0]],)
41
42 crs = kwargs.pop('crs', None)
43
44 super(GeoSeries, self).__init__(*args, **kwargs)
45 self.crs = crs
46 self._invalidate_sindex()
47
48 def append(self, *args, **kwargs):
49 return self._wrapped_pandas_method('append', *args, **kwargs)
50
51 @property
52 def geometry(self):
53 return self
54
55 @property
56 def x(self):
57 """Return the x location of point geometries in a GeoSeries"""
58 if (self.geom_type == "Point").all():
59 return _series_unary_op(self, 'x', null_value=np.nan)
60 else:
61 message = "x attribute access only provided for Point geometries"
62 raise ValueError(message)
63
64 @property
65 def y(self):
66 """Return the y location of point geometries in a GeoSeries"""
67 if (self.geom_type == "Point").all():
68 return _series_unary_op(self, 'y', null_value=np.nan)
69 else:
70 message = "y attribute access only provided for Point geometries"
71 raise ValueError(message)
72
73 @classmethod
74 def from_file(cls, filename, **kwargs):
75 """
76 Alternate constructor to create a GeoSeries from a file
77
78 Parameters
79 ----------
80
81 filename : str
82 File path or file handle to read from. Depending on which kwargs
83 are included, the content of filename may vary, see:
84 http://toblerity.github.io/fiona/README.html#usage
85 for usage details.
86 kwargs : key-word arguments
87 These arguments are passed to fiona.open, and can be used to
88 access multi-layer data, data stored within archives (zip files),
89 etc.
90
91 """
92 import fiona
93 geoms = []
94 with fiona.open(filename, **kwargs) as f:
95 crs = f.crs
96 for rec in f:
97 geoms.append(shape(rec['geometry']))
98 g = GeoSeries(geoms)
99 g.crs = crs
100 return g
101
102 @property
103 def __geo_interface__(self):
104 """Returns a GeoSeries as a python feature collection
105 """
106 from geopandas import GeoDataFrame
107 return GeoDataFrame({'geometry': self}).__geo_interface__
108
109 def to_file(self, filename, driver="ESRI Shapefile", **kwargs):
110 from geopandas import GeoDataFrame
111 data = GeoDataFrame({"geometry": self,
112 "id":self.index.values},
113 index=self.index)
114 data.crs = self.crs
115 data.to_file(filename, driver, **kwargs)
116
117 #
118 # Implement pandas methods
119 #
120
121 @property
122 def _constructor(self):
123 return GeoSeries
124
125 def _wrapped_pandas_method(self, mtd, *args, **kwargs):
126 """Wrap a generic pandas method to ensure it returns a GeoSeries"""
127 val = getattr(super(GeoSeries, self), mtd)(*args, **kwargs)
128 if type(val) == Series:
129 val.__class__ = GeoSeries
130 val.crs = self.crs
131 val._invalidate_sindex()
132 return val
133
134 def __getitem__(self, key):
135 return self._wrapped_pandas_method('__getitem__', key)
136
137 def sort_index(self, *args, **kwargs):
138 return self._wrapped_pandas_method('sort_index', *args, **kwargs)
139
140 def take(self, *args, **kwargs):
141 return self._wrapped_pandas_method('take', *args, **kwargs)
142
143 def select(self, *args, **kwargs):
144 return self._wrapped_pandas_method('select', *args, **kwargs)
145
146 @property
147 def _can_hold_na(self):
148 return False
149
150 def __finalize__(self, other, method=None, **kwargs):
151 """ propagate metadata from other to self """
152 # NOTE: backported from pandas master (upcoming v0.13)
153 for name in self._metadata:
154 object.__setattr__(self, name, getattr(other, name, None))
155 return self
156
157 def copy(self, order='C'):
158 """
159 Make a copy of this GeoSeries object
160
161 Parameters
162 ----------
163 deep : boolean, default True
164 Make a deep copy, i.e. also copy data
165
166 Returns
167 -------
168 copy : GeoSeries
169 """
170 # FIXME: this will likely be unnecessary in pandas >= 0.13
171 return GeoSeries(self.values.copy(order), index=self.index,
172 name=self.name).__finalize__(self)
173
174 def isnull(self):
175 """Null values in a GeoSeries are represented by empty geometric objects"""
176 non_geo_null = super(GeoSeries, self).isnull()
177 val = self.apply(_is_empty)
178 return np.logical_or(non_geo_null, val)
179
180 def fillna(self, value=None, method=None, inplace=False,
181 **kwargs):
182 """Fill NA/NaN values with a geometry (empty polygon by default).
183
184 "method" is currently not implemented for pandas <= 0.12.
185 """
186 if value is None:
187 value = Point()
188 return super(GeoSeries, self).fillna(value=value, method=method,
189 inplace=inplace, **kwargs)
190
191 def align(self, other, join='outer', level=None, copy=True,
192 fill_value=None, **kwargs):
193 if fill_value is None:
194 fill_value = Point()
195 left, right = super(GeoSeries, self).align(other, join=join,
196 level=level, copy=copy,
197 fill_value=fill_value,
198 **kwargs)
199 if isinstance(other, GeoSeries):
200 return GeoSeries(left), GeoSeries(right)
201 else: # It is probably a Series, let's keep it that way
202 return GeoSeries(left), right
203
204
205 def __contains__(self, other):
206 """Allow tests of the form "geom in s"
207
208 Tests whether a GeoSeries contains a geometry.
209
210 Note: This is not the same as the geometric method "contains".
211 """
212 if isinstance(other, BaseGeometry):
213 return np.any(self.geom_equals(other))
214 else:
215 return False
216
217 def plot(self, *args, **kwargs):
218 return plot_series(self, *args, **kwargs)
219
220 plot.__doc__ = plot_series.__doc__
221
222 #
223 # Additional methods
224 #
225
226 def to_crs(self, crs=None, epsg=None):
227 """Transform geometries to a new coordinate reference system
228
229 This method will transform all points in all objects. It has
230 no notion or projecting entire geometries. All segments
231 joining points are assumed to be lines in the current
232 projection, not geodesics. Objects crossing the dateline (or
233 other projection boundary) will have undesirable behavior.
234
235 `to_crs` passes the `crs` argument to the `Proj` function from the
236 `pyproj` library (with the option `preserve_units=True`). It can
237 therefore accept proj4 projections in any format
238 supported by `Proj`, including dictionaries, or proj4 strings.
239
240 """
241 from fiona.crs import from_epsg
242 if self.crs is None:
243 raise ValueError('Cannot transform naive geometries. '
244 'Please set a crs on the object first.')
245 if crs is None:
246 try:
247 crs = from_epsg(epsg)
248 except TypeError:
249 raise TypeError('Must set either crs or epsg for output.')
250 proj_in = pyproj.Proj(self.crs, preserve_units=True)
251 proj_out = pyproj.Proj(crs, preserve_units=True)
252 project = partial(pyproj.transform, proj_in, proj_out)
253 result = self.apply(lambda geom: transform(project, geom))
254 result.__class__ = GeoSeries
255 result.crs = crs
256 result._invalidate_sindex()
257 return result
258
259 def to_json(self, **kwargs):
260 """
261 Returns a GeoJSON string representation of the GeoSeries.
262
263 Parameters
264 ----------
265 *kwargs* that will be passed to json.dumps().
266 """
267 return json.dumps(self.__geo_interface__, **kwargs)
268
269 #
270 # Implement standard operators for GeoSeries
271 #
272
273 def __xor__(self, other):
274 """Implement ^ operator as for builtin set type"""
275 return self.symmetric_difference(other)
276
277 def __or__(self, other):
278 """Implement | operator as for builtin set type"""
279 return self.union(other)
280
281 def __and__(self, other):
282 """Implement & operator as for builtin set type"""
283 return self.intersection(other)
284
285 def __sub__(self, other):
286 """Implement - operator as for builtin set type"""
287 return self.difference(other)
288
289 GeoSeries._create_indexer('cx', _CoordinateIndexer)
290
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/geopandas/geoseries.py b/geopandas/geoseries.py
--- a/geopandas/geoseries.py
+++ b/geopandas/geoseries.py
@@ -184,14 +184,14 @@
"method" is currently not implemented for pandas <= 0.12.
"""
if value is None:
- value = Point()
+ value = BaseGeometry()
return super(GeoSeries, self).fillna(value=value, method=method,
inplace=inplace, **kwargs)
def align(self, other, join='outer', level=None, copy=True,
fill_value=None, **kwargs):
if fill_value is None:
- fill_value = Point()
+ fill_value = BaseGeometry()
left, right = super(GeoSeries, self).align(other, join=join,
level=level, copy=copy,
fill_value=fill_value,
| {"golden_diff": "diff --git a/geopandas/geoseries.py b/geopandas/geoseries.py\n--- a/geopandas/geoseries.py\n+++ b/geopandas/geoseries.py\n@@ -184,14 +184,14 @@\n \"method\" is currently not implemented for pandas <= 0.12.\n \"\"\"\n if value is None:\n- value = Point()\n+ value = BaseGeometry()\n return super(GeoSeries, self).fillna(value=value, method=method,\n inplace=inplace, **kwargs)\n \n def align(self, other, join='outer', level=None, copy=True,\n fill_value=None, **kwargs):\n if fill_value is None:\n- fill_value = Point()\n+ fill_value = BaseGeometry()\n left, right = super(GeoSeries, self).align(other, join=join,\n level=level, copy=copy,\n fill_value=fill_value,\n", "issue": "BUG/COMPAT: missing values handling / fillna no longer working with shapely 1.6\nSee https://github.com/Toblerity/Shapely/issues/510 for more detailed exploration of the problem\r\n\r\nTemporarily pinned our CI to older shapely version (but kept one with the latest shapely in the allowed failures section) in https://github.com/geopandas/geopandas/pull/508\n", "before_files": [{"content": "from functools import partial\nimport json\nfrom warnings import warn\n\nimport numpy as np\nfrom pandas import Series, DataFrame\n\nimport pyproj\nfrom shapely.geometry import shape, Polygon, Point\nfrom shapely.geometry.collection import GeometryCollection\nfrom shapely.geometry.base import BaseGeometry\nfrom shapely.ops import transform\n\nfrom geopandas.plotting import plot_series\nfrom geopandas.base import GeoPandasBase, _series_unary_op, _CoordinateIndexer\n\n\ndef _is_empty(x):\n try:\n return x.is_empty\n except:\n return False\n\n\nclass GeoSeries(GeoPandasBase, Series):\n \"\"\"A Series object designed to store shapely geometry objects.\"\"\"\n _metadata = ['name', 'crs']\n\n def __new__(cls, *args, **kwargs):\n kwargs.pop('crs', None)\n arr = Series.__new__(cls)\n if type(arr) is GeoSeries:\n return arr\n else:\n return arr.view(GeoSeries)\n\n def __init__(self, *args, **kwargs):\n # fix problem for scalar geometries passed\n if len(args) == 1 and isinstance(args[0], BaseGeometry):\n args = ([args[0]],)\n\n crs = kwargs.pop('crs', None)\n\n super(GeoSeries, self).__init__(*args, **kwargs)\n self.crs = crs\n self._invalidate_sindex()\n\n def append(self, *args, **kwargs):\n return self._wrapped_pandas_method('append', *args, **kwargs)\n\n @property\n def geometry(self):\n return self\n\n @property\n def x(self):\n \"\"\"Return the x location of point geometries in a GeoSeries\"\"\"\n if (self.geom_type == \"Point\").all():\n return _series_unary_op(self, 'x', null_value=np.nan)\n else:\n message = \"x attribute access only provided for Point geometries\"\n raise ValueError(message)\n\n @property\n def y(self):\n \"\"\"Return the y location of point geometries in a GeoSeries\"\"\"\n if (self.geom_type == \"Point\").all():\n return _series_unary_op(self, 'y', null_value=np.nan)\n else:\n message = \"y attribute access only provided for Point geometries\"\n raise ValueError(message)\n\n @classmethod\n def from_file(cls, filename, **kwargs):\n \"\"\"\n Alternate constructor to create a GeoSeries from a file\n\n Parameters\n ----------\n\n filename : str\n File path or file handle to read from. 
Depending on which kwargs\n are included, the content of filename may vary, see:\n http://toblerity.github.io/fiona/README.html#usage\n for usage details.\n kwargs : key-word arguments\n These arguments are passed to fiona.open, and can be used to\n access multi-layer data, data stored within archives (zip files),\n etc.\n\n \"\"\"\n import fiona\n geoms = []\n with fiona.open(filename, **kwargs) as f:\n crs = f.crs\n for rec in f:\n geoms.append(shape(rec['geometry']))\n g = GeoSeries(geoms)\n g.crs = crs\n return g\n\n @property\n def __geo_interface__(self):\n \"\"\"Returns a GeoSeries as a python feature collection\n \"\"\"\n from geopandas import GeoDataFrame\n return GeoDataFrame({'geometry': self}).__geo_interface__\n\n def to_file(self, filename, driver=\"ESRI Shapefile\", **kwargs):\n from geopandas import GeoDataFrame\n data = GeoDataFrame({\"geometry\": self,\n \"id\":self.index.values},\n index=self.index)\n data.crs = self.crs\n data.to_file(filename, driver, **kwargs)\n\n #\n # Implement pandas methods\n #\n\n @property\n def _constructor(self):\n return GeoSeries\n\n def _wrapped_pandas_method(self, mtd, *args, **kwargs):\n \"\"\"Wrap a generic pandas method to ensure it returns a GeoSeries\"\"\"\n val = getattr(super(GeoSeries, self), mtd)(*args, **kwargs)\n if type(val) == Series:\n val.__class__ = GeoSeries\n val.crs = self.crs\n val._invalidate_sindex()\n return val\n\n def __getitem__(self, key):\n return self._wrapped_pandas_method('__getitem__', key)\n\n def sort_index(self, *args, **kwargs):\n return self._wrapped_pandas_method('sort_index', *args, **kwargs)\n\n def take(self, *args, **kwargs):\n return self._wrapped_pandas_method('take', *args, **kwargs)\n\n def select(self, *args, **kwargs):\n return self._wrapped_pandas_method('select', *args, **kwargs)\n\n @property\n def _can_hold_na(self):\n return False\n\n def __finalize__(self, other, method=None, **kwargs):\n \"\"\" propagate metadata from other to self \"\"\"\n # NOTE: backported from pandas master (upcoming v0.13)\n for name in self._metadata:\n object.__setattr__(self, name, getattr(other, name, None))\n return self\n\n def copy(self, order='C'):\n \"\"\"\n Make a copy of this GeoSeries object\n\n Parameters\n ----------\n deep : boolean, default True\n Make a deep copy, i.e. 
also copy data\n\n Returns\n -------\n copy : GeoSeries\n \"\"\"\n # FIXME: this will likely be unnecessary in pandas >= 0.13\n return GeoSeries(self.values.copy(order), index=self.index,\n name=self.name).__finalize__(self)\n\n def isnull(self):\n \"\"\"Null values in a GeoSeries are represented by empty geometric objects\"\"\"\n non_geo_null = super(GeoSeries, self).isnull()\n val = self.apply(_is_empty)\n return np.logical_or(non_geo_null, val)\n\n def fillna(self, value=None, method=None, inplace=False,\n **kwargs):\n \"\"\"Fill NA/NaN values with a geometry (empty polygon by default).\n\n \"method\" is currently not implemented for pandas <= 0.12.\n \"\"\"\n if value is None:\n value = Point()\n return super(GeoSeries, self).fillna(value=value, method=method,\n inplace=inplace, **kwargs)\n\n def align(self, other, join='outer', level=None, copy=True,\n fill_value=None, **kwargs):\n if fill_value is None:\n fill_value = Point()\n left, right = super(GeoSeries, self).align(other, join=join,\n level=level, copy=copy,\n fill_value=fill_value,\n **kwargs)\n if isinstance(other, GeoSeries):\n return GeoSeries(left), GeoSeries(right)\n else: # It is probably a Series, let's keep it that way\n return GeoSeries(left), right\n\n\n def __contains__(self, other):\n \"\"\"Allow tests of the form \"geom in s\"\n\n Tests whether a GeoSeries contains a geometry.\n\n Note: This is not the same as the geometric method \"contains\".\n \"\"\"\n if isinstance(other, BaseGeometry):\n return np.any(self.geom_equals(other))\n else:\n return False\n\n def plot(self, *args, **kwargs):\n return plot_series(self, *args, **kwargs)\n\n plot.__doc__ = plot_series.__doc__\n\n #\n # Additional methods\n #\n\n def to_crs(self, crs=None, epsg=None):\n \"\"\"Transform geometries to a new coordinate reference system\n\n This method will transform all points in all objects. It has\n no notion or projecting entire geometries. All segments\n joining points are assumed to be lines in the current\n projection, not geodesics. Objects crossing the dateline (or\n other projection boundary) will have undesirable behavior.\n\n `to_crs` passes the `crs` argument to the `Proj` function from the\n `pyproj` library (with the option `preserve_units=True`). It can\n therefore accept proj4 projections in any format\n supported by `Proj`, including dictionaries, or proj4 strings.\n\n \"\"\"\n from fiona.crs import from_epsg\n if self.crs is None:\n raise ValueError('Cannot transform naive geometries. 
'\n 'Please set a crs on the object first.')\n if crs is None:\n try:\n crs = from_epsg(epsg)\n except TypeError:\n raise TypeError('Must set either crs or epsg for output.')\n proj_in = pyproj.Proj(self.crs, preserve_units=True)\n proj_out = pyproj.Proj(crs, preserve_units=True)\n project = partial(pyproj.transform, proj_in, proj_out)\n result = self.apply(lambda geom: transform(project, geom))\n result.__class__ = GeoSeries\n result.crs = crs\n result._invalidate_sindex()\n return result\n\n def to_json(self, **kwargs):\n \"\"\"\n Returns a GeoJSON string representation of the GeoSeries.\n\n Parameters\n ----------\n *kwargs* that will be passed to json.dumps().\n \"\"\"\n return json.dumps(self.__geo_interface__, **kwargs)\n\n #\n # Implement standard operators for GeoSeries\n #\n\n def __xor__(self, other):\n \"\"\"Implement ^ operator as for builtin set type\"\"\"\n return self.symmetric_difference(other)\n\n def __or__(self, other):\n \"\"\"Implement | operator as for builtin set type\"\"\"\n return self.union(other)\n\n def __and__(self, other):\n \"\"\"Implement & operator as for builtin set type\"\"\"\n return self.intersection(other)\n\n def __sub__(self, other):\n \"\"\"Implement - operator as for builtin set type\"\"\"\n return self.difference(other)\n\nGeoSeries._create_indexer('cx', _CoordinateIndexer)\n", "path": "geopandas/geoseries.py"}], "after_files": [{"content": "from functools import partial\nimport json\nfrom warnings import warn\n\nimport numpy as np\nfrom pandas import Series, DataFrame\n\nimport pyproj\nfrom shapely.geometry import shape, Polygon, Point\nfrom shapely.geometry.collection import GeometryCollection\nfrom shapely.geometry.base import BaseGeometry\nfrom shapely.ops import transform\n\nfrom geopandas.plotting import plot_series\nfrom geopandas.base import GeoPandasBase, _series_unary_op, _CoordinateIndexer\n\n\ndef _is_empty(x):\n try:\n return x.is_empty\n except:\n return False\n\n\nclass GeoSeries(GeoPandasBase, Series):\n \"\"\"A Series object designed to store shapely geometry objects.\"\"\"\n _metadata = ['name', 'crs']\n\n def __new__(cls, *args, **kwargs):\n kwargs.pop('crs', None)\n arr = Series.__new__(cls)\n if type(arr) is GeoSeries:\n return arr\n else:\n return arr.view(GeoSeries)\n\n def __init__(self, *args, **kwargs):\n # fix problem for scalar geometries passed\n if len(args) == 1 and isinstance(args[0], BaseGeometry):\n args = ([args[0]],)\n\n crs = kwargs.pop('crs', None)\n\n super(GeoSeries, self).__init__(*args, **kwargs)\n self.crs = crs\n self._invalidate_sindex()\n\n def append(self, *args, **kwargs):\n return self._wrapped_pandas_method('append', *args, **kwargs)\n\n @property\n def geometry(self):\n return self\n\n @property\n def x(self):\n \"\"\"Return the x location of point geometries in a GeoSeries\"\"\"\n if (self.geom_type == \"Point\").all():\n return _series_unary_op(self, 'x', null_value=np.nan)\n else:\n message = \"x attribute access only provided for Point geometries\"\n raise ValueError(message)\n\n @property\n def y(self):\n \"\"\"Return the y location of point geometries in a GeoSeries\"\"\"\n if (self.geom_type == \"Point\").all():\n return _series_unary_op(self, 'y', null_value=np.nan)\n else:\n message = \"y attribute access only provided for Point geometries\"\n raise ValueError(message)\n\n @classmethod\n def from_file(cls, filename, **kwargs):\n \"\"\"\n Alternate constructor to create a GeoSeries from a file\n\n Parameters\n ----------\n\n filename : str\n File path or file handle to read from. 
Depending on which kwargs\n are included, the content of filename may vary, see:\n http://toblerity.github.io/fiona/README.html#usage\n for usage details.\n kwargs : key-word arguments\n These arguments are passed to fiona.open, and can be used to\n access multi-layer data, data stored within archives (zip files),\n etc.\n\n \"\"\"\n import fiona\n geoms = []\n with fiona.open(filename, **kwargs) as f:\n crs = f.crs\n for rec in f:\n geoms.append(shape(rec['geometry']))\n g = GeoSeries(geoms)\n g.crs = crs\n return g\n\n @property\n def __geo_interface__(self):\n \"\"\"Returns a GeoSeries as a python feature collection\n \"\"\"\n from geopandas import GeoDataFrame\n return GeoDataFrame({'geometry': self}).__geo_interface__\n\n def to_file(self, filename, driver=\"ESRI Shapefile\", **kwargs):\n from geopandas import GeoDataFrame\n data = GeoDataFrame({\"geometry\": self,\n \"id\":self.index.values},\n index=self.index)\n data.crs = self.crs\n data.to_file(filename, driver, **kwargs)\n\n #\n # Implement pandas methods\n #\n\n @property\n def _constructor(self):\n return GeoSeries\n\n def _wrapped_pandas_method(self, mtd, *args, **kwargs):\n \"\"\"Wrap a generic pandas method to ensure it returns a GeoSeries\"\"\"\n val = getattr(super(GeoSeries, self), mtd)(*args, **kwargs)\n if type(val) == Series:\n val.__class__ = GeoSeries\n val.crs = self.crs\n val._invalidate_sindex()\n return val\n\n def __getitem__(self, key):\n return self._wrapped_pandas_method('__getitem__', key)\n\n def sort_index(self, *args, **kwargs):\n return self._wrapped_pandas_method('sort_index', *args, **kwargs)\n\n def take(self, *args, **kwargs):\n return self._wrapped_pandas_method('take', *args, **kwargs)\n\n def select(self, *args, **kwargs):\n return self._wrapped_pandas_method('select', *args, **kwargs)\n\n @property\n def _can_hold_na(self):\n return False\n\n def __finalize__(self, other, method=None, **kwargs):\n \"\"\" propagate metadata from other to self \"\"\"\n # NOTE: backported from pandas master (upcoming v0.13)\n for name in self._metadata:\n object.__setattr__(self, name, getattr(other, name, None))\n return self\n\n def copy(self, order='C'):\n \"\"\"\n Make a copy of this GeoSeries object\n\n Parameters\n ----------\n deep : boolean, default True\n Make a deep copy, i.e. 
also copy data\n\n Returns\n -------\n copy : GeoSeries\n \"\"\"\n # FIXME: this will likely be unnecessary in pandas >= 0.13\n return GeoSeries(self.values.copy(order), index=self.index,\n name=self.name).__finalize__(self)\n\n def isnull(self):\n \"\"\"Null values in a GeoSeries are represented by empty geometric objects\"\"\"\n non_geo_null = super(GeoSeries, self).isnull()\n val = self.apply(_is_empty)\n return np.logical_or(non_geo_null, val)\n\n def fillna(self, value=None, method=None, inplace=False,\n **kwargs):\n \"\"\"Fill NA/NaN values with a geometry (empty polygon by default).\n\n \"method\" is currently not implemented for pandas <= 0.12.\n \"\"\"\n if value is None:\n value = BaseGeometry()\n return super(GeoSeries, self).fillna(value=value, method=method,\n inplace=inplace, **kwargs)\n\n def align(self, other, join='outer', level=None, copy=True,\n fill_value=None, **kwargs):\n if fill_value is None:\n fill_value = BaseGeometry()\n left, right = super(GeoSeries, self).align(other, join=join,\n level=level, copy=copy,\n fill_value=fill_value,\n **kwargs)\n if isinstance(other, GeoSeries):\n return GeoSeries(left), GeoSeries(right)\n else: # It is probably a Series, let's keep it that way\n return GeoSeries(left), right\n\n\n def __contains__(self, other):\n \"\"\"Allow tests of the form \"geom in s\"\n\n Tests whether a GeoSeries contains a geometry.\n\n Note: This is not the same as the geometric method \"contains\".\n \"\"\"\n if isinstance(other, BaseGeometry):\n return np.any(self.geom_equals(other))\n else:\n return False\n\n def plot(self, *args, **kwargs):\n return plot_series(self, *args, **kwargs)\n\n plot.__doc__ = plot_series.__doc__\n\n #\n # Additional methods\n #\n\n def to_crs(self, crs=None, epsg=None):\n \"\"\"Transform geometries to a new coordinate reference system\n\n This method will transform all points in all objects. It has\n no notion or projecting entire geometries. All segments\n joining points are assumed to be lines in the current\n projection, not geodesics. Objects crossing the dateline (or\n other projection boundary) will have undesirable behavior.\n\n `to_crs` passes the `crs` argument to the `Proj` function from the\n `pyproj` library (with the option `preserve_units=True`). It can\n therefore accept proj4 projections in any format\n supported by `Proj`, including dictionaries, or proj4 strings.\n\n \"\"\"\n from fiona.crs import from_epsg\n if self.crs is None:\n raise ValueError('Cannot transform naive geometries. 
'\n 'Please set a crs on the object first.')\n if crs is None:\n try:\n crs = from_epsg(epsg)\n except TypeError:\n raise TypeError('Must set either crs or epsg for output.')\n proj_in = pyproj.Proj(self.crs, preserve_units=True)\n proj_out = pyproj.Proj(crs, preserve_units=True)\n project = partial(pyproj.transform, proj_in, proj_out)\n result = self.apply(lambda geom: transform(project, geom))\n result.__class__ = GeoSeries\n result.crs = crs\n result._invalidate_sindex()\n return result\n\n def to_json(self, **kwargs):\n \"\"\"\n Returns a GeoJSON string representation of the GeoSeries.\n\n Parameters\n ----------\n *kwargs* that will be passed to json.dumps().\n \"\"\"\n return json.dumps(self.__geo_interface__, **kwargs)\n\n #\n # Implement standard operators for GeoSeries\n #\n\n def __xor__(self, other):\n \"\"\"Implement ^ operator as for builtin set type\"\"\"\n return self.symmetric_difference(other)\n\n def __or__(self, other):\n \"\"\"Implement | operator as for builtin set type\"\"\"\n return self.union(other)\n\n def __and__(self, other):\n \"\"\"Implement & operator as for builtin set type\"\"\"\n return self.intersection(other)\n\n def __sub__(self, other):\n \"\"\"Implement - operator as for builtin set type\"\"\"\n return self.difference(other)\n\nGeoSeries._create_indexer('cx', _CoordinateIndexer)\n", "path": "geopandas/geoseries.py"}]} | 3,288 | 202 |
gh_patches_debug_13477 | rasdani/github-patches | git_diff | dj-stripe__dj-stripe-547 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error: Cannot resolve keyword 'customer' into field.
All,
Cannot get past the step
`python manage.py djstripe_init_customers`
in the installation.
Running Python 3.6.0, Django 1.11, and the latest version of dj-stripe (1.0.0).
What combination of Django version and dj-stripe version are folks successfully using at the moment? Thanks!
Here is the traceback:
```
Traceback (most recent call last):
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/core/management/__init__.py", line 363, in execute_from_command_line
utility.execute()
File "/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/core/management/__init__.py", line 355, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/core/management/base.py", line 283, in run_from_argv
self.execute(*args, **cmd_options)
File "/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/core/management/base.py", line 330, in execute
output = self.handle(*args, **options)
File "/Users/jdln/temp/dj-stripe/djstripe/management/commands/djstripe_init_customers.py", line 25, in handle
for subscriber in get_subscriber_model().objects.filter(customer__isnull=True):
File "/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/db/models/query.py", line 781, in filter
return self._filter_or_exclude(False, *args, **kwargs)
File "/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/db/models/query.py", line 799, in _filter_or_exclude
clone.query.add_q(Q(*args, **kwargs))
File "/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/db/models/sql/query.py", line 1260, in add_q
clause, _ = self._add_q(q_object, self.used_aliases)
File "/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/db/models/sql/query.py", line 1286, in _add_q
allow_joins=allow_joins, split_subq=split_subq,
File "/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/db/models/sql/query.py", line 1164, in build_filter
lookups, parts, reffed_expression = self.solve_lookup_type(arg)
File "/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/db/models/sql/query.py", line 1044, in solve_lookup_type
_, field, _, lookup_parts = self.names_to_path(lookup_splitted, self.get_meta())
File "/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/db/models/sql/query.py", line 1362, in names_to_path
"Choices are: %s" % (name, ", ".join(available)))
django.core.exceptions.FieldError: Cannot resolve keyword 'customer' into field. Choices are: date_joined, djstripe_customers, email, first_name, groups, id, is_active, is_staff, is_superuser, last_login, last_name, logentry, password, user_permissions, username
```
--- END ISSUE ---
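
Illustrative aside (not part of the original issue): the FieldError's list of choices already names the working reverse relation, `djstripe_customers`, which is the lookup the fix below filters on instead of `customer__isnull`. A minimal sketch of the corrected queryset:

```python
# Sketch of the corrected lookup: subscribers that have no Customer attached yet.
from djstripe.settings import get_subscriber_model

subscribers_without_customer = (
    get_subscriber_model().objects.filter(djstripe_customers=None)
)
```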
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `djstripe/management/commands/djstripe_init_customers.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 .. module:: djstripe.management.commands.djstripe_init_customers.
4
5 :synopsis: dj-stripe - init_customers command.
6
7 .. moduleauthor:: @kavdev, @pydanny
8
9 """
10 from __future__ import unicode_literals
11
12 from django.core.management.base import BaseCommand
13
14 from ...models import Customer
15 from ...settings import get_subscriber_model
16
17
18 class Command(BaseCommand):
19 """Create customer objects for existing subscribers that don't have one."""
20
21 help = "Create customer objects for existing subscribers that don't have one"
22
23 def handle(self, *args, **options):
24 """Create Customer objects for Subscribers without Customer objects associated."""
25 for subscriber in get_subscriber_model().objects.filter(customer__isnull=True):
26 # use get_or_create in case of race conditions on large subscriber bases
27 Customer.get_or_create(subscriber=subscriber)
28 print("Created subscriber for {0}".format(subscriber.email))
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/djstripe/management/commands/djstripe_init_customers.py b/djstripe/management/commands/djstripe_init_customers.py
--- a/djstripe/management/commands/djstripe_init_customers.py
+++ b/djstripe/management/commands/djstripe_init_customers.py
@@ -22,7 +22,7 @@
def handle(self, *args, **options):
"""Create Customer objects for Subscribers without Customer objects associated."""
- for subscriber in get_subscriber_model().objects.filter(customer__isnull=True):
+ for subscriber in get_subscriber_model().objects.filter(djstripe_customers=None):
# use get_or_create in case of race conditions on large subscriber bases
Customer.get_or_create(subscriber=subscriber)
print("Created subscriber for {0}".format(subscriber.email))
| {"golden_diff": "diff --git a/djstripe/management/commands/djstripe_init_customers.py b/djstripe/management/commands/djstripe_init_customers.py\n--- a/djstripe/management/commands/djstripe_init_customers.py\n+++ b/djstripe/management/commands/djstripe_init_customers.py\n@@ -22,7 +22,7 @@\n \n def handle(self, *args, **options):\n \"\"\"Create Customer objects for Subscribers without Customer objects associated.\"\"\"\n- for subscriber in get_subscriber_model().objects.filter(customer__isnull=True):\n+ for subscriber in get_subscriber_model().objects.filter(djstripe_customers=None):\n # use get_or_create in case of race conditions on large subscriber bases\n Customer.get_or_create(subscriber=subscriber)\n print(\"Created subscriber for {0}\".format(subscriber.email))\n", "issue": "Error: Cannot resolve keyword 'customer' into field. \nAll,\r\n\r\nCannot get past the step\r\n\r\n`python manage.py djstripe_init_customers`\r\n\r\nin the installation. \r\n\r\nRunning Python 3.6.0, Django 1.11, and the latest version of dj-stripe (1.0.0).\r\n\r\nWhat combination of Django version and dj-stripe version are folks successfully using at the moment? Thanks! \r\n\r\nHere is the traceback:\r\n```\r\nTraceback (most recent call last):\r\n File \"manage.py\", line 22, in <module>\r\n execute_from_command_line(sys.argv)\r\n File \"/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/core/management/__init__.py\", line 363, in execute_from_command_line\r\n utility.execute()\r\n File \"/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/core/management/__init__.py\", line 355, in execute\r\n self.fetch_command(subcommand).run_from_argv(self.argv)\r\n File \"/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/core/management/base.py\", line 283, in run_from_argv\r\n self.execute(*args, **cmd_options)\r\n File \"/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/core/management/base.py\", line 330, in execute\r\n output = self.handle(*args, **options)\r\n File \"/Users/jdln/temp/dj-stripe/djstripe/management/commands/djstripe_init_customers.py\", line 25, in handle\r\n for subscriber in get_subscriber_model().objects.filter(customer__isnull=True):\r\n File \"/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/db/models/manager.py\", line 85, in manager_method\r\n return getattr(self.get_queryset(), name)(*args, **kwargs)\r\n File \"/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/db/models/query.py\", line 781, in filter\r\n return self._filter_or_exclude(False, *args, **kwargs)\r\n File \"/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/db/models/query.py\", line 799, in _filter_or_exclude\r\n clone.query.add_q(Q(*args, **kwargs))\r\n File \"/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/db/models/sql/query.py\", line 1260, in add_q\r\n clause, _ = self._add_q(q_object, self.used_aliases)\r\n File \"/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/db/models/sql/query.py\", line 1286, in _add_q\r\n allow_joins=allow_joins, split_subq=split_subq,\r\n File \"/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/db/models/sql/query.py\", line 1164, in build_filter\r\n lookups, parts, reffed_expression = self.solve_lookup_type(arg)\r\n File \"/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/db/models/sql/query.py\", line 1044, in solve_lookup_type\r\n _, field, _, lookup_parts = 
self.names_to_path(lookup_splitted, self.get_meta())\r\n File \"/Users/jdln/.virtualenvs/djstripe/lib/python3.6/site-packages/django/db/models/sql/query.py\", line 1362, in names_to_path\r\n \"Choices are: %s\" % (name, \", \".join(available)))\r\ndjango.core.exceptions.FieldError: Cannot resolve keyword 'customer' into field. Choices are: date_joined, djstripe_customers, email, first_name, groups, id, is_active, is_staff, is_superuser, last_login, last_name, logentry, password, user_permissions, username\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n.. module:: djstripe.management.commands.djstripe_init_customers.\n\n :synopsis: dj-stripe - init_customers command.\n\n.. moduleauthor:: @kavdev, @pydanny\n\n\"\"\"\nfrom __future__ import unicode_literals\n\nfrom django.core.management.base import BaseCommand\n\nfrom ...models import Customer\nfrom ...settings import get_subscriber_model\n\n\nclass Command(BaseCommand):\n \"\"\"Create customer objects for existing subscribers that don't have one.\"\"\"\n\n help = \"Create customer objects for existing subscribers that don't have one\"\n\n def handle(self, *args, **options):\n \"\"\"Create Customer objects for Subscribers without Customer objects associated.\"\"\"\n for subscriber in get_subscriber_model().objects.filter(customer__isnull=True):\n # use get_or_create in case of race conditions on large subscriber bases\n Customer.get_or_create(subscriber=subscriber)\n print(\"Created subscriber for {0}\".format(subscriber.email))\n", "path": "djstripe/management/commands/djstripe_init_customers.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n.. module:: djstripe.management.commands.djstripe_init_customers.\n\n :synopsis: dj-stripe - init_customers command.\n\n.. moduleauthor:: @kavdev, @pydanny\n\n\"\"\"\nfrom __future__ import unicode_literals\n\nfrom django.core.management.base import BaseCommand\n\nfrom ...models import Customer\nfrom ...settings import get_subscriber_model\n\n\nclass Command(BaseCommand):\n \"\"\"Create customer objects for existing subscribers that don't have one.\"\"\"\n\n help = \"Create customer objects for existing subscribers that don't have one\"\n\n def handle(self, *args, **options):\n \"\"\"Create Customer objects for Subscribers without Customer objects associated.\"\"\"\n for subscriber in get_subscriber_model().objects.filter(djstripe_customers=None):\n # use get_or_create in case of race conditions on large subscriber bases\n Customer.get_or_create(subscriber=subscriber)\n print(\"Created subscriber for {0}\".format(subscriber.email))\n", "path": "djstripe/management/commands/djstripe_init_customers.py"}]} | 1,415 | 182 |
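
The dj-stripe record above turns on the reverse relation name: the traceback lists `djstripe_customers`, not `customer`, among the fields available on the subscriber model, so the management command has to filter on that accessor. Below is a minimal sketch of the corrected lookup; it assumes dj-stripe 1.0's `get_subscriber_model()` helper and the `Customer.get_or_create()` classmethod that already appear in the record's files.

```python
from djstripe.models import Customer
from djstripe.settings import get_subscriber_model

# Subscribers with no Customer yet: the reverse accessor from the subscriber
# model to Customer is `djstripe_customers`, so the old
# `customer__isnull=True` lookup cannot resolve and raises a FieldError.
subscribers = get_subscriber_model().objects.filter(djstripe_customers=None)

for subscriber in subscribers:
    # get_or_create tolerates races on large subscriber bases, mirroring the
    # patched djstripe_init_customers management command.
    Customer.get_or_create(subscriber=subscriber)
```
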
gh_patches_debug_12626 | rasdani/github-patches | git_diff | cowrie__cowrie-415 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
^M for carriage return in telnet
Trying to log in via telnet to cowrie results in the following:
$telnet xx.yy.zz.aa
Trying xx.yy.zz.aa
Connected to xx.yy.zz.aa.
Escape character is '^]'.
login: icantpressenter^M^M^M^M^M^M^M^M^M^M^M^]
telnet> q
Connection closed.
It looks like some kind of control-character issue. Telnet on Windows "kind of works", while telnet on OS X and Linux (Debian 8.6) produces the ^M issue shown above.
Oddly, specifying the destination port in the telnet client (e.g. "telnet host 23") removes the issue, probably because control messages aren't sent in that case.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cowrie/telnet/transport.py`
Content:
```
1 # Copyright (C) 2015, 2016 GoSecure Inc.
2 """
3 Telnet Transport and Authentication for the Honeypot
4
5 @author: Olivier Bilodeau <[email protected]>
6 """
7
8 import struct
9 import time
10 import uuid
11 import inspect
12 import random
13
14 from twisted.python import log
15 from twisted.internet import protocol
16 from twisted.conch.telnet import AuthenticatingTelnetProtocol, ECHO, TRAPSIG, \
17 ITelnetProtocol, ProtocolTransportMixin, \
18 SGA, NAWS, MODE, LINEMODE, TelnetTransport, AlreadyNegotiating
19 from twisted.protocols.policies import TimeoutMixin
20
21 from cowrie.core.credentials import UsernamePasswordIP
22
23 class HoneyPotTelnetFactory(protocol.ServerFactory):
24 """
25 This factory creates HoneyPotTelnetAuthProtocol instances
26 They listen directly to the TCP port
27 """
28 tac = None # gets set later
29
30 def __init__(self, cfg):
31 self.cfg = cfg
32
33
34 # TODO logging clarity can be improved: see what SSH does
35 def logDispatch(self, *msg, **args):
36 """
37 Special delivery to the loggers to avoid scope problems
38 """
39 args['sessionno'] = 'T'+str(args['sessionno'])
40 for dblog in self.tac.dbloggers:
41 dblog.logDispatch(*msg, **args)
42 for output in self.tac.output_plugins:
43 output.logDispatch(*msg, **args)
44
45
46 def startFactory(self):
47 """
48 """
49 try:
50 honeyfs = self.portal.realm.cfg.get('honeypot', 'contents_path')
51 issuefile = honeyfs + "/etc/issue.net"
52 self.banner = open(issuefile).read()
53 except IOError:
54 self.banner = ""
55
56 # For use by the uptime command
57 self.starttime = time.time()
58
59 # hook protocol
60 self.protocol = lambda: CowrieTelnetTransport(HoneyPotTelnetAuthProtocol,
61 self.portal)
62 protocol.ServerFactory.startFactory(self)
63 log.msg("Ready to accept Telnet connections")
64
65
66 def stopFactory(self):
67 """
68 Stop output plugins
69 """
70 protocol.ServerFactory.stopFactory(self)
71
72
73 class HoneyPotTelnetAuthProtocol(AuthenticatingTelnetProtocol):
74 """
75 TelnetAuthProtocol that takes care of Authentication. Once authenticated this
76 protocol is replaced with HoneyPotTelnetSession.
77 """
78
79 loginPrompt = 'login: '
80 passwordPrompt = 'Password: '
81 windowSize = [40, 80]
82
83 def connectionMade(self):
84 """
85 """
86 self.transport.negotiationMap[NAWS] = self.telnet_NAWS
87 # Initial option negotation. Want something at least for Mirai
88 for opt in (NAWS,):
89 self.transport.doChain(opt).addErrback(log.err)
90
91 # I need to doubly escape here since my underlying
92 # CowrieTelnetTransport hack would remove it and leave just \n
93 self.transport.write(self.factory.banner.replace('\n', '\r\r\n'))
94 self.transport.write(self.loginPrompt)
95
96
97 def connectionLost(self, reason):
98 """
99 Fires on pre-authentication disconnects
100 """
101 AuthenticatingTelnetProtocol.connectionLost(self, reason)
102
103
104 def telnet_User(self, line):
105 """
106 Overridden to conditionally kill 'WILL ECHO' which confuses clients
107 that don't implement a proper Telnet protocol (most malware)
108 """
109 self.username = line
110 # only send ECHO option if we are chatting with a real Telnet client
111 #if self.transport.options: <-- doesn't work
112 self.transport.willChain(ECHO)
113 # FIXME: this should be configurable or provided via filesystem
114 self.transport.write(self.passwordPrompt)
115 return 'Password'
116
117
118 def telnet_Password(self, line):
119 username, password = self.username, line
120 del self.username
121 def login(ignored):
122 self.src_ip = self.transport.getPeer().host
123 creds = UsernamePasswordIP(username, password, self.src_ip)
124 d = self.portal.login(creds, self.src_ip, ITelnetProtocol)
125 d.addCallback(self._cbLogin)
126 d.addErrback(self._ebLogin)
127
128 # are we dealing with a real Telnet client?
129 if self.transport.options:
130 # stop ECHO
131 # even if ECHO negotiation fails we still want to attempt a login
132 # this allows us to support dumb clients which is common in malware
133 # thus the addBoth: on success and on exception (AlreadyNegotiating)
134 self.transport.wontChain(ECHO).addBoth(login)
135 else:
136 # process login
137 login('')
138
139 return 'Discard'
140
141 def telnet_Command(self, command):
142 self.transport.protocol.dataReceived(command+'\r')
143 return "Command"
144
145 def _cbLogin(self, ial):
146 """
147 Fired on a successful login
148 """
149 interface, protocol, logout = ial
150 protocol.windowSize = self.windowSize
151 self.protocol = protocol
152 self.logout = logout
153 self.state = 'Command'
154
155 # Remove the short timeout of the login prompt. Timeout will be
156 # provided later by the HoneyPotBaseProtocol class.
157 self.transport.setTimeout(None)
158
159 # replace myself with avatar protocol
160 protocol.makeConnection(self.transport)
161 self.transport.protocol = protocol
162
163
164 def _ebLogin(self, failure):
165 # TODO: provide a way to have user configurable strings for wrong password
166 self.transport.wontChain(ECHO)
167 self.transport.write("\nLogin incorrect\n")
168 self.transport.write(self.loginPrompt)
169 self.state = "User"
170
171 # From TelnetBootstrapProtocol in twisted/conch/telnet.py
172 def telnet_NAWS(self, data):
173 if len(data) == 4:
174 width, height = struct.unpack('!HH', b''.join(data))
175 self.windowSize = [height, width]
176 else:
177 log.msg("Wrong number of NAWS bytes")
178
179 def enableLocal(self, opt):
180 if opt == ECHO:
181 return True
182 elif opt == SGA:
183 return True
184 else:
185 return False
186
187
188 def enableRemote(self, opt):
189 if opt == LINEMODE:
190 self.transport.requestNegotiation(LINEMODE, MODE + chr(TRAPSIG))
191 return True
192 elif opt == NAWS:
193 return True
194 elif opt == SGA:
195 return True
196 else:
197 return False
198
199
200
201 class CowrieTelnetTransport(TelnetTransport, TimeoutMixin):
202 """
203 """
204 def connectionMade(self):
205 self.transportId = uuid.uuid4().hex[:8]
206 sessionno = self.transport.sessionno
207 self.startTime = time.time()
208 self.setTimeout(300)
209
210 log.msg(eventid='cowrie.session.connect',
211 format='New connection: %(src_ip)s:%(src_port)s (%(dst_ip)s:%(dst_port)s) [session: T%(sessionno)s]',
212 src_ip=self.transport.getPeer().host, src_port=self.transport.getPeer().port,
213 dst_ip=self.transport.getHost().host, dst_port=self.transport.getHost().port,
214 session=self.transportId, sessionno='T'+str(sessionno))
215 TelnetTransport.connectionMade(self)
216
217
218 def write(self, bytes):
219 """
220 Because of the presence of two ProtocolTransportMixin in the protocol
221 stack once authenticated, I need to override write() and remove a \r
222 otherwise we end up with \r\r\n on the wire.
223
224 It is kind of a hack. I asked for a better solution here:
225 http://stackoverflow.com/questions/35087250/twisted-telnet-server-how-to-avoid-nested-crlf
226 """
227 self.transport.write(bytes.replace('\r\n', '\n'))
228
229
230 def connectionLost(self, reason):
231 """
232 Fires on pre-authentication disconnects
233 """
234 self.setTimeout(None)
235 TelnetTransport.connectionLost(self, reason)
236 duration = time.time() - self.startTime
237 log.msg(eventid='cowrie.session.closed',
238 format='Connection lost after %(duration)d seconds',
239 duration=duration)
240
241 def willChain(self, option):
242 return self._chainNegotiation(None, self.will, option)
243
244 def wontChain(self, option):
245 return self._chainNegotiation(None, self.wont, option)
246
247 def doChain(self, option):
248 return self._chainNegotiation(None, self.do, option)
249
250 def dontChain(self, option):
251 return self._chainNegotiation(None, self.dont, option)
252
253 def _handleNegotiationError(self, f, func, option):
254 if f.type is AlreadyNegotiating:
255 s = self.getOptionState(option)
256 if func in (self.do, self.dont):
257 s.him.onResult.addCallback(self._chainNegotiation, func, option)
258 s.him.onResult.addErrback(self._handleNegotiationError, func, option)
259 if func in (self.will, self.wont):
260 s.us.onResult.addCallback(self._chainNegotiation, func, option)
261 s.us.onResult.addErrback(self._handleNegotiationError, func, option)
262 # We only care about AlreadyNegotiating, everything else can be ignored
263 # Possible other types include OptionRefused, AlreadyDisabled, AlreadyEnabled, ConnectionDone, ConnectionLost
264 elif f.type is AssertionError:
265 log.err('Client tried to illegally refuse to disable an option; ignoring, but undefined behavior may result')
266 # TODO: Is ignoring this violation of the protocol the proper behavior?
267 # Should the connection be terminated instead?
268 # The telnetd package on Ubuntu (netkit-telnet) does all negotiation before sending the login prompt,
269 # but does handle client-initiated negotiation at any time.
270 return None # This Failure has been handled, no need to continue processing errbacks
271
272 def _chainNegotiation(self, res, func, option):
273 return func(option).addErrback(self._handleNegotiationError, func, option)
274
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cowrie/telnet/transport.py b/cowrie/telnet/transport.py
--- a/cowrie/telnet/transport.py
+++ b/cowrie/telnet/transport.py
@@ -180,15 +180,17 @@
if opt == ECHO:
return True
elif opt == SGA:
- return True
+ return False
+ #return True
else:
return False
def enableRemote(self, opt):
if opt == LINEMODE:
- self.transport.requestNegotiation(LINEMODE, MODE + chr(TRAPSIG))
- return True
+ return False
+ #self.transport.requestNegotiation(LINEMODE, MODE + chr(TRAPSIG))
+ #return True
elif opt == NAWS:
return True
elif opt == SGA:
| {"golden_diff": "diff --git a/cowrie/telnet/transport.py b/cowrie/telnet/transport.py\n--- a/cowrie/telnet/transport.py\n+++ b/cowrie/telnet/transport.py\n@@ -180,15 +180,17 @@\n if opt == ECHO:\n return True\n elif opt == SGA:\n- return True\n+ return False\n+ #return True\n else:\n return False\n \n \n def enableRemote(self, opt):\n if opt == LINEMODE:\n- self.transport.requestNegotiation(LINEMODE, MODE + chr(TRAPSIG))\n- return True\n+ return False\n+ #self.transport.requestNegotiation(LINEMODE, MODE + chr(TRAPSIG))\n+ #return True\n elif opt == NAWS:\n return True\n elif opt == SGA:\n", "issue": "^M for carriage return in telnet\nTrying to log in via telnet to cowrie results in the following:\r\n\r\n$telnet xx.yy.zz.aa\r\nTrying xx.yy.zz.aa\r\nConnected to xx.yy.zz.aa.\r\nEscape character is '^]'.\r\nlogin: icantpressenter^M^M^M^M^M^M^M^M^M^M^M^]\r\ntelnet> q\r\nConnection closed.\r\n\r\nIt looks like some kind of control character thing. Telnet on windows \"kind of works\", and telnet on OS X and Linux (Debian 8.6) produce the above ^M issue.\r\n\r\noddly, specifying the destination port in the telnet client (e.g \"telnet host 23\" ) removes this issue. Probably because control messages aren't sent.\n", "before_files": [{"content": "# Copyright (C) 2015, 2016 GoSecure Inc.\n\"\"\"\nTelnet Transport and Authentication for the Honeypot\n\n@author: Olivier Bilodeau <[email protected]>\n\"\"\"\n\nimport struct\nimport time\nimport uuid\nimport inspect\nimport random\n\nfrom twisted.python import log\nfrom twisted.internet import protocol\nfrom twisted.conch.telnet import AuthenticatingTelnetProtocol, ECHO, TRAPSIG, \\\n ITelnetProtocol, ProtocolTransportMixin, \\\n SGA, NAWS, MODE, LINEMODE, TelnetTransport, AlreadyNegotiating\nfrom twisted.protocols.policies import TimeoutMixin\n\nfrom cowrie.core.credentials import UsernamePasswordIP\n\nclass HoneyPotTelnetFactory(protocol.ServerFactory):\n \"\"\"\n This factory creates HoneyPotTelnetAuthProtocol instances\n They listen directly to the TCP port\n \"\"\"\n tac = None # gets set later\n\n def __init__(self, cfg):\n self.cfg = cfg\n\n\n # TODO logging clarity can be improved: see what SSH does\n def logDispatch(self, *msg, **args):\n \"\"\"\n Special delivery to the loggers to avoid scope problems\n \"\"\"\n args['sessionno'] = 'T'+str(args['sessionno'])\n for dblog in self.tac.dbloggers:\n dblog.logDispatch(*msg, **args)\n for output in self.tac.output_plugins:\n output.logDispatch(*msg, **args)\n\n\n def startFactory(self):\n \"\"\"\n \"\"\"\n try:\n honeyfs = self.portal.realm.cfg.get('honeypot', 'contents_path')\n issuefile = honeyfs + \"/etc/issue.net\"\n self.banner = open(issuefile).read()\n except IOError:\n self.banner = \"\"\n\n # For use by the uptime command\n self.starttime = time.time()\n\n # hook protocol\n self.protocol = lambda: CowrieTelnetTransport(HoneyPotTelnetAuthProtocol,\n self.portal)\n protocol.ServerFactory.startFactory(self)\n log.msg(\"Ready to accept Telnet connections\")\n\n\n def stopFactory(self):\n \"\"\"\n Stop output plugins\n \"\"\"\n protocol.ServerFactory.stopFactory(self)\n\n\nclass HoneyPotTelnetAuthProtocol(AuthenticatingTelnetProtocol):\n \"\"\"\n TelnetAuthProtocol that takes care of Authentication. 
Once authenticated this\n protocol is replaced with HoneyPotTelnetSession.\n \"\"\"\n\n loginPrompt = 'login: '\n passwordPrompt = 'Password: '\n windowSize = [40, 80]\n\n def connectionMade(self):\n \"\"\"\n \"\"\"\n self.transport.negotiationMap[NAWS] = self.telnet_NAWS\n # Initial option negotation. Want something at least for Mirai\n for opt in (NAWS,):\n self.transport.doChain(opt).addErrback(log.err)\n\n # I need to doubly escape here since my underlying\n # CowrieTelnetTransport hack would remove it and leave just \\n\n self.transport.write(self.factory.banner.replace('\\n', '\\r\\r\\n'))\n self.transport.write(self.loginPrompt)\n\n\n def connectionLost(self, reason):\n \"\"\"\n Fires on pre-authentication disconnects\n \"\"\"\n AuthenticatingTelnetProtocol.connectionLost(self, reason)\n\n\n def telnet_User(self, line):\n \"\"\"\n Overridden to conditionally kill 'WILL ECHO' which confuses clients\n that don't implement a proper Telnet protocol (most malware)\n \"\"\"\n self.username = line\n # only send ECHO option if we are chatting with a real Telnet client\n #if self.transport.options: <-- doesn't work\n self.transport.willChain(ECHO)\n # FIXME: this should be configurable or provided via filesystem\n self.transport.write(self.passwordPrompt)\n return 'Password'\n\n\n def telnet_Password(self, line):\n username, password = self.username, line\n del self.username\n def login(ignored):\n self.src_ip = self.transport.getPeer().host\n creds = UsernamePasswordIP(username, password, self.src_ip)\n d = self.portal.login(creds, self.src_ip, ITelnetProtocol)\n d.addCallback(self._cbLogin)\n d.addErrback(self._ebLogin)\n\n # are we dealing with a real Telnet client?\n if self.transport.options:\n # stop ECHO\n # even if ECHO negotiation fails we still want to attempt a login\n # this allows us to support dumb clients which is common in malware\n # thus the addBoth: on success and on exception (AlreadyNegotiating)\n self.transport.wontChain(ECHO).addBoth(login)\n else:\n # process login\n login('')\n\n return 'Discard'\n\n def telnet_Command(self, command):\n self.transport.protocol.dataReceived(command+'\\r')\n return \"Command\"\n\n def _cbLogin(self, ial):\n \"\"\"\n Fired on a successful login\n \"\"\"\n interface, protocol, logout = ial\n protocol.windowSize = self.windowSize\n self.protocol = protocol\n self.logout = logout\n self.state = 'Command'\n\n # Remove the short timeout of the login prompt. 
Timeout will be\n # provided later by the HoneyPotBaseProtocol class.\n self.transport.setTimeout(None)\n\n # replace myself with avatar protocol\n protocol.makeConnection(self.transport)\n self.transport.protocol = protocol\n\n\n def _ebLogin(self, failure):\n # TODO: provide a way to have user configurable strings for wrong password\n self.transport.wontChain(ECHO)\n self.transport.write(\"\\nLogin incorrect\\n\")\n self.transport.write(self.loginPrompt)\n self.state = \"User\"\n\n # From TelnetBootstrapProtocol in twisted/conch/telnet.py\n def telnet_NAWS(self, data):\n if len(data) == 4:\n width, height = struct.unpack('!HH', b''.join(data))\n self.windowSize = [height, width]\n else:\n log.msg(\"Wrong number of NAWS bytes\")\n\n def enableLocal(self, opt):\n if opt == ECHO:\n return True\n elif opt == SGA:\n return True\n else:\n return False\n\n\n def enableRemote(self, opt):\n if opt == LINEMODE:\n self.transport.requestNegotiation(LINEMODE, MODE + chr(TRAPSIG))\n return True\n elif opt == NAWS:\n return True\n elif opt == SGA:\n return True\n else:\n return False\n\n\n\nclass CowrieTelnetTransport(TelnetTransport, TimeoutMixin):\n \"\"\"\n \"\"\"\n def connectionMade(self):\n self.transportId = uuid.uuid4().hex[:8]\n sessionno = self.transport.sessionno\n self.startTime = time.time()\n self.setTimeout(300)\n\n log.msg(eventid='cowrie.session.connect',\n format='New connection: %(src_ip)s:%(src_port)s (%(dst_ip)s:%(dst_port)s) [session: T%(sessionno)s]',\n src_ip=self.transport.getPeer().host, src_port=self.transport.getPeer().port,\n dst_ip=self.transport.getHost().host, dst_port=self.transport.getHost().port,\n session=self.transportId, sessionno='T'+str(sessionno))\n TelnetTransport.connectionMade(self)\n\n\n def write(self, bytes):\n \"\"\"\n Because of the presence of two ProtocolTransportMixin in the protocol\n stack once authenticated, I need to override write() and remove a \\r\n otherwise we end up with \\r\\r\\n on the wire.\n\n It is kind of a hack. 
I asked for a better solution here:\n http://stackoverflow.com/questions/35087250/twisted-telnet-server-how-to-avoid-nested-crlf\n \"\"\"\n self.transport.write(bytes.replace('\\r\\n', '\\n'))\n\n\n def connectionLost(self, reason):\n \"\"\"\n Fires on pre-authentication disconnects\n \"\"\"\n self.setTimeout(None)\n TelnetTransport.connectionLost(self, reason)\n duration = time.time() - self.startTime\n log.msg(eventid='cowrie.session.closed',\n format='Connection lost after %(duration)d seconds',\n duration=duration)\n\n def willChain(self, option):\n return self._chainNegotiation(None, self.will, option)\n\n def wontChain(self, option):\n return self._chainNegotiation(None, self.wont, option)\n\n def doChain(self, option):\n return self._chainNegotiation(None, self.do, option)\n\n def dontChain(self, option):\n return self._chainNegotiation(None, self.dont, option)\n\n def _handleNegotiationError(self, f, func, option):\n if f.type is AlreadyNegotiating:\n s = self.getOptionState(option)\n if func in (self.do, self.dont):\n s.him.onResult.addCallback(self._chainNegotiation, func, option)\n s.him.onResult.addErrback(self._handleNegotiationError, func, option)\n if func in (self.will, self.wont):\n s.us.onResult.addCallback(self._chainNegotiation, func, option)\n s.us.onResult.addErrback(self._handleNegotiationError, func, option)\n # We only care about AlreadyNegotiating, everything else can be ignored\n # Possible other types include OptionRefused, AlreadyDisabled, AlreadyEnabled, ConnectionDone, ConnectionLost\n elif f.type is AssertionError:\n log.err('Client tried to illegally refuse to disable an option; ignoring, but undefined behavior may result')\n # TODO: Is ignoring this violation of the protocol the proper behavior?\n # Should the connection be terminated instead?\n # The telnetd package on Ubuntu (netkit-telnet) does all negotiation before sending the login prompt,\n # but does handle client-initiated negotiation at any time.\n return None # This Failure has been handled, no need to continue processing errbacks\n\n def _chainNegotiation(self, res, func, option):\n return func(option).addErrback(self._handleNegotiationError, func, option)\n", "path": "cowrie/telnet/transport.py"}], "after_files": [{"content": "# Copyright (C) 2015, 2016 GoSecure Inc.\n\"\"\"\nTelnet Transport and Authentication for the Honeypot\n\n@author: Olivier Bilodeau <[email protected]>\n\"\"\"\n\nimport struct\nimport time\nimport uuid\nimport inspect\nimport random\n\nfrom twisted.python import log\nfrom twisted.internet import protocol\nfrom twisted.conch.telnet import AuthenticatingTelnetProtocol, ECHO, TRAPSIG, \\\n ITelnetProtocol, ProtocolTransportMixin, \\\n SGA, NAWS, MODE, LINEMODE, TelnetTransport, AlreadyNegotiating\nfrom twisted.protocols.policies import TimeoutMixin\n\nfrom cowrie.core.credentials import UsernamePasswordIP\n\nclass HoneyPotTelnetFactory(protocol.ServerFactory):\n \"\"\"\n This factory creates HoneyPotTelnetAuthProtocol instances\n They listen directly to the TCP port\n \"\"\"\n tac = None # gets set later\n\n def __init__(self, cfg):\n self.cfg = cfg\n\n\n # TODO logging clarity can be improved: see what SSH does\n def logDispatch(self, *msg, **args):\n \"\"\"\n Special delivery to the loggers to avoid scope problems\n \"\"\"\n args['sessionno'] = 'T'+str(args['sessionno'])\n for dblog in self.tac.dbloggers:\n dblog.logDispatch(*msg, **args)\n for output in self.tac.output_plugins:\n output.logDispatch(*msg, **args)\n\n\n def startFactory(self):\n \"\"\"\n \"\"\"\n try:\n 
honeyfs = self.portal.realm.cfg.get('honeypot', 'contents_path')\n issuefile = honeyfs + \"/etc/issue.net\"\n self.banner = open(issuefile).read()\n except IOError:\n self.banner = \"\"\n\n # For use by the uptime command\n self.starttime = time.time()\n\n # hook protocol\n self.protocol = lambda: CowrieTelnetTransport(HoneyPotTelnetAuthProtocol,\n self.portal)\n protocol.ServerFactory.startFactory(self)\n log.msg(\"Ready to accept Telnet connections\")\n\n\n def stopFactory(self):\n \"\"\"\n Stop output plugins\n \"\"\"\n protocol.ServerFactory.stopFactory(self)\n\n\nclass HoneyPotTelnetAuthProtocol(AuthenticatingTelnetProtocol):\n \"\"\"\n TelnetAuthProtocol that takes care of Authentication. Once authenticated this\n protocol is replaced with HoneyPotTelnetSession.\n \"\"\"\n\n loginPrompt = 'login: '\n passwordPrompt = 'Password: '\n windowSize = [40, 80]\n\n def connectionMade(self):\n \"\"\"\n \"\"\"\n self.transport.negotiationMap[NAWS] = self.telnet_NAWS\n # Initial option negotation. Want something at least for Mirai\n for opt in (NAWS,):\n self.transport.doChain(opt).addErrback(log.err)\n\n # I need to doubly escape here since my underlying\n # CowrieTelnetTransport hack would remove it and leave just \\n\n self.transport.write(self.factory.banner.replace('\\n', '\\r\\r\\n'))\n self.transport.write(self.loginPrompt)\n\n\n def connectionLost(self, reason):\n \"\"\"\n Fires on pre-authentication disconnects\n \"\"\"\n AuthenticatingTelnetProtocol.connectionLost(self, reason)\n\n\n def telnet_User(self, line):\n \"\"\"\n Overridden to conditionally kill 'WILL ECHO' which confuses clients\n that don't implement a proper Telnet protocol (most malware)\n \"\"\"\n self.username = line\n # only send ECHO option if we are chatting with a real Telnet client\n #if self.transport.options: <-- doesn't work\n self.transport.willChain(ECHO)\n # FIXME: this should be configurable or provided via filesystem\n self.transport.write(self.passwordPrompt)\n return 'Password'\n\n\n def telnet_Password(self, line):\n username, password = self.username, line\n del self.username\n def login(ignored):\n self.src_ip = self.transport.getPeer().host\n creds = UsernamePasswordIP(username, password, self.src_ip)\n d = self.portal.login(creds, self.src_ip, ITelnetProtocol)\n d.addCallback(self._cbLogin)\n d.addErrback(self._ebLogin)\n\n # are we dealing with a real Telnet client?\n if self.transport.options:\n # stop ECHO\n # even if ECHO negotiation fails we still want to attempt a login\n # this allows us to support dumb clients which is common in malware\n # thus the addBoth: on success and on exception (AlreadyNegotiating)\n self.transport.wontChain(ECHO).addBoth(login)\n else:\n # process login\n login('')\n\n return 'Discard'\n\n def telnet_Command(self, command):\n self.transport.protocol.dataReceived(command+'\\r')\n return \"Command\"\n\n def _cbLogin(self, ial):\n \"\"\"\n Fired on a successful login\n \"\"\"\n interface, protocol, logout = ial\n protocol.windowSize = self.windowSize\n self.protocol = protocol\n self.logout = logout\n self.state = 'Command'\n\n # Remove the short timeout of the login prompt. 
Timeout will be\n # provided later by the HoneyPotBaseProtocol class.\n self.transport.setTimeout(None)\n\n # replace myself with avatar protocol\n protocol.makeConnection(self.transport)\n self.transport.protocol = protocol\n\n\n def _ebLogin(self, failure):\n # TODO: provide a way to have user configurable strings for wrong password\n self.transport.wontChain(ECHO)\n self.transport.write(\"\\nLogin incorrect\\n\")\n self.transport.write(self.loginPrompt)\n self.state = \"User\"\n\n # From TelnetBootstrapProtocol in twisted/conch/telnet.py\n def telnet_NAWS(self, data):\n if len(data) == 4:\n width, height = struct.unpack('!HH', b''.join(data))\n self.windowSize = [height, width]\n else:\n log.msg(\"Wrong number of NAWS bytes\")\n\n def enableLocal(self, opt):\n if opt == ECHO:\n return True\n elif opt == SGA:\n return False\n #return True\n else:\n return False\n\n\n def enableRemote(self, opt):\n if opt == LINEMODE:\n return False\n #self.transport.requestNegotiation(LINEMODE, MODE + chr(TRAPSIG))\n #return True\n elif opt == NAWS:\n return True\n elif opt == SGA:\n return True\n else:\n return False\n\n\n\nclass CowrieTelnetTransport(TelnetTransport, TimeoutMixin):\n \"\"\"\n \"\"\"\n def connectionMade(self):\n self.transportId = uuid.uuid4().hex[:8]\n sessionno = self.transport.sessionno\n self.startTime = time.time()\n self.setTimeout(300)\n\n log.msg(eventid='cowrie.session.connect',\n format='New connection: %(src_ip)s:%(src_port)s (%(dst_ip)s:%(dst_port)s) [session: T%(sessionno)s]',\n src_ip=self.transport.getPeer().host, src_port=self.transport.getPeer().port,\n dst_ip=self.transport.getHost().host, dst_port=self.transport.getHost().port,\n session=self.transportId, sessionno='T'+str(sessionno))\n TelnetTransport.connectionMade(self)\n\n\n def write(self, bytes):\n \"\"\"\n Because of the presence of two ProtocolTransportMixin in the protocol\n stack once authenticated, I need to override write() and remove a \\r\n otherwise we end up with \\r\\r\\n on the wire.\n\n It is kind of a hack. 
I asked for a better solution here:\n http://stackoverflow.com/questions/35087250/twisted-telnet-server-how-to-avoid-nested-crlf\n \"\"\"\n self.transport.write(bytes.replace('\\r\\n', '\\n'))\n\n\n def connectionLost(self, reason):\n \"\"\"\n Fires on pre-authentication disconnects\n \"\"\"\n self.setTimeout(None)\n TelnetTransport.connectionLost(self, reason)\n duration = time.time() - self.startTime\n log.msg(eventid='cowrie.session.closed',\n format='Connection lost after %(duration)d seconds',\n duration=duration)\n\n def willChain(self, option):\n return self._chainNegotiation(None, self.will, option)\n\n def wontChain(self, option):\n return self._chainNegotiation(None, self.wont, option)\n\n def doChain(self, option):\n return self._chainNegotiation(None, self.do, option)\n\n def dontChain(self, option):\n return self._chainNegotiation(None, self.dont, option)\n\n def _handleNegotiationError(self, f, func, option):\n if f.type is AlreadyNegotiating:\n s = self.getOptionState(option)\n if func in (self.do, self.dont):\n s.him.onResult.addCallback(self._chainNegotiation, func, option)\n s.him.onResult.addErrback(self._handleNegotiationError, func, option)\n if func in (self.will, self.wont):\n s.us.onResult.addCallback(self._chainNegotiation, func, option)\n s.us.onResult.addErrback(self._handleNegotiationError, func, option)\n # We only care about AlreadyNegotiating, everything else can be ignored\n # Possible other types include OptionRefused, AlreadyDisabled, AlreadyEnabled, ConnectionDone, ConnectionLost\n elif f.type is AssertionError:\n log.err('Client tried to illegally refuse to disable an option; ignoring, but undefined behavior may result')\n # TODO: Is ignoring this violation of the protocol the proper behavior?\n # Should the connection be terminated instead?\n # The telnetd package on Ubuntu (netkit-telnet) does all negotiation before sending the login prompt,\n # but does handle client-initiated negotiation at any time.\n return None # This Failure has been handled, no need to continue processing errbacks\n\n def _chainNegotiation(self, res, func, option):\n return func(option).addErrback(self._handleNegotiationError, func, option)\n", "path": "cowrie/telnet/transport.py"}]} | 3,346 | 197 |
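
For reference, applying the golden diff above to the `before_files` version of `cowrie/telnet/transport.py` leaves the two negotiation handlers as sketched below (reconstructed from the record, not copied from a released Cowrie version). The server stops enabling SGA on its own side and no longer accepts or negotiates LINEMODE, which appears to keep standard telnet clients in their default line-at-a-time mode and stops the run of `^M` characters reported in the issue.

```python
# HoneyPotTelnetAuthProtocol option handlers after the patch.
# enableLocal is consulted when the client asks the server to enable an
# option on the server's side; enableRemote when the client wants to enable
# an option on its own side.

def enableLocal(self, opt):
    if opt == ECHO:
        return True   # server may take over echoing (used to hide the password)
    elif opt == SGA:
        return False  # patched: no longer suppress go-ahead locally
    else:
        return False

def enableRemote(self, opt):
    if opt == LINEMODE:
        return False  # patched: LINEMODE negotiation is dropped entirely
    elif opt == NAWS:
        return True   # window-size updates are still accepted
    elif opt == SGA:
        return True
    else:
        return False
```
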
gh_patches_debug_48141 | rasdani/github-patches | git_diff | google__flax-270 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`typing._ClassVar` cannot be accessed in the iPython shell – dataclasses package seems to mess up on Python 3.7
### Problem you have encountered:
I just installed flax and tried to import it from the iPython shell. But it raises an `AttributeError`.
```
In [1]: import flax
*snip*
~/.virtualenvs/flax2/lib/python3.7/site-packages/dataclasses.py in _is_classvar(a_type, typing)
548 # This test uses a typing internal class, but it's the best way to
549 # test if this is a ClassVar.
--> 550 return type(a_type) is typing._ClassVar
551
552
AttributeError: module 'typing' has no attribute '_ClassVar'
```
This does not happen in the normal interpreter, where everything goes fine.
### What you expected to happen:
I expected the import to work the same in iPython and the normal python shell.
### Logs, error messages, etc:
Full traceback in this gist: https://gist.github.com/bayerj/96f096c7fb09a7c9b758dabdbca32671
### Steps to reproduce:
On Mac OS X with Python 3.7.6, not anaconda, virtuelenvwrapper installed.
```
❯❯❯ mkvirtualenv flax2
❯❯❯ pip install jaxlib
*snip*
❯❯❯ pip install flax
*snip*
❯❯❯ ipython
*snip*
In [1]: import flax
```
### Workaround
The problem seems to be in the `dataclasses` package from PyPI, not Python's own module. If I uninstall it...
```
❯❯❯ pip uninstall dataclasses
Found existing installation: dataclasses 0.6
Uninstalling dataclasses-0.6:
Would remove:
/Users/bayerj/.virtualenvs/debug2/lib/python3.7/site-packages/dataclasses-0.6.dist-info/*
/Users/bayerj/.virtualenvs/debug2/lib/python3.7/site-packages/dataclasses.py
Proceed (y/n)? y
Successfully uninstalled dataclasses-0.6
❯❯❯ ipython
/usr/local/lib/python3.7/site-packages/IPython/core/interactiveshell.py:931: UserWarning: Attempting to work in a virtualenv. If you encounter problems, please install IPython inside the virtualenv.
warn("Attempting to work in a virtualenv. If you encounter problems, please "
Python 3.7.6 (default, Dec 30 2019, 19:38:28)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.9.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import flax
```
... this goes fine.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2020 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """setup.py for Flax."""
16
17 import os
18 from setuptools import find_packages
19 from setuptools import setup
20
21 version = "0.1.0"
22
23 here = os.path.abspath(os.path.dirname(__file__))
24 try:
25 README = open(os.path.join(here, "README.md"), encoding='utf-8').read()
26 except IOError:
27 README = ""
28
29 install_requires = [
30 "numpy>=1.12",
31 "jax>=0.1.59",
32 "matplotlib", # only needed for tensorboard export
33 "dataclasses", # will only install on py3.6
34 "msgpack",
35 ]
36
37 tests_require = [
38 "jaxlib",
39 "pytest",
40 "pytest-cov",
41 "pytest-xdist",
42 "tensorflow",
43 "tensorflow_datasets",
44 ]
45
46 setup(
47 name="flax",
48 version=version,
49 description="Flax: A neural network library for JAX designed for flexibility",
50 long_description="\n\n".join([README]),
51 long_description_content_type='text/markdown',
52 classifiers=[
53 "Development Status :: 3 - Alpha",
54 "Intended Audience :: Developers",
55 "Intended Audience :: Science/Research",
56 "License :: OSI Approved :: MIT License",
57 "Programming Language :: Python :: 3.7",
58 "Topic :: Scientific/Engineering :: Artificial Intelligence",
59 ],
60 keywords="",
61 author="Flax team",
62 author_email="[email protected]",
63 url="https://github.com/google/flax",
64 license="Apache",
65 packages=find_packages(),
66 include_package_data=False,
67 zip_safe=False,
68 install_requires=install_requires,
69 extras_require={
70 "testing": tests_require,
71 },
72 )
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -30,7 +30,7 @@
"numpy>=1.12",
"jax>=0.1.59",
"matplotlib", # only needed for tensorboard export
- "dataclasses", # will only install on py3.6
+ "dataclasses;python_version<'3.7'", # will only install on py3.6
"msgpack",
]
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -30,7 +30,7 @@\n \"numpy>=1.12\",\n \"jax>=0.1.59\",\n \"matplotlib\", # only needed for tensorboard export\n- \"dataclasses\", # will only install on py3.6\n+ \"dataclasses;python_version<'3.7'\", # will only install on py3.6\n \"msgpack\",\n ]\n", "issue": "`typing._ClassVar` cannot be accessed in the iPython shell \u2013 dataclasses package seems to mess up on Python 3.7\n### Problem you have encountered:\r\n\r\nI just installed flax and tried to import it from the iPython shell. But it raises an `AttributeError`.\r\n\r\n```\r\nIn [1]: import flax\r\n\r\n*snip*\r\n\r\n~/.virtualenvs/flax2/lib/python3.7/site-packages/dataclasses.py in _is_classvar(a_type, typing)\r\n 548 # This test uses a typing internal class, but it's the best way to\r\n 549 # test if this is a ClassVar.\r\n--> 550 return type(a_type) is typing._ClassVar\r\n 551\r\n 552\r\n\r\nAttributeError: module 'typing' has no attribute '_ClassVar'\r\n```\r\nThis does not happen in the normal interpreter, where everything goes fine. \r\n\r\n### What you expected to happen:\r\n\r\nI expected the import to work the same in iPython and the normal python shell.\r\n\r\n### Logs, error messages, etc:\r\n\r\nFull traceback in this gist: https://gist.github.com/bayerj/96f096c7fb09a7c9b758dabdbca32671\r\n\r\n### Steps to reproduce:\r\n\r\nOn Mac OS X with Python 3.7.6, not anaconda, virtuelenvwrapper installed.\r\n\r\n```\r\n\u276f\u276f\u276f mkvirtualenv flax2\r\n\u276f\u276f\u276f pip install jaxlib\r\n*snip*\r\n\u276f\u276f\u276f pip install flax\r\n*snip*\r\n\u276f\u276f\u276f ipython\r\n\r\n*snip*\r\n\r\nIn [1]: import flax\r\n```\r\n\r\n### Workaround\r\n\r\nThe problem seems to be in the `dataclasses` package\u2013not python's own one\u2013from PyPI. If I uninstall it...\r\n\r\n```\r\n\u276f\u276f\u276f pip uninstall dataclasses\r\nFound existing installation: dataclasses 0.6\r\nUninstalling dataclasses-0.6:\r\n Would remove:\r\n /Users/bayerj/.virtualenvs/debug2/lib/python3.7/site-packages/dataclasses-0.6.dist-info/*\r\n /Users/bayerj/.virtualenvs/debug2/lib/python3.7/site-packages/dataclasses.py\r\nProceed (y/n)? y\r\n Successfully uninstalled dataclasses-0.6\r\n\u276f\u276f\u276f ipython\r\n/usr/local/lib/python3.7/site-packages/IPython/core/interactiveshell.py:931: UserWarning: Attempting to work in a virtualenv. If you encounter problems, please install IPython inside the virtualenv.\r\n warn(\"Attempting to work in a virtualenv. If you encounter problems, please \"\r\nPython 3.7.6 (default, Dec 30 2019, 19:38:28)\r\nType 'copyright', 'credits' or 'license' for more information\r\nIPython 7.9.0 -- An enhanced Interactive Python. Type '?' for help.\r\n\r\nIn [1]: import flax\r\n```\r\n... 
this goes fine.\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2020 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"setup.py for Flax.\"\"\"\n\nimport os\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nversion = \"0.1.0\"\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n README = open(os.path.join(here, \"README.md\"), encoding='utf-8').read()\nexcept IOError:\n README = \"\"\n\ninstall_requires = [\n \"numpy>=1.12\",\n \"jax>=0.1.59\",\n \"matplotlib\", # only needed for tensorboard export\n \"dataclasses\", # will only install on py3.6\n \"msgpack\",\n]\n\ntests_require = [\n \"jaxlib\",\n \"pytest\",\n \"pytest-cov\",\n \"pytest-xdist\",\n \"tensorflow\",\n \"tensorflow_datasets\",\n]\n\nsetup(\n name=\"flax\",\n version=version,\n description=\"Flax: A neural network library for JAX designed for flexibility\",\n long_description=\"\\n\\n\".join([README]),\n long_description_content_type='text/markdown',\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n keywords=\"\",\n author=\"Flax team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/google/flax\",\n license=\"Apache\",\n packages=find_packages(),\n include_package_data=False,\n zip_safe=False,\n install_requires=install_requires,\n extras_require={\n \"testing\": tests_require,\n },\n )\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2020 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"setup.py for Flax.\"\"\"\n\nimport os\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nversion = \"0.1.0\"\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n README = open(os.path.join(here, \"README.md\"), encoding='utf-8').read()\nexcept IOError:\n README = \"\"\n\ninstall_requires = [\n \"numpy>=1.12\",\n \"jax>=0.1.59\",\n \"matplotlib\", # only needed for tensorboard export\n \"dataclasses;python_version<'3.7'\", # will only install on py3.6\n \"msgpack\",\n]\n\ntests_require = [\n \"jaxlib\",\n \"pytest\",\n \"pytest-cov\",\n \"pytest-xdist\",\n \"tensorflow\",\n \"tensorflow_datasets\",\n]\n\nsetup(\n name=\"flax\",\n version=version,\n description=\"Flax: A neural network library for JAX designed for flexibility\",\n 
long_description=\"\\n\\n\".join([README]),\n long_description_content_type='text/markdown',\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n keywords=\"\",\n author=\"Flax team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/google/flax\",\n license=\"Apache\",\n packages=find_packages(),\n include_package_data=False,\n zip_safe=False,\n install_requires=install_requires,\n extras_require={\n \"testing\": tests_require,\n },\n )\n", "path": "setup.py"}]} | 1,546 | 111 |
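
The flax patch above is a one-token change to `install_requires`: a PEP 508 environment marker restricts the `dataclasses` backport to Python 3.6, so it is never installed on 3.7+, where it would shadow the standard-library module and trigger the `typing._ClassVar` error described in the issue. A sketch of the resulting dependency list, trimmed to the lines shown in the record:

```python
# setup.py (excerpt) after the patch: pip evaluates the ";python_version<'3.7'"
# marker at install time, so the dataclasses backport is skipped on 3.7+.
install_requires = [
    "numpy>=1.12",
    "jax>=0.1.59",
    "matplotlib",  # only needed for tensorboard export
    "dataclasses;python_version<'3.7'",  # backport needed on py3.6 only
    "msgpack",
]
```
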
gh_patches_debug_58693 | rasdani/github-patches | git_diff | google-parfait__tensorflow-federated-1334 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Keras model in federated_learning_for_image_classification.ipynb throws warning
**Describe the bug**
The Keras Sequential model in [federated_learning_for_image_classification.ipynb](https://github.com/tensorflow/federated/blob/master/docs/tutorials/federated_learning_for_image_classification.ipynb) throws a warning.
The model in the notebook is
```python
def create_keras_model():
return tf.keras.models.Sequential([
tf.keras.layers.Input(shape=(784,)),
tf.keras.layers.Dense(10, kernel_initializer='zeros'),
tf.keras.layers.Softmax(),
])
```
Warning thrown:
```python
WARNING:tensorflow:Please add `keras.layers.InputLayer` instead of `keras.Input` to Sequential model. `keras.Input` is intended to be used by Functional model.
<tensorflow.python.keras.engine.sequential.Sequential at 0x7f66178a46d0>
```
Easily fixed using the correct layer type:
```python
def create_keras_model():
return tf.keras.models.Sequential([
tf.keras.layers.InputLayer(input_shape=(784,)),
tf.keras.layers.Dense(10, kernel_initializer='zeros'),
tf.keras.layers.Softmax(),
])
```
[colab](https://colab.research.google.com/drive/1LFgBiu9xUa-k92IW24fiSX_kVp7lb0SB?usp=sharing) notebook that reproduces the bug.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensorflow_federated/python/examples/remote_execution/remote_executor_example.py`
Content:
```
1 # Copyright 2018, The TensorFlow Federated Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Example showing how to run a multi-machine simulation.
15
16 In order to run this example, you must have a running instance of the
17 Executor Service, either locally or on Kubernetes.
18
19 The model trains EMNIST for a small number of rounds, but uses a RemoteExecutor
20 to distribute the work to the ExecutorService.
21 """
22
23 import collections
24 import warnings
25
26 from absl import app
27 from absl import flags
28 import grpc
29 import numpy as np
30 import tensorflow as tf
31 import tensorflow_federated as tff
32
33 FLAGS = flags.FLAGS
34
35 flags.DEFINE_string('host', None, 'The host to connect to.')
36 flags.mark_flag_as_required('host')
37 flags.DEFINE_string('port', '8000', 'The port to connect to.')
38 flags.DEFINE_integer('n_clients', 10, 'Number of clients.')
39 flags.DEFINE_integer('n_rounds', 3, 'Number of rounds.')
40
41
42 def preprocess(dataset):
43
44 def element_fn(element):
45 return collections.OrderedDict([
46 ('x', tf.reshape(element['pixels'], [-1])),
47 ('y', tf.reshape(element['label'], [1])),
48 ])
49
50 return dataset.repeat(NUM_EPOCHS).map(element_fn).batch(BATCH_SIZE)
51
52
53 def make_federated_data(client_data, client_ids):
54 return [
55 preprocess(client_data.create_tf_dataset_for_client(x))
56 for x in client_ids
57 ]
58
59
60 NUM_EPOCHS = 10
61 BATCH_SIZE = 20
62
63
64 def make_remote_executor(inferred_cardinalities):
65 """Make remote executor."""
66
67 def create_worker_stack(ex):
68 ex = tff.framework.ThreadDelegatingExecutor(ex)
69 return tff.framework.ReferenceResolvingExecutor(ex)
70
71 client_ex = []
72 num_clients = inferred_cardinalities.get(tff.CLIENTS, None)
73 if num_clients:
74 print('Inferred that there are {} clients'.format(num_clients))
75 else:
76 print('No CLIENTS placement provided')
77
78 for _ in range(num_clients or 0):
79 channel = grpc.insecure_channel('{}:{}'.format(FLAGS.host, FLAGS.port))
80 remote_ex = tff.framework.RemoteExecutor(channel)
81 worker_stack = create_worker_stack(remote_ex)
82 client_ex.append(worker_stack)
83
84 federating_strategy_factory = tff.framework.FederatedResolvingStrategy.factory(
85 {
86 tff.SERVER: create_worker_stack(tff.framework.EagerTFExecutor()),
87 tff.CLIENTS: client_ex,
88 })
89 unplaced_ex = create_worker_stack(tff.framework.EagerTFExecutor())
90 federating_ex = tff.framework.FederatingExecutor(federating_strategy_factory,
91 unplaced_ex)
92 return tff.framework.ReferenceResolvingExecutor(federating_ex)
93
94
95 def main(argv):
96 if len(argv) > 1:
97 raise app.UsageError('Too many command-line arguments.')
98
99 warnings.simplefilter('ignore')
100
101 np.random.seed(0)
102
103 emnist_train, _ = tff.simulation.datasets.emnist.load_data()
104
105 sample_clients = emnist_train.client_ids[0:FLAGS.n_clients]
106
107 federated_train_data = make_federated_data(emnist_train, sample_clients)
108
109 example_dataset = emnist_train.create_tf_dataset_for_client(
110 emnist_train.client_ids[0])
111
112 preprocessed_example_dataset = preprocess(example_dataset)
113 input_spec = preprocessed_example_dataset.element_spec
114
115 def model_fn():
116 model = tf.keras.models.Sequential([
117 tf.keras.layers.Input(shape=(784,)),
118 tf.keras.layers.Dense(10, kernel_initializer='zeros'),
119 tf.keras.layers.Softmax(),
120 ])
121 return tff.learning.from_keras_model(
122 model,
123 input_spec=input_spec,
124 loss=tf.keras.losses.SparseCategoricalCrossentropy(),
125 metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
126
127 iterative_process = tff.learning.build_federated_averaging_process(
128 model_fn,
129 client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02))
130
131 factory = tff.framework.ResourceManagingExecutorFactory(make_remote_executor)
132 context = tff.framework.ExecutionContext(factory)
133 tff.framework.set_default_context(context)
134
135 state = iterative_process.initialize()
136
137 state, metrics = iterative_process.next(state, federated_train_data)
138 print('round 1, metrics={}'.format(metrics))
139
140 for round_num in range(2, FLAGS.n_rounds + 1):
141 state, metrics = iterative_process.next(state, federated_train_data)
142 print('round {:2d}, metrics={}'.format(round_num, metrics))
143
144
145 if __name__ == '__main__':
146 app.run(main)
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tensorflow_federated/python/examples/remote_execution/remote_executor_example.py b/tensorflow_federated/python/examples/remote_execution/remote_executor_example.py
--- a/tensorflow_federated/python/examples/remote_execution/remote_executor_example.py
+++ b/tensorflow_federated/python/examples/remote_execution/remote_executor_example.py
@@ -114,7 +114,7 @@
def model_fn():
model = tf.keras.models.Sequential([
- tf.keras.layers.Input(shape=(784,)),
+ tf.keras.layers.InputLayer(input_shape=(784,)),
tf.keras.layers.Dense(10, kernel_initializer='zeros'),
tf.keras.layers.Softmax(),
])
| {"golden_diff": "diff --git a/tensorflow_federated/python/examples/remote_execution/remote_executor_example.py b/tensorflow_federated/python/examples/remote_execution/remote_executor_example.py\n--- a/tensorflow_federated/python/examples/remote_execution/remote_executor_example.py\n+++ b/tensorflow_federated/python/examples/remote_execution/remote_executor_example.py\n@@ -114,7 +114,7 @@\n \n def model_fn():\n model = tf.keras.models.Sequential([\n- tf.keras.layers.Input(shape=(784,)),\n+ tf.keras.layers.InputLayer(input_shape=(784,)),\n tf.keras.layers.Dense(10, kernel_initializer='zeros'),\n tf.keras.layers.Softmax(),\n ])\n", "issue": "Keras model in federated_learning_for_image_classification.ipynb throws warning\n**Describe the bug**\r\nKeras Sequential Model in [federated_learning_for_image_classification.ipynb](https://github.com/tensorflow/federated/blob/master/docs/tutorials/federated_learning_for_image_classification.ipynb) throws warning.\r\nThe model in the notebook is\r\n```python\r\ndef create_keras_model():\r\n return tf.keras.models.Sequential([\r\n tf.keras.layers.Input(shape=(784,)),\r\n tf.keras.layers.Dense(10, kernel_initializer='zeros'),\r\n tf.keras.layers.Softmax(),\r\n ])\r\n```\r\nWarning thrown:\r\n```python\r\nWARNING:tensorflow:Please add `keras.layers.InputLayer` instead of `keras.Input` to Sequential model. `keras.Input` is intended to be used by Functional model.\r\n<tensorflow.python.keras.engine.sequential.Sequential at 0x7f66178a46d0>\r\n```\r\n\r\nEasily fixed using the correct layer type:\r\n```python\r\ndef create_keras_model():\r\n return tf.keras.models.Sequential([\r\n tf.keras.layers.InputLayer(input_shape=(784,)),\r\n tf.keras.layers.Dense(10, kernel_initializer='zeros'),\r\n tf.keras.layers.Softmax(),\r\n ])\r\n```\r\n\r\n[colab](https://colab.research.google.com/drive/1LFgBiu9xUa-k92IW24fiSX_kVp7lb0SB?usp=sharing) notebook that reproduces the bug.\r\n\n", "before_files": [{"content": "# Copyright 2018, The TensorFlow Federated Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Example showing how to run a multi-machine simulation.\n\nIn order to run this example, you must have a running instance of the\nExecutor Service, either locally or on Kubernetes.\n\nThe model trains EMNIST for a small number of rounds, but uses a RemoteExecutor\nto distribute the work to the ExecutorService.\n\"\"\"\n\nimport collections\nimport warnings\n\nfrom absl import app\nfrom absl import flags\nimport grpc\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow_federated as tff\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string('host', None, 'The host to connect to.')\nflags.mark_flag_as_required('host')\nflags.DEFINE_string('port', '8000', 'The port to connect to.')\nflags.DEFINE_integer('n_clients', 10, 'Number of clients.')\nflags.DEFINE_integer('n_rounds', 3, 'Number of rounds.')\n\n\ndef preprocess(dataset):\n\n def element_fn(element):\n return collections.OrderedDict([\n ('x', tf.reshape(element['pixels'], [-1])),\n ('y', 
tf.reshape(element['label'], [1])),\n ])\n\n return dataset.repeat(NUM_EPOCHS).map(element_fn).batch(BATCH_SIZE)\n\n\ndef make_federated_data(client_data, client_ids):\n return [\n preprocess(client_data.create_tf_dataset_for_client(x))\n for x in client_ids\n ]\n\n\nNUM_EPOCHS = 10\nBATCH_SIZE = 20\n\n\ndef make_remote_executor(inferred_cardinalities):\n \"\"\"Make remote executor.\"\"\"\n\n def create_worker_stack(ex):\n ex = tff.framework.ThreadDelegatingExecutor(ex)\n return tff.framework.ReferenceResolvingExecutor(ex)\n\n client_ex = []\n num_clients = inferred_cardinalities.get(tff.CLIENTS, None)\n if num_clients:\n print('Inferred that there are {} clients'.format(num_clients))\n else:\n print('No CLIENTS placement provided')\n\n for _ in range(num_clients or 0):\n channel = grpc.insecure_channel('{}:{}'.format(FLAGS.host, FLAGS.port))\n remote_ex = tff.framework.RemoteExecutor(channel)\n worker_stack = create_worker_stack(remote_ex)\n client_ex.append(worker_stack)\n\n federating_strategy_factory = tff.framework.FederatedResolvingStrategy.factory(\n {\n tff.SERVER: create_worker_stack(tff.framework.EagerTFExecutor()),\n tff.CLIENTS: client_ex,\n })\n unplaced_ex = create_worker_stack(tff.framework.EagerTFExecutor())\n federating_ex = tff.framework.FederatingExecutor(federating_strategy_factory,\n unplaced_ex)\n return tff.framework.ReferenceResolvingExecutor(federating_ex)\n\n\ndef main(argv):\n if len(argv) > 1:\n raise app.UsageError('Too many command-line arguments.')\n\n warnings.simplefilter('ignore')\n\n np.random.seed(0)\n\n emnist_train, _ = tff.simulation.datasets.emnist.load_data()\n\n sample_clients = emnist_train.client_ids[0:FLAGS.n_clients]\n\n federated_train_data = make_federated_data(emnist_train, sample_clients)\n\n example_dataset = emnist_train.create_tf_dataset_for_client(\n emnist_train.client_ids[0])\n\n preprocessed_example_dataset = preprocess(example_dataset)\n input_spec = preprocessed_example_dataset.element_spec\n\n def model_fn():\n model = tf.keras.models.Sequential([\n tf.keras.layers.Input(shape=(784,)),\n tf.keras.layers.Dense(10, kernel_initializer='zeros'),\n tf.keras.layers.Softmax(),\n ])\n return tff.learning.from_keras_model(\n model,\n input_spec=input_spec,\n loss=tf.keras.losses.SparseCategoricalCrossentropy(),\n metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])\n\n iterative_process = tff.learning.build_federated_averaging_process(\n model_fn,\n client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02))\n\n factory = tff.framework.ResourceManagingExecutorFactory(make_remote_executor)\n context = tff.framework.ExecutionContext(factory)\n tff.framework.set_default_context(context)\n\n state = iterative_process.initialize()\n\n state, metrics = iterative_process.next(state, federated_train_data)\n print('round 1, metrics={}'.format(metrics))\n\n for round_num in range(2, FLAGS.n_rounds + 1):\n state, metrics = iterative_process.next(state, federated_train_data)\n print('round {:2d}, metrics={}'.format(round_num, metrics))\n\n\nif __name__ == '__main__':\n app.run(main)\n", "path": "tensorflow_federated/python/examples/remote_execution/remote_executor_example.py"}], "after_files": [{"content": "# Copyright 2018, The TensorFlow Federated Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed 
to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Example showing how to run a multi-machine simulation.\n\nIn order to run this example, you must have a running instance of the\nExecutor Service, either locally or on Kubernetes.\n\nThe model trains EMNIST for a small number of rounds, but uses a RemoteExecutor\nto distribute the work to the ExecutorService.\n\"\"\"\n\nimport collections\nimport warnings\n\nfrom absl import app\nfrom absl import flags\nimport grpc\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow_federated as tff\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string('host', None, 'The host to connect to.')\nflags.mark_flag_as_required('host')\nflags.DEFINE_string('port', '8000', 'The port to connect to.')\nflags.DEFINE_integer('n_clients', 10, 'Number of clients.')\nflags.DEFINE_integer('n_rounds', 3, 'Number of rounds.')\n\n\ndef preprocess(dataset):\n\n def element_fn(element):\n return collections.OrderedDict([\n ('x', tf.reshape(element['pixels'], [-1])),\n ('y', tf.reshape(element['label'], [1])),\n ])\n\n return dataset.repeat(NUM_EPOCHS).map(element_fn).batch(BATCH_SIZE)\n\n\ndef make_federated_data(client_data, client_ids):\n return [\n preprocess(client_data.create_tf_dataset_for_client(x))\n for x in client_ids\n ]\n\n\nNUM_EPOCHS = 10\nBATCH_SIZE = 20\n\n\ndef make_remote_executor(inferred_cardinalities):\n \"\"\"Make remote executor.\"\"\"\n\n def create_worker_stack(ex):\n ex = tff.framework.ThreadDelegatingExecutor(ex)\n return tff.framework.ReferenceResolvingExecutor(ex)\n\n client_ex = []\n num_clients = inferred_cardinalities.get(tff.CLIENTS, None)\n if num_clients:\n print('Inferred that there are {} clients'.format(num_clients))\n else:\n print('No CLIENTS placement provided')\n\n for _ in range(num_clients or 0):\n channel = grpc.insecure_channel('{}:{}'.format(FLAGS.host, FLAGS.port))\n remote_ex = tff.framework.RemoteExecutor(channel)\n worker_stack = create_worker_stack(remote_ex)\n client_ex.append(worker_stack)\n\n federating_strategy_factory = tff.framework.FederatedResolvingStrategy.factory(\n {\n tff.SERVER: create_worker_stack(tff.framework.EagerTFExecutor()),\n tff.CLIENTS: client_ex,\n })\n unplaced_ex = create_worker_stack(tff.framework.EagerTFExecutor())\n federating_ex = tff.framework.FederatingExecutor(federating_strategy_factory,\n unplaced_ex)\n return tff.framework.ReferenceResolvingExecutor(federating_ex)\n\n\ndef main(argv):\n if len(argv) > 1:\n raise app.UsageError('Too many command-line arguments.')\n\n warnings.simplefilter('ignore')\n\n np.random.seed(0)\n\n emnist_train, _ = tff.simulation.datasets.emnist.load_data()\n\n sample_clients = emnist_train.client_ids[0:FLAGS.n_clients]\n\n federated_train_data = make_federated_data(emnist_train, sample_clients)\n\n example_dataset = emnist_train.create_tf_dataset_for_client(\n emnist_train.client_ids[0])\n\n preprocessed_example_dataset = preprocess(example_dataset)\n input_spec = preprocessed_example_dataset.element_spec\n\n def model_fn():\n model = tf.keras.models.Sequential([\n tf.keras.layers.InputLayer(input_shape=(784,)),\n tf.keras.layers.Dense(10, kernel_initializer='zeros'),\n tf.keras.layers.Softmax(),\n ])\n return tff.learning.from_keras_model(\n model,\n input_spec=input_spec,\n loss=tf.keras.losses.SparseCategoricalCrossentropy(),\n 
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])\n\n iterative_process = tff.learning.build_federated_averaging_process(\n model_fn,\n client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02))\n\n factory = tff.framework.ResourceManagingExecutorFactory(make_remote_executor)\n context = tff.framework.ExecutionContext(factory)\n tff.framework.set_default_context(context)\n\n state = iterative_process.initialize()\n\n state, metrics = iterative_process.next(state, federated_train_data)\n print('round 1, metrics={}'.format(metrics))\n\n for round_num in range(2, FLAGS.n_rounds + 1):\n state, metrics = iterative_process.next(state, federated_train_data)\n print('round {:2d}, metrics={}'.format(round_num, metrics))\n\n\nif __name__ == '__main__':\n app.run(main)\n", "path": "tensorflow_federated/python/examples/remote_execution/remote_executor_example.py"}]} | 2,040 | 156 |
gh_patches_debug_32843 | rasdani/github-patches | git_diff | nextcloud__appstore-693 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Password reset sends outdated reset token
When requesting a password reset link from apps.nextcloud.com, the token in the link of the reset e-mail does not work; the website reports:
> Bad API Token
>
> The password reset link was invalid, possibly because it has already been used. Please request a new password reset.
When requesting a new password reset, an e-mail with the very same token is sent. As a consequence, it is impossible to change the password.
## Details
* Firefox 72.0.2
## Steps to reproduce
1. "Forget" your password
2. Request password reset link
3. Receive e-mail and open reset link
4. Observe error, follow suggestion and request new link
5. Receive e-mail with exactly the same token
--- END ISSUE ---
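A hedged aside on the symptom: it is consistent with the reset token being minted by one token generator but validated by a different one, in which case no freshly requested link can ever work. Below is a minimal sketch of that mismatch (assumption: allauth's reset-from-key view checks tokens with `EmailAwarePasswordResetTokenGenerator`; the `tokens_match` helper is invented purely for illustration):

```python
# Sketch only: illustrates a make/check mismatch between token generators.
# `default_token_generator` is what the custom reset form below uses to mint tokens;
# `EmailAwarePasswordResetTokenGenerator` is assumed to be what allauth's
# reset-from-key view uses to validate them.
from allauth.account.forms import EmailAwarePasswordResetTokenGenerator
from django.contrib.auth.tokens import default_token_generator


def tokens_match(user) -> bool:
    token = default_token_generator.make_token(user)    # minted when the reset e-mail is sent
    checker = EmailAwarePasswordResetTokenGenerator()   # used when the link is opened
    return checker.check_token(user, token)             # False -> "Bad API Token"
```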
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nextcloudappstore/user/forms.py`
Content:
```
1 from allauth.account.utils import filter_users_by_email, user_username, \
2 user_pk_to_url_str
3 from django import forms
4 from django.contrib.auth import get_user_model
5 from django.forms import EmailField, CharField, PasswordInput
6 from django.utils.translation import ugettext_lazy as _
7 from snowpenguin.django.recaptcha2.fields import ReCaptchaField
8 from snowpenguin.django.recaptcha2.widgets import ReCaptchaWidget
9
10
11 class SignupFormRecaptcha(forms.Form):
12 """integrate a recaptcha field."""
13 recaptcha = ReCaptchaField(widget=ReCaptchaWidget())
14 first_name = CharField(max_length=30, label=_('First name'))
15 last_name = CharField(max_length=30, label=_('Last name'))
16
17 def signup(self, request, user):
18 user.first_name = self.cleaned_data['first_name']
19 user.last_name = self.cleaned_data['last_name']
20 user.save()
21
22
23 class DeleteAccountForm(forms.Form):
24 email = EmailField(required=True, label=_('Your e-mail address'))
25
26 def __init__(self, *args, **kwargs):
27 self.user = kwargs.pop('user', None)
28 super().__init__(*args, **kwargs)
29
30 def clean_email(self):
31 email = self.cleaned_data.get('email')
32 if self.user and self.user.email == email:
33 return email
34 else:
35 raise forms.ValidationError(_(
36 'The given e-mail address does not match your e-mail address'))
37
38
39 class AccountForm(forms.ModelForm):
40 passwd = CharField(widget=PasswordInput(), label=_('Confirm password'),
41 help_text=_('Password is required to prevent '
42 'unauthorized users from changing your '
43 'email address and resetting your '
44 'password. This field does not update your '
45 'password!'))
46
47 class Meta:
48 model = get_user_model()
49 fields = ('first_name', 'last_name', 'email')
50
51 def clean_email(self):
52 value = self.cleaned_data['email']
53 users = filter_users_by_email(value)
54 if [u for u in users if u.pk != self.instance.pk]:
55 msg = _(
56 'This e-mail address is already associated with another '
57 'account.')
58 raise forms.ValidationError(msg)
59 return value
60
61 def clean_passwd(self):
62 value = self.cleaned_data['passwd']
63 if self.instance.check_password(value):
64 return value
65 else:
66 raise forms.ValidationError(_('Invalid password'))
67
68
69 class CustomResetPasswordForm(forms.Form):
70 # remove this class once issue #1307 is resolved django-allauth
71 email = forms.EmailField(
72 label=_("E-mail"),
73 required=True,
74 widget=forms.TextInput(attrs={
75 "type": "email",
76 "size": "30",
77 "placeholder": _("E-mail address"),
78 })
79 )
80
81 def clean_email(self):
82 email = self.cleaned_data["email"]
83 from allauth.account.adapter import get_adapter
84 email = get_adapter().clean_email(email)
85 self.users = filter_users_by_email(email)
86
87 return self.cleaned_data["email"]
88
89 def save(self, request, **kwargs):
90 from django.contrib.sites.shortcuts import get_current_site
91 current_site = get_current_site(request)
92 email = self.cleaned_data["email"]
93 from django.contrib.auth.tokens import default_token_generator
94 token_generator = kwargs.get("token_generator",
95 default_token_generator)
96
97 for user in self.users:
98 temp_key = token_generator.make_token(user)
99
100 # save it to the password reset model
101 # password_reset = PasswordReset(user=user, temp_key=temp_key)
102 # password_reset.save()
103
104 # send the password reset email
105 from django.urls import reverse
106 path = reverse("account_reset_password_from_key",
107 kwargs=dict(uidb36=user_pk_to_url_str(user),
108 key=temp_key))
109 from allauth.utils import build_absolute_uri
110 url = build_absolute_uri(
111 request, path)
112
113 context = {"current_site": current_site,
114 "user": user,
115 "password_reset_url": url,
116 "request": request}
117
118 from allauth.account import app_settings
119
120 if app_settings.AUTHENTICATION_METHOD \
121 != app_settings.AuthenticationMethod.EMAIL:
122 context['username'] = user_username(user)
123 from allauth.account.adapter import get_adapter
124 get_adapter(request).send_mail(
125 'account/email/password_reset_key',
126 email,
127 context)
128 return self.cleaned_data["email"]
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nextcloudappstore/user/forms.py b/nextcloudappstore/user/forms.py
--- a/nextcloudappstore/user/forms.py
+++ b/nextcloudappstore/user/forms.py
@@ -1,3 +1,4 @@
+from allauth.account.forms import EmailAwarePasswordResetTokenGenerator
from allauth.account.utils import filter_users_by_email, user_username, \
user_pk_to_url_str
from django import forms
@@ -82,7 +83,7 @@
email = self.cleaned_data["email"]
from allauth.account.adapter import get_adapter
email = get_adapter().clean_email(email)
- self.users = filter_users_by_email(email)
+ self.users = filter_users_by_email(email, is_active=True)
return self.cleaned_data["email"]
@@ -90,9 +91,7 @@
from django.contrib.sites.shortcuts import get_current_site
current_site = get_current_site(request)
email = self.cleaned_data["email"]
- from django.contrib.auth.tokens import default_token_generator
- token_generator = kwargs.get("token_generator",
- default_token_generator)
+ token_generator = EmailAwarePasswordResetTokenGenerator()
for user in self.users:
temp_key = token_generator.make_token(user)
@@ -118,7 +117,7 @@
from allauth.account import app_settings
if app_settings.AUTHENTICATION_METHOD \
- != app_settings.AuthenticationMethod.EMAIL:
+ != app_settings.AuthenticationMethod.EMAIL:
context['username'] = user_username(user)
from allauth.account.adapter import get_adapter
get_adapter(request).send_mail(
| {"golden_diff": "diff --git a/nextcloudappstore/user/forms.py b/nextcloudappstore/user/forms.py\n--- a/nextcloudappstore/user/forms.py\n+++ b/nextcloudappstore/user/forms.py\n@@ -1,3 +1,4 @@\n+from allauth.account.forms import EmailAwarePasswordResetTokenGenerator\n from allauth.account.utils import filter_users_by_email, user_username, \\\n user_pk_to_url_str\n from django import forms\n@@ -82,7 +83,7 @@\n email = self.cleaned_data[\"email\"]\n from allauth.account.adapter import get_adapter\n email = get_adapter().clean_email(email)\n- self.users = filter_users_by_email(email)\n+ self.users = filter_users_by_email(email, is_active=True)\n \n return self.cleaned_data[\"email\"]\n \n@@ -90,9 +91,7 @@\n from django.contrib.sites.shortcuts import get_current_site\n current_site = get_current_site(request)\n email = self.cleaned_data[\"email\"]\n- from django.contrib.auth.tokens import default_token_generator\n- token_generator = kwargs.get(\"token_generator\",\n- default_token_generator)\n+ token_generator = EmailAwarePasswordResetTokenGenerator()\n \n for user in self.users:\n temp_key = token_generator.make_token(user)\n@@ -118,7 +117,7 @@\n from allauth.account import app_settings\n \n if app_settings.AUTHENTICATION_METHOD \\\n- != app_settings.AuthenticationMethod.EMAIL:\n+ != app_settings.AuthenticationMethod.EMAIL:\n context['username'] = user_username(user)\n from allauth.account.adapter import get_adapter\n get_adapter(request).send_mail(\n", "issue": "Password reset sends outdated reset token\nWhen requesting a password reset link from apps.nextcloud.com, the token in the link of the reset e-mail does not work, the website reports:\r\n\r\n> Bad API Token\r\n> \r\n> The password reset link was invalid, possibly because it has already been used. Please request a new password reset.\r\n\r\nWhen requesting a new password reset, an e-mail with the very same token is sent. In consequence it is impossible to change the password.\r\n\r\n## Details\r\n\r\n* Firefox 72.0.2\r\n\r\n## Steps to reproduce\r\n\r\n1. \"Forget\" your password\r\n2. Request password reset link\r\n3. Receive e-mail and open reset link\r\n4. Observe error, follow suggestion and request new link\r\n5. 
Receive e-mail with exactly the same token\r\n\n", "before_files": [{"content": "from allauth.account.utils import filter_users_by_email, user_username, \\\n user_pk_to_url_str\nfrom django import forms\nfrom django.contrib.auth import get_user_model\nfrom django.forms import EmailField, CharField, PasswordInput\nfrom django.utils.translation import ugettext_lazy as _\nfrom snowpenguin.django.recaptcha2.fields import ReCaptchaField\nfrom snowpenguin.django.recaptcha2.widgets import ReCaptchaWidget\n\n\nclass SignupFormRecaptcha(forms.Form):\n \"\"\"integrate a recaptcha field.\"\"\"\n recaptcha = ReCaptchaField(widget=ReCaptchaWidget())\n first_name = CharField(max_length=30, label=_('First name'))\n last_name = CharField(max_length=30, label=_('Last name'))\n\n def signup(self, request, user):\n user.first_name = self.cleaned_data['first_name']\n user.last_name = self.cleaned_data['last_name']\n user.save()\n\n\nclass DeleteAccountForm(forms.Form):\n email = EmailField(required=True, label=_('Your e-mail address'))\n\n def __init__(self, *args, **kwargs):\n self.user = kwargs.pop('user', None)\n super().__init__(*args, **kwargs)\n\n def clean_email(self):\n email = self.cleaned_data.get('email')\n if self.user and self.user.email == email:\n return email\n else:\n raise forms.ValidationError(_(\n 'The given e-mail address does not match your e-mail address'))\n\n\nclass AccountForm(forms.ModelForm):\n passwd = CharField(widget=PasswordInput(), label=_('Confirm password'),\n help_text=_('Password is required to prevent '\n 'unauthorized users from changing your '\n 'email address and resetting your '\n 'password. This field does not update your '\n 'password!'))\n\n class Meta:\n model = get_user_model()\n fields = ('first_name', 'last_name', 'email')\n\n def clean_email(self):\n value = self.cleaned_data['email']\n users = filter_users_by_email(value)\n if [u for u in users if u.pk != self.instance.pk]:\n msg = _(\n 'This e-mail address is already associated with another '\n 'account.')\n raise forms.ValidationError(msg)\n return value\n\n def clean_passwd(self):\n value = self.cleaned_data['passwd']\n if self.instance.check_password(value):\n return value\n else:\n raise forms.ValidationError(_('Invalid password'))\n\n\nclass CustomResetPasswordForm(forms.Form):\n # remove this class once issue #1307 is resolved django-allauth\n email = forms.EmailField(\n label=_(\"E-mail\"),\n required=True,\n widget=forms.TextInput(attrs={\n \"type\": \"email\",\n \"size\": \"30\",\n \"placeholder\": _(\"E-mail address\"),\n })\n )\n\n def clean_email(self):\n email = self.cleaned_data[\"email\"]\n from allauth.account.adapter import get_adapter\n email = get_adapter().clean_email(email)\n self.users = filter_users_by_email(email)\n\n return self.cleaned_data[\"email\"]\n\n def save(self, request, **kwargs):\n from django.contrib.sites.shortcuts import get_current_site\n current_site = get_current_site(request)\n email = self.cleaned_data[\"email\"]\n from django.contrib.auth.tokens import default_token_generator\n token_generator = kwargs.get(\"token_generator\",\n default_token_generator)\n\n for user in self.users:\n temp_key = token_generator.make_token(user)\n\n # save it to the password reset model\n # password_reset = PasswordReset(user=user, temp_key=temp_key)\n # password_reset.save()\n\n # send the password reset email\n from django.urls import reverse\n path = reverse(\"account_reset_password_from_key\",\n kwargs=dict(uidb36=user_pk_to_url_str(user),\n key=temp_key))\n from allauth.utils 
import build_absolute_uri\n url = build_absolute_uri(\n request, path)\n\n context = {\"current_site\": current_site,\n \"user\": user,\n \"password_reset_url\": url,\n \"request\": request}\n\n from allauth.account import app_settings\n\n if app_settings.AUTHENTICATION_METHOD \\\n != app_settings.AuthenticationMethod.EMAIL:\n context['username'] = user_username(user)\n from allauth.account.adapter import get_adapter\n get_adapter(request).send_mail(\n 'account/email/password_reset_key',\n email,\n context)\n return self.cleaned_data[\"email\"]\n", "path": "nextcloudappstore/user/forms.py"}], "after_files": [{"content": "from allauth.account.forms import EmailAwarePasswordResetTokenGenerator\nfrom allauth.account.utils import filter_users_by_email, user_username, \\\n user_pk_to_url_str\nfrom django import forms\nfrom django.contrib.auth import get_user_model\nfrom django.forms import EmailField, CharField, PasswordInput\nfrom django.utils.translation import ugettext_lazy as _\nfrom snowpenguin.django.recaptcha2.fields import ReCaptchaField\nfrom snowpenguin.django.recaptcha2.widgets import ReCaptchaWidget\n\n\nclass SignupFormRecaptcha(forms.Form):\n \"\"\"integrate a recaptcha field.\"\"\"\n recaptcha = ReCaptchaField(widget=ReCaptchaWidget())\n first_name = CharField(max_length=30, label=_('First name'))\n last_name = CharField(max_length=30, label=_('Last name'))\n\n def signup(self, request, user):\n user.first_name = self.cleaned_data['first_name']\n user.last_name = self.cleaned_data['last_name']\n user.save()\n\n\nclass DeleteAccountForm(forms.Form):\n email = EmailField(required=True, label=_('Your e-mail address'))\n\n def __init__(self, *args, **kwargs):\n self.user = kwargs.pop('user', None)\n super().__init__(*args, **kwargs)\n\n def clean_email(self):\n email = self.cleaned_data.get('email')\n if self.user and self.user.email == email:\n return email\n else:\n raise forms.ValidationError(_(\n 'The given e-mail address does not match your e-mail address'))\n\n\nclass AccountForm(forms.ModelForm):\n passwd = CharField(widget=PasswordInput(), label=_('Confirm password'),\n help_text=_('Password is required to prevent '\n 'unauthorized users from changing your '\n 'email address and resetting your '\n 'password. 
This field does not update your '\n 'password!'))\n\n class Meta:\n model = get_user_model()\n fields = ('first_name', 'last_name', 'email')\n\n def clean_email(self):\n value = self.cleaned_data['email']\n users = filter_users_by_email(value)\n if [u for u in users if u.pk != self.instance.pk]:\n msg = _(\n 'This e-mail address is already associated with another '\n 'account.')\n raise forms.ValidationError(msg)\n return value\n\n def clean_passwd(self):\n value = self.cleaned_data['passwd']\n if self.instance.check_password(value):\n return value\n else:\n raise forms.ValidationError(_('Invalid password'))\n\n\nclass CustomResetPasswordForm(forms.Form):\n # remove this class once issue #1307 is resolved django-allauth\n email = forms.EmailField(\n label=_(\"E-mail\"),\n required=True,\n widget=forms.TextInput(attrs={\n \"type\": \"email\",\n \"size\": \"30\",\n \"placeholder\": _(\"E-mail address\"),\n })\n )\n\n def clean_email(self):\n email = self.cleaned_data[\"email\"]\n from allauth.account.adapter import get_adapter\n email = get_adapter().clean_email(email)\n self.users = filter_users_by_email(email, is_active=True)\n\n return self.cleaned_data[\"email\"]\n\n def save(self, request, **kwargs):\n from django.contrib.sites.shortcuts import get_current_site\n current_site = get_current_site(request)\n email = self.cleaned_data[\"email\"]\n token_generator = EmailAwarePasswordResetTokenGenerator()\n\n for user in self.users:\n temp_key = token_generator.make_token(user)\n\n # save it to the password reset model\n # password_reset = PasswordReset(user=user, temp_key=temp_key)\n # password_reset.save()\n\n # send the password reset email\n from django.urls import reverse\n path = reverse(\"account_reset_password_from_key\",\n kwargs=dict(uidb36=user_pk_to_url_str(user),\n key=temp_key))\n from allauth.utils import build_absolute_uri\n url = build_absolute_uri(\n request, path)\n\n context = {\"current_site\": current_site,\n \"user\": user,\n \"password_reset_url\": url,\n \"request\": request}\n\n from allauth.account import app_settings\n\n if app_settings.AUTHENTICATION_METHOD \\\n != app_settings.AuthenticationMethod.EMAIL:\n context['username'] = user_username(user)\n from allauth.account.adapter import get_adapter\n get_adapter(request).send_mail(\n 'account/email/password_reset_key',\n email,\n context)\n return self.cleaned_data[\"email\"]\n", "path": "nextcloudappstore/user/forms.py"}]} | 1,646 | 350 |
gh_patches_debug_24199 | rasdani/github-patches | git_diff | LibraryOfCongress__concordia-782 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove Latest page URL, in nav bar, and redirect
Community managers want to consolidate `/latest` into `/about`. Most of the communication and everyday updates happen in History Hub, so there is less need for a dedicated latest updates page.
Once the URL is removed, the CMs will move the content from `/latest` to `/about`.
Acceptance criteria:
- [x] Remove URL
- [x] Any link to `/latest` to be redirected to `/about`
- [x] Move content (CMs will do this using the static page editor)
--- END ISSUE ---
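A hedged sketch of the redirect half of the acceptance criteria: in Django this can be done with a `RedirectView` entry in place of the old route (assumption: a URL pattern named `about` exists, as in the `urls.py` listed below; this is an illustration, not the project's actual change):

```python
# Sketch only: replace the old /latest route with a permanent redirect to /about,
# preserving any query string on the way through.
from django.urls import path
from django.views.generic import RedirectView

urlpatterns = [
    # ... other routes ...
    path(
        "latest/",
        RedirectView.as_view(pattern_name="about", permanent=True, query_string=True),
    ),
]
```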
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `concordia/urls.py`
Content:
```
1 from django.conf import settings
2 from django.conf.urls import url
3 from django.contrib import admin
4 from django.http import Http404, HttpResponseForbidden
5 from django.urls import include, path
6 from django.views.defaults import page_not_found, permission_denied, server_error
7
8 from exporter import views as exporter_views
9
10 from . import views
11
12 tx_urlpatterns = (
13 [
14 path("", views.CampaignListView.as_view(), name="campaign-list"),
15 path(
16 "<slug:slug>/", views.CampaignDetailView.as_view(), name="campaign-detail"
17 ),
18 path(
19 "<slug:campaign_slug>/export/csv/",
20 exporter_views.ExportCampaignToCSV.as_view(),
21 name="campaign-export-csv",
22 ),
23 path(
24 "<slug:campaign_slug>/export/bagit/",
25 exporter_views.ExportCampaignToBagit.as_view(),
26 name="campaign-export-bagit",
27 ),
28 path(
29 "<slug:campaign_slug>/<slug:project_slug>/export/bagit/",
30 exporter_views.ExportProjectToBagIt.as_view(),
31 name="project-export-bagit",
32 ),
33 path(
34 "<slug:campaign_slug>/<slug:project_slug>/<slug:item_id>/export/bagit/",
35 exporter_views.ExportItemToBagIt.as_view(),
36 name="item-export-bagit",
37 ),
38 path(
39 "<slug:campaign_slug>/report/",
40 views.ReportCampaignView.as_view(),
41 name="campaign-report",
42 ),
43 path(
44 "<slug:campaign_slug>/<slug:project_slug>/<slug:item_id>/<slug:slug>/",
45 views.AssetDetailView.as_view(),
46 name="asset-detail",
47 ),
48 # n.b. this must be above project-detail to avoid being seen as a project slug:
49 path(
50 "<slug:campaign_slug>/next-transcribable-asset/",
51 views.redirect_to_next_transcribable_asset,
52 name="redirect-to-next-transcribable-asset",
53 ),
54 path(
55 "<slug:campaign_slug>/<slug:slug>/",
56 views.ProjectDetailView.as_view(),
57 name="project-detail",
58 ),
59 path(
60 "<slug:campaign_slug>/<slug:project_slug>/<slug:item_id>/",
61 views.ItemDetailView.as_view(),
62 name="item-detail",
63 ),
64 ],
65 "transcriptions",
66 )
67
68 urlpatterns = [
69 path("", views.HomeView.as_view(), name="homepage"),
70 path("healthz", views.healthz, name="health-check"),
71 path("about/", views.simple_page, name="about"),
72 path("help-center/", views.simple_page, name="help-center"),
73 path("help-center/welcome-guide/", views.simple_page, name="welcome-guide"),
74 path("help-center/how-to-transcribe/", views.simple_page, name="how-to-transcribe"),
75 path("help-center/how-to-review/", views.simple_page, name="how-to-review"),
76 path("help-center/how-to-tag/", views.simple_page, name="how-to-tag"),
77 path("for-educators/", views.simple_page, name="for-educators"),
78 path("latest/", views.simple_page, name="latest"),
79 path("questions/", views.simple_page, name="questions"),
80 path("contact/", views.ContactUsView.as_view(), name="contact"),
81 path("campaigns/", include(tx_urlpatterns, namespace="transcriptions")),
82 path(
83 "reserve-asset-for-transcription/<int:asset_pk>/",
84 views.reserve_asset_transcription,
85 name="reserve-asset-for-transcription",
86 ),
87 path(
88 "assets/<int:asset_pk>/transcriptions/save/",
89 views.save_transcription,
90 name="save-transcription",
91 ),
92 path(
93 "transcriptions/<int:pk>/submit/",
94 views.submit_transcription,
95 name="submit-transcription",
96 ),
97 path(
98 "transcriptions/<int:pk>/review/",
99 views.review_transcription,
100 name="review-transcription",
101 ),
102 path("assets/<int:asset_pk>/tags/submit/", views.submit_tags, name="submit-tags"),
103 path("account/ajax-status/", views.ajax_session_status, name="ajax-session-status"),
104 path("account/ajax-messages/", views.ajax_messages, name="ajax-messages"),
105 path(
106 "account/register/",
107 views.ConcordiaRegistrationView.as_view(),
108 name="registration_register",
109 ),
110 path(
111 "account/login/", views.ConcordiaLoginView.as_view(), name="registration_login"
112 ),
113 path("account/profile/", views.AccountProfileView.as_view(), name="user-profile"),
114 path("account/", include("django_registration.backends.activation.urls")),
115 path("account/", include("django.contrib.auth.urls")),
116 path("captcha/ajax/", views.ajax_captcha, name="ajax-captcha"),
117 path("captcha/", include("captcha.urls")),
118 path("admin/", admin.site.urls),
119 # Internal support assists:
120 path("maintenance-mode/", include("maintenance_mode.urls")),
121 path("error/500/", server_error),
122 path("error/404/", page_not_found, {"exception": Http404()}),
123 path("error/429/", views.ratelimit_view),
124 path("error/403/", permission_denied, {"exception": HttpResponseForbidden()}),
125 url("", include("django_prometheus_metrics.urls")),
126 path("robots.txt", include("robots.urls")),
127 ]
128
129 if settings.DEBUG:
130 import debug_toolbar
131 from django.conf.urls.static import static
132
133 urlpatterns = [path("__debug__/", include(debug_toolbar.urls))] + urlpatterns
134
135 urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
136
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/concordia/urls.py b/concordia/urls.py
--- a/concordia/urls.py
+++ b/concordia/urls.py
@@ -4,6 +4,7 @@
from django.http import Http404, HttpResponseForbidden
from django.urls import include, path
from django.views.defaults import page_not_found, permission_denied, server_error
+from django.views.generic import RedirectView
from exporter import views as exporter_views
@@ -75,7 +76,10 @@
path("help-center/how-to-review/", views.simple_page, name="how-to-review"),
path("help-center/how-to-tag/", views.simple_page, name="how-to-tag"),
path("for-educators/", views.simple_page, name="for-educators"),
- path("latest/", views.simple_page, name="latest"),
+ path(
+ "latest/",
+ RedirectView.as_view(pattern_name="about", permanent=True, query_string=True),
+ ),
path("questions/", views.simple_page, name="questions"),
path("contact/", views.ContactUsView.as_view(), name="contact"),
path("campaigns/", include(tx_urlpatterns, namespace="transcriptions")),
| {"golden_diff": "diff --git a/concordia/urls.py b/concordia/urls.py\n--- a/concordia/urls.py\n+++ b/concordia/urls.py\n@@ -4,6 +4,7 @@\n from django.http import Http404, HttpResponseForbidden\n from django.urls import include, path\n from django.views.defaults import page_not_found, permission_denied, server_error\n+from django.views.generic import RedirectView\n \n from exporter import views as exporter_views\n \n@@ -75,7 +76,10 @@\n path(\"help-center/how-to-review/\", views.simple_page, name=\"how-to-review\"),\n path(\"help-center/how-to-tag/\", views.simple_page, name=\"how-to-tag\"),\n path(\"for-educators/\", views.simple_page, name=\"for-educators\"),\n- path(\"latest/\", views.simple_page, name=\"latest\"),\n+ path(\n+ \"latest/\",\n+ RedirectView.as_view(pattern_name=\"about\", permanent=True, query_string=True),\n+ ),\n path(\"questions/\", views.simple_page, name=\"questions\"),\n path(\"contact/\", views.ContactUsView.as_view(), name=\"contact\"),\n path(\"campaigns/\", include(tx_urlpatterns, namespace=\"transcriptions\")),\n", "issue": "Remove Latest page URL, in nav bar, and redirect\nCommunity managers want to consolidate `/latest` in to the `/about`. Most of the communication and everyday updates happen in History Hub, there is less usage for a dedicated latest updates page. \r\n\r\nOnce the URL is removed, the CMs will move the content from `/latest` to `/about`. \r\n\r\nAcceptance criteria: \r\n- [x] Remove URL\r\n- [x] Any link to `/latest` to be redirected to `/about` \r\n- [x] Move content (CMs will do this using the static page editor) \n", "before_files": [{"content": "from django.conf import settings\nfrom django.conf.urls import url\nfrom django.contrib import admin\nfrom django.http import Http404, HttpResponseForbidden\nfrom django.urls import include, path\nfrom django.views.defaults import page_not_found, permission_denied, server_error\n\nfrom exporter import views as exporter_views\n\nfrom . import views\n\ntx_urlpatterns = (\n [\n path(\"\", views.CampaignListView.as_view(), name=\"campaign-list\"),\n path(\n \"<slug:slug>/\", views.CampaignDetailView.as_view(), name=\"campaign-detail\"\n ),\n path(\n \"<slug:campaign_slug>/export/csv/\",\n exporter_views.ExportCampaignToCSV.as_view(),\n name=\"campaign-export-csv\",\n ),\n path(\n \"<slug:campaign_slug>/export/bagit/\",\n exporter_views.ExportCampaignToBagit.as_view(),\n name=\"campaign-export-bagit\",\n ),\n path(\n \"<slug:campaign_slug>/<slug:project_slug>/export/bagit/\",\n exporter_views.ExportProjectToBagIt.as_view(),\n name=\"project-export-bagit\",\n ),\n path(\n \"<slug:campaign_slug>/<slug:project_slug>/<slug:item_id>/export/bagit/\",\n exporter_views.ExportItemToBagIt.as_view(),\n name=\"item-export-bagit\",\n ),\n path(\n \"<slug:campaign_slug>/report/\",\n views.ReportCampaignView.as_view(),\n name=\"campaign-report\",\n ),\n path(\n \"<slug:campaign_slug>/<slug:project_slug>/<slug:item_id>/<slug:slug>/\",\n views.AssetDetailView.as_view(),\n name=\"asset-detail\",\n ),\n # n.b. 
this must be above project-detail to avoid being seen as a project slug:\n path(\n \"<slug:campaign_slug>/next-transcribable-asset/\",\n views.redirect_to_next_transcribable_asset,\n name=\"redirect-to-next-transcribable-asset\",\n ),\n path(\n \"<slug:campaign_slug>/<slug:slug>/\",\n views.ProjectDetailView.as_view(),\n name=\"project-detail\",\n ),\n path(\n \"<slug:campaign_slug>/<slug:project_slug>/<slug:item_id>/\",\n views.ItemDetailView.as_view(),\n name=\"item-detail\",\n ),\n ],\n \"transcriptions\",\n)\n\nurlpatterns = [\n path(\"\", views.HomeView.as_view(), name=\"homepage\"),\n path(\"healthz\", views.healthz, name=\"health-check\"),\n path(\"about/\", views.simple_page, name=\"about\"),\n path(\"help-center/\", views.simple_page, name=\"help-center\"),\n path(\"help-center/welcome-guide/\", views.simple_page, name=\"welcome-guide\"),\n path(\"help-center/how-to-transcribe/\", views.simple_page, name=\"how-to-transcribe\"),\n path(\"help-center/how-to-review/\", views.simple_page, name=\"how-to-review\"),\n path(\"help-center/how-to-tag/\", views.simple_page, name=\"how-to-tag\"),\n path(\"for-educators/\", views.simple_page, name=\"for-educators\"),\n path(\"latest/\", views.simple_page, name=\"latest\"),\n path(\"questions/\", views.simple_page, name=\"questions\"),\n path(\"contact/\", views.ContactUsView.as_view(), name=\"contact\"),\n path(\"campaigns/\", include(tx_urlpatterns, namespace=\"transcriptions\")),\n path(\n \"reserve-asset-for-transcription/<int:asset_pk>/\",\n views.reserve_asset_transcription,\n name=\"reserve-asset-for-transcription\",\n ),\n path(\n \"assets/<int:asset_pk>/transcriptions/save/\",\n views.save_transcription,\n name=\"save-transcription\",\n ),\n path(\n \"transcriptions/<int:pk>/submit/\",\n views.submit_transcription,\n name=\"submit-transcription\",\n ),\n path(\n \"transcriptions/<int:pk>/review/\",\n views.review_transcription,\n name=\"review-transcription\",\n ),\n path(\"assets/<int:asset_pk>/tags/submit/\", views.submit_tags, name=\"submit-tags\"),\n path(\"account/ajax-status/\", views.ajax_session_status, name=\"ajax-session-status\"),\n path(\"account/ajax-messages/\", views.ajax_messages, name=\"ajax-messages\"),\n path(\n \"account/register/\",\n views.ConcordiaRegistrationView.as_view(),\n name=\"registration_register\",\n ),\n path(\n \"account/login/\", views.ConcordiaLoginView.as_view(), name=\"registration_login\"\n ),\n path(\"account/profile/\", views.AccountProfileView.as_view(), name=\"user-profile\"),\n path(\"account/\", include(\"django_registration.backends.activation.urls\")),\n path(\"account/\", include(\"django.contrib.auth.urls\")),\n path(\"captcha/ajax/\", views.ajax_captcha, name=\"ajax-captcha\"),\n path(\"captcha/\", include(\"captcha.urls\")),\n path(\"admin/\", admin.site.urls),\n # Internal support assists:\n path(\"maintenance-mode/\", include(\"maintenance_mode.urls\")),\n path(\"error/500/\", server_error),\n path(\"error/404/\", page_not_found, {\"exception\": Http404()}),\n path(\"error/429/\", views.ratelimit_view),\n path(\"error/403/\", permission_denied, {\"exception\": HttpResponseForbidden()}),\n url(\"\", include(\"django_prometheus_metrics.urls\")),\n path(\"robots.txt\", include(\"robots.urls\")),\n]\n\nif settings.DEBUG:\n import debug_toolbar\n from django.conf.urls.static import static\n\n urlpatterns = [path(\"__debug__/\", include(debug_toolbar.urls))] + urlpatterns\n\n urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)\n", "path": "concordia/urls.py"}], 
"after_files": [{"content": "from django.conf import settings\nfrom django.conf.urls import url\nfrom django.contrib import admin\nfrom django.http import Http404, HttpResponseForbidden\nfrom django.urls import include, path\nfrom django.views.defaults import page_not_found, permission_denied, server_error\nfrom django.views.generic import RedirectView\n\nfrom exporter import views as exporter_views\n\nfrom . import views\n\ntx_urlpatterns = (\n [\n path(\"\", views.CampaignListView.as_view(), name=\"campaign-list\"),\n path(\n \"<slug:slug>/\", views.CampaignDetailView.as_view(), name=\"campaign-detail\"\n ),\n path(\n \"<slug:campaign_slug>/export/csv/\",\n exporter_views.ExportCampaignToCSV.as_view(),\n name=\"campaign-export-csv\",\n ),\n path(\n \"<slug:campaign_slug>/export/bagit/\",\n exporter_views.ExportCampaignToBagit.as_view(),\n name=\"campaign-export-bagit\",\n ),\n path(\n \"<slug:campaign_slug>/<slug:project_slug>/export/bagit/\",\n exporter_views.ExportProjectToBagIt.as_view(),\n name=\"project-export-bagit\",\n ),\n path(\n \"<slug:campaign_slug>/<slug:project_slug>/<slug:item_id>/export/bagit/\",\n exporter_views.ExportItemToBagIt.as_view(),\n name=\"item-export-bagit\",\n ),\n path(\n \"<slug:campaign_slug>/report/\",\n views.ReportCampaignView.as_view(),\n name=\"campaign-report\",\n ),\n path(\n \"<slug:campaign_slug>/<slug:project_slug>/<slug:item_id>/<slug:slug>/\",\n views.AssetDetailView.as_view(),\n name=\"asset-detail\",\n ),\n # n.b. this must be above project-detail to avoid being seen as a project slug:\n path(\n \"<slug:campaign_slug>/next-transcribable-asset/\",\n views.redirect_to_next_transcribable_asset,\n name=\"redirect-to-next-transcribable-asset\",\n ),\n path(\n \"<slug:campaign_slug>/<slug:slug>/\",\n views.ProjectDetailView.as_view(),\n name=\"project-detail\",\n ),\n path(\n \"<slug:campaign_slug>/<slug:project_slug>/<slug:item_id>/\",\n views.ItemDetailView.as_view(),\n name=\"item-detail\",\n ),\n ],\n \"transcriptions\",\n)\n\nurlpatterns = [\n path(\"\", views.HomeView.as_view(), name=\"homepage\"),\n path(\"healthz\", views.healthz, name=\"health-check\"),\n path(\"about/\", views.simple_page, name=\"about\"),\n path(\"help-center/\", views.simple_page, name=\"help-center\"),\n path(\"help-center/welcome-guide/\", views.simple_page, name=\"welcome-guide\"),\n path(\"help-center/how-to-transcribe/\", views.simple_page, name=\"how-to-transcribe\"),\n path(\"help-center/how-to-review/\", views.simple_page, name=\"how-to-review\"),\n path(\"help-center/how-to-tag/\", views.simple_page, name=\"how-to-tag\"),\n path(\"for-educators/\", views.simple_page, name=\"for-educators\"),\n path(\n \"latest/\",\n RedirectView.as_view(pattern_name=\"about\", permanent=True, query_string=True),\n ),\n path(\"questions/\", views.simple_page, name=\"questions\"),\n path(\"contact/\", views.ContactUsView.as_view(), name=\"contact\"),\n path(\"campaigns/\", include(tx_urlpatterns, namespace=\"transcriptions\")),\n path(\n \"reserve-asset-for-transcription/<int:asset_pk>/\",\n views.reserve_asset_transcription,\n name=\"reserve-asset-for-transcription\",\n ),\n path(\n \"assets/<int:asset_pk>/transcriptions/save/\",\n views.save_transcription,\n name=\"save-transcription\",\n ),\n path(\n \"transcriptions/<int:pk>/submit/\",\n views.submit_transcription,\n name=\"submit-transcription\",\n ),\n path(\n \"transcriptions/<int:pk>/review/\",\n views.review_transcription,\n name=\"review-transcription\",\n ),\n path(\"assets/<int:asset_pk>/tags/submit/\", 
views.submit_tags, name=\"submit-tags\"),\n path(\"account/ajax-status/\", views.ajax_session_status, name=\"ajax-session-status\"),\n path(\"account/ajax-messages/\", views.ajax_messages, name=\"ajax-messages\"),\n path(\n \"account/register/\",\n views.ConcordiaRegistrationView.as_view(),\n name=\"registration_register\",\n ),\n path(\n \"account/login/\", views.ConcordiaLoginView.as_view(), name=\"registration_login\"\n ),\n path(\"account/profile/\", views.AccountProfileView.as_view(), name=\"user-profile\"),\n path(\"account/\", include(\"django_registration.backends.activation.urls\")),\n path(\"account/\", include(\"django.contrib.auth.urls\")),\n path(\"captcha/ajax/\", views.ajax_captcha, name=\"ajax-captcha\"),\n path(\"captcha/\", include(\"captcha.urls\")),\n path(\"admin/\", admin.site.urls),\n # Internal support assists:\n path(\"maintenance-mode/\", include(\"maintenance_mode.urls\")),\n path(\"error/500/\", server_error),\n path(\"error/404/\", page_not_found, {\"exception\": Http404()}),\n path(\"error/429/\", views.ratelimit_view),\n path(\"error/403/\", permission_denied, {\"exception\": HttpResponseForbidden()}),\n url(\"\", include(\"django_prometheus_metrics.urls\")),\n path(\"robots.txt\", include(\"robots.urls\")),\n]\n\nif settings.DEBUG:\n import debug_toolbar\n from django.conf.urls.static import static\n\n urlpatterns = [path(\"__debug__/\", include(debug_toolbar.urls))] + urlpatterns\n\n urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)\n", "path": "concordia/urls.py"}]} | 1,836 | 258 |
gh_patches_debug_34612 | rasdani/github-patches | git_diff | Textualize__textual-2305 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
It's possible to somehow break the working of `TabbedContent`
This one isn't exactly easy to explain, but hopefully easy to see and recreate with the following code (which is a distilled version of what I'm doing in a bigger application, where I found it).
Worth noting: the motivation here is that (in the application I'm working on) the `TabbedContent` is acting as a sidebar, where each pane has content that can be focused, and I want folk to be able to switch tabs without needing to navigate to the tab bar and back into the pane again. As such there are some bindings in place that call into the `Tabs` and use their prev/next tab actions.
```python
from textual.app import App, ComposeResult
from textual.binding import Binding
from textual.containers import Vertical
from textual.widgets import Header, Footer, TabbedContent, TabPane, Tabs, DirectoryTree

class SelfFocusPane(TabPane):

    DEFAULT_CSS = """
    SelfFocusPane {
        height: 100% !important;
    }
    DirectoryTree {
        width: 100%;
        height: 100% !important;
    }
    """

    def compose( self ) -> ComposeResult:
        """Compose the child widgets."""
        yield DirectoryTree(".")

    def on_show( self ) -> None:
        self.query_one( DirectoryTree ).focus()

class TabbedContentIssueApp( App[ None ] ):

    CSS = """
    Screen {
        align: center middle;
    }

    Screen > Vertical {
        width: 42;
    }

    TabbedContent {
        border: round red;
        max-width: 40;
        height: 100%;
    }

    ContentSwitcher {
        height: 1fr !important;
    }
    """

    BINDINGS = [
        Binding( "shift+left", "previous", "Previous" ),
        Binding( "shift+right", "next", "Next" ),
    ]

    def compose( self ) -> ComposeResult:
        yield Header()
        with Vertical():
            with TabbedContent():
                for n in range( 6 ):
                    yield SelfFocusPane( f"Tab {n}")
        yield Footer()

    def on_mount(self) -> None:
        self.query_one(Tabs).focus()

    def action_previous(self) -> None:
        self.query_one(Tabs).action_previous_tab()

    def action_next(self) -> None:
        self.query_one(Tabs).action_next_tab()

if __name__ == "__main__":
    TabbedContentIssueApp().run()
```
In experimenting, it looks like the `SelfFocusPane.on_show` handler setting focus to the child of the pane is key here; remove that and I can't recreate the issue.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/textual/widgets/_tabbed_content.py`
Content:
```
1 from __future__ import annotations
2
3 from itertools import zip_longest
4
5 from rich.repr import Result
6 from rich.text import Text, TextType
7
8 from ..app import ComposeResult
9 from ..message import Message
10 from ..reactive import reactive
11 from ..widget import Widget
12 from ._content_switcher import ContentSwitcher
13 from ._tabs import Tab, Tabs
14
15 __all__ = [
16 "ContentTab",
17 "TabbedContent",
18 "TabPane",
19 ]
20
21
22 class ContentTab(Tab):
23 """A Tab with an associated content id."""
24
25 def __init__(self, label: Text, content_id: str):
26 """Initialize a ContentTab.
27
28 Args:
29 label: The label to be displayed within the tab.
30 content_id: The id of the content associated with the tab.
31 """
32 super().__init__(label, id=content_id)
33
34
35 class TabPane(Widget):
36 """A container for switchable content, with additional title.
37
38 This widget is intended to be used with [TabbedContent][textual.widgets.TabbedContent].
39
40 """
41
42 DEFAULT_CSS = """
43 TabPane {
44 height: auto;
45 padding: 1 2;
46 }
47 """
48
49 def __init__(
50 self,
51 title: TextType,
52 *children: Widget,
53 name: str | None = None,
54 id: str | None = None,
55 classes: str | None = None,
56 disabled: bool = False,
57 ):
58 """Initialize a TabPane.
59
60 Args:
61 title: Title of the TabPane (will be displayed in a tab label).
62 *children: Widget to go inside the TabPane.
63 name: Optional name for the TabPane.
64 id: Optional ID for the TabPane.
65 classes: Optional initial classes for the widget.
66 disabled: Whether the TabPane is disabled or not.
67 """
68 self._title = self.render_str(title)
69 super().__init__(
70 *children, name=name, id=id, classes=classes, disabled=disabled
71 )
72
73
74 class TabbedContent(Widget):
75 """A container with associated tabs to toggle content visibility."""
76
77 DEFAULT_CSS = """
78 TabbedContent {
79 height: auto;
80 }
81 TabbedContent > ContentSwitcher {
82 height: auto;
83 }
84 """
85
86 active: reactive[str] = reactive("", init=False)
87 """The ID of the active tab, or empty string if none are active."""
88
89 class TabActivated(Message):
90 """Posted when the active tab changes."""
91
92 def __init__(self, tabbed_content: TabbedContent, tab: Tab) -> None:
93 """Initialize message.
94
95 Args:
96 tabbed_content: The TabbedContent widget.
97 tab: The Tab widget that was selected (contains the tab label).
98 """
99 self.tabbed_content = tabbed_content
100 self.tab = tab
101 super().__init__()
102
103 def __rich_repr__(self) -> Result:
104 yield self.tabbed_content
105 yield self.tab
106
107 def __init__(self, *titles: TextType, initial: str = "") -> None:
108 """Initialize a TabbedContent widgets.
109
110 Args:
111 *titles: Positional argument will be used as title.
112 initial: The id of the initial tab, or empty string to select the first tab.
113 """
114 self.titles = [self.render_str(title) for title in titles]
115 self._tab_content: list[Widget] = []
116 self._initial = initial
117 super().__init__()
118
119 def validate_active(self, active: str) -> str:
120 """It doesn't make sense for `active` to be an empty string.
121
122 Args:
123 active: Attribute to be validated.
124
125 Returns:
126 Value of `active`.
127
128 Raises:
129 ValueError: If the active attribute is set to empty string.
130 """
131 if not active:
132 raise ValueError("'active' tab must not be empty string.")
133 return active
134
135 def compose(self) -> ComposeResult:
136 """Compose the tabbed content."""
137
138 def set_id(content: TabPane, new_id: str) -> TabPane:
139 """Set an id on the content, if not already present.
140
141 Args:
142 content: a TabPane.
143 new_id: New `is` attribute, if it is not already set.
144
145 Returns:
146 The same TabPane.
147 """
148 if content.id is None:
149 content.id = new_id
150 return content
151
152 # Wrap content in a `TabPane` if required.
153 pane_content = [
154 (
155 set_id(content, f"tab-{index}")
156 if isinstance(content, TabPane)
157 else TabPane(
158 title or self.render_str(f"Tab {index}"), content, id=f"tab-{index}"
159 )
160 )
161 for index, (title, content) in enumerate(
162 zip_longest(self.titles, self._tab_content), 1
163 )
164 ]
165 # Get a tab for each pane
166 tabs = [
167 ContentTab(content._title, content.id or "") for content in pane_content
168 ]
169 # Yield the tabs
170 yield Tabs(*tabs, active=self._initial or None)
171 # Yield the content switcher and panes
172 with ContentSwitcher(initial=self._initial or None):
173 yield from pane_content
174
175 def compose_add_child(self, widget: Widget) -> None:
176 """When using the context manager compose syntax, we want to attach nodes to the switcher.
177
178 Args:
179 widget: A Widget to add.
180 """
181 self._tab_content.append(widget)
182
183 def _on_tabs_tab_activated(self, event: Tabs.TabActivated) -> None:
184 """User clicked a tab."""
185 event.stop()
186 switcher = self.get_child_by_type(ContentSwitcher)
187 assert isinstance(event.tab, ContentTab)
188 switcher.current = event.tab.id
189 self.active = event.tab.id
190 self.post_message(
191 TabbedContent.TabActivated(
192 tabbed_content=self,
193 tab=event.tab,
194 )
195 )
196
197 def _on_tabs_cleared(self, event: Tabs.Cleared) -> None:
198 """All tabs were removed."""
199 event.stop()
200
201 def watch_active(self, active: str) -> None:
202 """Switch tabs when the active attributes changes."""
203 self.get_child_by_type(Tabs).active = active
204
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/textual/widgets/_tabbed_content.py b/src/textual/widgets/_tabbed_content.py
--- a/src/textual/widgets/_tabbed_content.py
+++ b/src/textual/widgets/_tabbed_content.py
@@ -83,9 +83,6 @@
}
"""
- active: reactive[str] = reactive("", init=False)
- """The ID of the active tab, or empty string if none are active."""
-
class TabActivated(Message):
"""Posted when the active tab changes."""
@@ -116,21 +113,16 @@
self._initial = initial
super().__init__()
- def validate_active(self, active: str) -> str:
- """It doesn't make sense for `active` to be an empty string.
-
- Args:
- active: Attribute to be validated.
-
- Returns:
- Value of `active`.
+ @property
+ def active(self) -> str:
+ """The ID of the active tab, or empty string if none are active."""
+ return self.get_child_by_type(Tabs).active
- Raises:
- ValueError: If the active attribute is set to empty string.
- """
+ @active.setter
+ def active(self, active: str) -> None:
if not active:
raise ValueError("'active' tab must not be empty string.")
- return active
+ self.get_child_by_type(Tabs).active = active
def compose(self) -> ComposeResult:
"""Compose the tabbed content."""
@@ -186,7 +178,6 @@
switcher = self.get_child_by_type(ContentSwitcher)
assert isinstance(event.tab, ContentTab)
switcher.current = event.tab.id
- self.active = event.tab.id
self.post_message(
TabbedContent.TabActivated(
tabbed_content=self,
@@ -197,7 +188,3 @@
def _on_tabs_cleared(self, event: Tabs.Cleared) -> None:
"""All tabs were removed."""
event.stop()
-
- def watch_active(self, active: str) -> None:
- """Switch tabs when the active attributes changes."""
- self.get_child_by_type(Tabs).active = active
| {"golden_diff": "diff --git a/src/textual/widgets/_tabbed_content.py b/src/textual/widgets/_tabbed_content.py\n--- a/src/textual/widgets/_tabbed_content.py\n+++ b/src/textual/widgets/_tabbed_content.py\n@@ -83,9 +83,6 @@\n }\n \"\"\"\n \n- active: reactive[str] = reactive(\"\", init=False)\n- \"\"\"The ID of the active tab, or empty string if none are active.\"\"\"\n-\n class TabActivated(Message):\n \"\"\"Posted when the active tab changes.\"\"\"\n \n@@ -116,21 +113,16 @@\n self._initial = initial\n super().__init__()\n \n- def validate_active(self, active: str) -> str:\n- \"\"\"It doesn't make sense for `active` to be an empty string.\n-\n- Args:\n- active: Attribute to be validated.\n-\n- Returns:\n- Value of `active`.\n+ @property\n+ def active(self) -> str:\n+ \"\"\"The ID of the active tab, or empty string if none are active.\"\"\"\n+ return self.get_child_by_type(Tabs).active\n \n- Raises:\n- ValueError: If the active attribute is set to empty string.\n- \"\"\"\n+ @active.setter\n+ def active(self, active: str) -> None:\n if not active:\n raise ValueError(\"'active' tab must not be empty string.\")\n- return active\n+ self.get_child_by_type(Tabs).active = active\n \n def compose(self) -> ComposeResult:\n \"\"\"Compose the tabbed content.\"\"\"\n@@ -186,7 +178,6 @@\n switcher = self.get_child_by_type(ContentSwitcher)\n assert isinstance(event.tab, ContentTab)\n switcher.current = event.tab.id\n- self.active = event.tab.id\n self.post_message(\n TabbedContent.TabActivated(\n tabbed_content=self,\n@@ -197,7 +188,3 @@\n def _on_tabs_cleared(self, event: Tabs.Cleared) -> None:\n \"\"\"All tabs were removed.\"\"\"\n event.stop()\n-\n- def watch_active(self, active: str) -> None:\n- \"\"\"Switch tabs when the active attributes changes.\"\"\"\n- self.get_child_by_type(Tabs).active = active\n", "issue": "It's possible to somehow break the working of `TabbedContent`\nThis one isn't exactly easy to explain, but hopefully easy to see and recreate with the following code (which is a distilled version of what I'm doing in a bigger application, where I found it).\r\n\r\nWorth noting: the motivation here is that (in the application I'm working on) the `TabbedContent` is acting as a sidebar, where each pane has content that can be focused, and I want folk to be people to switch tabs without needing to navigate to the tab bar and back into the pane again. 
As such there's some bindings in place that call into the `Tabs` and uses their prev/next tab actions.\r\n\r\n```python\r\nfrom textual.app import App, ComposeResult\r\nfrom textual.binding import Binding\r\nfrom textual.containers import Vertical\r\nfrom textual.widgets import Header, Footer, TabbedContent, TabPane, Tabs, DirectoryTree\r\n\r\nclass SelfFocusPane(TabPane):\r\n\r\n DEFAULT_CSS = \"\"\"\r\n SelfFocusPane {\r\n height: 100% !important;\r\n }\r\n DirectoryTree {\r\n width: 100%;\r\n height: 100% !important;\r\n }\r\n \"\"\"\r\n\r\n def compose( self ) -> ComposeResult:\r\n \"\"\"Compose the child widgets.\"\"\"\r\n yield DirectoryTree(\".\")\r\n\r\n def on_show( self ) -> None:\r\n self.query_one( DirectoryTree ).focus()\r\n\r\nclass TabbedContentIssueApp( App[ None ] ):\r\n\r\n CSS = \"\"\"\r\n Screen {\r\n align: center middle;\r\n }\r\n\r\n Screen > Vertical {\r\n width: 42;\r\n }\r\n\r\n TabbedContent {\r\n border: round red;\r\n max-width: 40;\r\n height: 100%;\r\n }\r\n\r\n ContentSwitcher {\r\n height: 1fr !important;\r\n }\r\n \"\"\"\r\n\r\n BINDINGS = [\r\n Binding( \"shift+left\", \"previous\", \"Previous\" ),\r\n Binding( \"shift+right\", \"next\", \"Next\" ),\r\n ]\r\n\r\n def compose( self ) -> ComposeResult:\r\n yield Header()\r\n with Vertical():\r\n with TabbedContent():\r\n for n in range( 6 ):\r\n yield SelfFocusPane( f\"Tab {n}\")\r\n yield Footer()\r\n\r\n def on_mount(self) -> None:\r\n self.query_one(Tabs).focus()\r\n\r\n def action_previous(self) -> None:\r\n self.query_one(Tabs).action_previous_tab()\r\n\r\n def action_next(self) -> None:\r\n self.query_one(Tabs).action_next_tab()\r\n\r\nif __name__ == \"__main__\":\r\n TabbedContentIssueApp().run()\r\n```\r\n\r\nIn experimenting, it looks like the `SelfFocusPane.on_show` setting focus to the child of the pane is key here; remove that and I can't recreate the issue.\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom itertools import zip_longest\n\nfrom rich.repr import Result\nfrom rich.text import Text, TextType\n\nfrom ..app import ComposeResult\nfrom ..message import Message\nfrom ..reactive import reactive\nfrom ..widget import Widget\nfrom ._content_switcher import ContentSwitcher\nfrom ._tabs import Tab, Tabs\n\n__all__ = [\n \"ContentTab\",\n \"TabbedContent\",\n \"TabPane\",\n]\n\n\nclass ContentTab(Tab):\n \"\"\"A Tab with an associated content id.\"\"\"\n\n def __init__(self, label: Text, content_id: str):\n \"\"\"Initialize a ContentTab.\n\n Args:\n label: The label to be displayed within the tab.\n content_id: The id of the content associated with the tab.\n \"\"\"\n super().__init__(label, id=content_id)\n\n\nclass TabPane(Widget):\n \"\"\"A container for switchable content, with additional title.\n\n This widget is intended to be used with [TabbedContent][textual.widgets.TabbedContent].\n\n \"\"\"\n\n DEFAULT_CSS = \"\"\"\n TabPane {\n height: auto;\n padding: 1 2;\n }\n \"\"\"\n\n def __init__(\n self,\n title: TextType,\n *children: Widget,\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n disabled: bool = False,\n ):\n \"\"\"Initialize a TabPane.\n\n Args:\n title: Title of the TabPane (will be displayed in a tab label).\n *children: Widget to go inside the TabPane.\n name: Optional name for the TabPane.\n id: Optional ID for the TabPane.\n classes: Optional initial classes for the widget.\n disabled: Whether the TabPane is disabled or not.\n \"\"\"\n self._title = self.render_str(title)\n super().__init__(\n *children, 
name=name, id=id, classes=classes, disabled=disabled\n )\n\n\nclass TabbedContent(Widget):\n \"\"\"A container with associated tabs to toggle content visibility.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n TabbedContent {\n height: auto;\n }\n TabbedContent > ContentSwitcher {\n height: auto;\n }\n \"\"\"\n\n active: reactive[str] = reactive(\"\", init=False)\n \"\"\"The ID of the active tab, or empty string if none are active.\"\"\"\n\n class TabActivated(Message):\n \"\"\"Posted when the active tab changes.\"\"\"\n\n def __init__(self, tabbed_content: TabbedContent, tab: Tab) -> None:\n \"\"\"Initialize message.\n\n Args:\n tabbed_content: The TabbedContent widget.\n tab: The Tab widget that was selected (contains the tab label).\n \"\"\"\n self.tabbed_content = tabbed_content\n self.tab = tab\n super().__init__()\n\n def __rich_repr__(self) -> Result:\n yield self.tabbed_content\n yield self.tab\n\n def __init__(self, *titles: TextType, initial: str = \"\") -> None:\n \"\"\"Initialize a TabbedContent widgets.\n\n Args:\n *titles: Positional argument will be used as title.\n initial: The id of the initial tab, or empty string to select the first tab.\n \"\"\"\n self.titles = [self.render_str(title) for title in titles]\n self._tab_content: list[Widget] = []\n self._initial = initial\n super().__init__()\n\n def validate_active(self, active: str) -> str:\n \"\"\"It doesn't make sense for `active` to be an empty string.\n\n Args:\n active: Attribute to be validated.\n\n Returns:\n Value of `active`.\n\n Raises:\n ValueError: If the active attribute is set to empty string.\n \"\"\"\n if not active:\n raise ValueError(\"'active' tab must not be empty string.\")\n return active\n\n def compose(self) -> ComposeResult:\n \"\"\"Compose the tabbed content.\"\"\"\n\n def set_id(content: TabPane, new_id: str) -> TabPane:\n \"\"\"Set an id on the content, if not already present.\n\n Args:\n content: a TabPane.\n new_id: New `is` attribute, if it is not already set.\n\n Returns:\n The same TabPane.\n \"\"\"\n if content.id is None:\n content.id = new_id\n return content\n\n # Wrap content in a `TabPane` if required.\n pane_content = [\n (\n set_id(content, f\"tab-{index}\")\n if isinstance(content, TabPane)\n else TabPane(\n title or self.render_str(f\"Tab {index}\"), content, id=f\"tab-{index}\"\n )\n )\n for index, (title, content) in enumerate(\n zip_longest(self.titles, self._tab_content), 1\n )\n ]\n # Get a tab for each pane\n tabs = [\n ContentTab(content._title, content.id or \"\") for content in pane_content\n ]\n # Yield the tabs\n yield Tabs(*tabs, active=self._initial or None)\n # Yield the content switcher and panes\n with ContentSwitcher(initial=self._initial or None):\n yield from pane_content\n\n def compose_add_child(self, widget: Widget) -> None:\n \"\"\"When using the context manager compose syntax, we want to attach nodes to the switcher.\n\n Args:\n widget: A Widget to add.\n \"\"\"\n self._tab_content.append(widget)\n\n def _on_tabs_tab_activated(self, event: Tabs.TabActivated) -> None:\n \"\"\"User clicked a tab.\"\"\"\n event.stop()\n switcher = self.get_child_by_type(ContentSwitcher)\n assert isinstance(event.tab, ContentTab)\n switcher.current = event.tab.id\n self.active = event.tab.id\n self.post_message(\n TabbedContent.TabActivated(\n tabbed_content=self,\n tab=event.tab,\n )\n )\n\n def _on_tabs_cleared(self, event: Tabs.Cleared) -> None:\n \"\"\"All tabs were removed.\"\"\"\n event.stop()\n\n def watch_active(self, active: str) -> None:\n \"\"\"Switch tabs when the active 
attributes changes.\"\"\"\n self.get_child_by_type(Tabs).active = active\n", "path": "src/textual/widgets/_tabbed_content.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom itertools import zip_longest\n\nfrom rich.repr import Result\nfrom rich.text import Text, TextType\n\nfrom ..app import ComposeResult\nfrom ..message import Message\nfrom ..reactive import reactive\nfrom ..widget import Widget\nfrom ._content_switcher import ContentSwitcher\nfrom ._tabs import Tab, Tabs\n\n__all__ = [\n \"ContentTab\",\n \"TabbedContent\",\n \"TabPane\",\n]\n\n\nclass ContentTab(Tab):\n \"\"\"A Tab with an associated content id.\"\"\"\n\n def __init__(self, label: Text, content_id: str):\n \"\"\"Initialize a ContentTab.\n\n Args:\n label: The label to be displayed within the tab.\n content_id: The id of the content associated with the tab.\n \"\"\"\n super().__init__(label, id=content_id)\n\n\nclass TabPane(Widget):\n \"\"\"A container for switchable content, with additional title.\n\n This widget is intended to be used with [TabbedContent][textual.widgets.TabbedContent].\n\n \"\"\"\n\n DEFAULT_CSS = \"\"\"\n TabPane {\n height: auto;\n padding: 1 2;\n }\n \"\"\"\n\n def __init__(\n self,\n title: TextType,\n *children: Widget,\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n disabled: bool = False,\n ):\n \"\"\"Initialize a TabPane.\n\n Args:\n title: Title of the TabPane (will be displayed in a tab label).\n *children: Widget to go inside the TabPane.\n name: Optional name for the TabPane.\n id: Optional ID for the TabPane.\n classes: Optional initial classes for the widget.\n disabled: Whether the TabPane is disabled or not.\n \"\"\"\n self._title = self.render_str(title)\n super().__init__(\n *children, name=name, id=id, classes=classes, disabled=disabled\n )\n\n\nclass TabbedContent(Widget):\n \"\"\"A container with associated tabs to toggle content visibility.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n TabbedContent {\n height: auto;\n }\n TabbedContent > ContentSwitcher {\n height: auto;\n }\n \"\"\"\n\n class TabActivated(Message):\n \"\"\"Posted when the active tab changes.\"\"\"\n\n def __init__(self, tabbed_content: TabbedContent, tab: Tab) -> None:\n \"\"\"Initialize message.\n\n Args:\n tabbed_content: The TabbedContent widget.\n tab: The Tab widget that was selected (contains the tab label).\n \"\"\"\n self.tabbed_content = tabbed_content\n self.tab = tab\n super().__init__()\n\n def __rich_repr__(self) -> Result:\n yield self.tabbed_content\n yield self.tab\n\n def __init__(self, *titles: TextType, initial: str = \"\") -> None:\n \"\"\"Initialize a TabbedContent widgets.\n\n Args:\n *titles: Positional argument will be used as title.\n initial: The id of the initial tab, or empty string to select the first tab.\n \"\"\"\n self.titles = [self.render_str(title) for title in titles]\n self._tab_content: list[Widget] = []\n self._initial = initial\n super().__init__()\n\n @property\n def active(self) -> str:\n \"\"\"The ID of the active tab, or empty string if none are active.\"\"\"\n return self.get_child_by_type(Tabs).active\n\n @active.setter\n def active(self, active: str) -> None:\n if not active:\n raise ValueError(\"'active' tab must not be empty string.\")\n self.get_child_by_type(Tabs).active = active\n\n def compose(self) -> ComposeResult:\n \"\"\"Compose the tabbed content.\"\"\"\n\n def set_id(content: TabPane, new_id: str) -> TabPane:\n \"\"\"Set an id on the content, if not already present.\n\n Args:\n content: a TabPane.\n 
new_id: New `is` attribute, if it is not already set.\n\n Returns:\n The same TabPane.\n \"\"\"\n if content.id is None:\n content.id = new_id\n return content\n\n # Wrap content in a `TabPane` if required.\n pane_content = [\n (\n set_id(content, f\"tab-{index}\")\n if isinstance(content, TabPane)\n else TabPane(\n title or self.render_str(f\"Tab {index}\"), content, id=f\"tab-{index}\"\n )\n )\n for index, (title, content) in enumerate(\n zip_longest(self.titles, self._tab_content), 1\n )\n ]\n # Get a tab for each pane\n tabs = [\n ContentTab(content._title, content.id or \"\") for content in pane_content\n ]\n # Yield the tabs\n yield Tabs(*tabs, active=self._initial or None)\n # Yield the content switcher and panes\n with ContentSwitcher(initial=self._initial or None):\n yield from pane_content\n\n def compose_add_child(self, widget: Widget) -> None:\n \"\"\"When using the context manager compose syntax, we want to attach nodes to the switcher.\n\n Args:\n widget: A Widget to add.\n \"\"\"\n self._tab_content.append(widget)\n\n def _on_tabs_tab_activated(self, event: Tabs.TabActivated) -> None:\n \"\"\"User clicked a tab.\"\"\"\n event.stop()\n switcher = self.get_child_by_type(ContentSwitcher)\n assert isinstance(event.tab, ContentTab)\n switcher.current = event.tab.id\n self.post_message(\n TabbedContent.TabActivated(\n tabbed_content=self,\n tab=event.tab,\n )\n )\n\n def _on_tabs_cleared(self, event: Tabs.Cleared) -> None:\n \"\"\"All tabs were removed.\"\"\"\n event.stop()\n", "path": "src/textual/widgets/_tabbed_content.py"}]} | 2,717 | 495 |
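For reference, a minimal sketch of how the patched property-based `TabbedContent.active` above can be driven from an app-level binding. The app name, tab ids and labels are illustrative, and the snippet assumes the patched Textual build rather than the released one:

```python
from textual.app import App, ComposeResult
from textual.widgets import Label, TabbedContent, TabPane


class SwitchApp(App[None]):
    BINDINGS = [("n", "next_pane", "Next pane")]

    def compose(self) -> ComposeResult:
        with TabbedContent(initial="tab-1"):
            yield TabPane("One", Label("first"), id="tab-1")
            yield TabPane("Two", Label("second"), id="tab-2")

    def action_next_pane(self) -> None:
        tabbed = self.query_one(TabbedContent)
        # Reads and writes now both delegate to the inner Tabs widget, so the
        # displayed pane cannot drift away from the reported active tab.
        tabbed.active = "tab-2" if tabbed.active == "tab-1" else "tab-1"


if __name__ == "__main__":
    SwitchApp().run()
```

Because `active` is no longer an independent reactive, switching tabs by focusing pane children (as in the reproduction) and switching them programmatically both go through the single source of truth held by `Tabs`.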
gh_patches_debug_27308 | rasdani/github-patches | git_diff | pytorch__text-208 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
text/test/translation.py fails for custom paths
`text/test/translation.py` currently fails on the last section:
```python
train, val = datasets.TranslationDataset.splits(
path='.data/multi30k/', train='train',
validation='val', exts=('.de', '.en'),
fields=(DE, EN))
```
because `splits` expects TranslationDataset.name to be defined, but it isn't.
Possible fix: add `name = ''` to `TranslationDataset`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchtext/datasets/translation.py`
Content:
```
1 import os
2 import xml.etree.ElementTree as ET
3 import glob
4 import io
5
6 from .. import data
7
8
9 class TranslationDataset(data.Dataset):
10 """Defines a dataset for machine translation."""
11
12 @staticmethod
13 def sort_key(ex):
14 return data.interleave_keys(len(ex.src), len(ex.trg))
15
16 def __init__(self, path, exts, fields, **kwargs):
17 """Create a TranslationDataset given paths and fields.
18
19 Arguments:
20 path: Common prefix of paths to the data files for both languages.
21 exts: A tuple containing the extension to path for each language.
22 fields: A tuple containing the fields that will be used for data
23 in each language.
24 Remaining keyword arguments: Passed to the constructor of
25 data.Dataset.
26 """
27 if not isinstance(fields[0], (tuple, list)):
28 fields = [('src', fields[0]), ('trg', fields[1])]
29
30 src_path, trg_path = tuple(os.path.expanduser(path + x) for x in exts)
31
32 examples = []
33 with open(src_path) as src_file, open(trg_path) as trg_file:
34 for src_line, trg_line in zip(src_file, trg_file):
35 src_line, trg_line = src_line.strip(), trg_line.strip()
36 if src_line != '' and trg_line != '':
37 examples.append(data.Example.fromlist(
38 [src_line, trg_line], fields))
39
40 super(TranslationDataset, self).__init__(examples, fields, **kwargs)
41
42 @classmethod
43 def splits(cls, exts, fields, root='.data',
44 train='train', validation='val', test='test', **kwargs):
45 """Create dataset objects for splits of a TranslationDataset.
46
47 Arguments:
48
49 root: Root dataset storage directory. Default is '.data'.
50 exts: A tuple containing the extension to path for each language.
51 fields: A tuple containing the fields that will be used for data
52 in each language.
53 train: The prefix of the train data. Default: 'train'.
54 validation: The prefix of the validation data. Default: 'val'.
55 test: The prefix of the test data. Default: 'test'.
56 Remaining keyword arguments: Passed to the splits method of
57 Dataset.
58 """
59 path = cls.download(root)
60
61 train_data = None if train is None else cls(
62 os.path.join(path, train), exts, fields, **kwargs)
63 val_data = None if validation is None else cls(
64 os.path.join(path, validation), exts, fields, **kwargs)
65 test_data = None if test is None else cls(
66 os.path.join(path, test), exts, fields, **kwargs)
67 return tuple(d for d in (train_data, val_data, test_data)
68 if d is not None)
69
70
71 class Multi30k(TranslationDataset):
72 """The small-dataset WMT 2016 multimodal task, also known as Flickr30k"""
73
74 urls = ['http://www.quest.dcs.shef.ac.uk/wmt16_files_mmt/training.tar.gz',
75 'http://www.quest.dcs.shef.ac.uk/wmt16_files_mmt/validation.tar.gz',
76 'http://www.quest.dcs.shef.ac.uk/'
77 'wmt17_files_mmt/mmt_task1_test2016.tar.gz']
78 name = 'multi30k'
79 dirname = ''
80
81 @classmethod
82 def splits(cls, exts, fields, root='.data',
83 train='train', validation='val', test='test2016', **kwargs):
84 """Create dataset objects for splits of the Multi30k dataset.
85
86 Arguments:
87
88 root: Root dataset storage directory. Default is '.data'.
89 exts: A tuple containing the extension to path for each language.
90 fields: A tuple containing the fields that will be used for data
91 in each language.
92 train: The prefix of the train data. Default: 'train'.
93 validation: The prefix of the validation data. Default: 'val'.
94 test: The prefix of the test data. Default: 'test'.
95 Remaining keyword arguments: Passed to the splits method of
96 Dataset.
97 """
98 return super(Multi30k, cls).splits(
99 exts, fields, root, train, validation, test, **kwargs)
100
101
102 class IWSLT(TranslationDataset):
103 """The IWSLT 2016 TED talk translation task"""
104
105 base_url = 'https://wit3.fbk.eu/archive/2016-01//texts/{}/{}/{}.tgz'
106 name = 'iwslt'
107 base_dirname = '{}-{}'
108
109 @classmethod
110 def splits(cls, exts, fields, root='.data',
111 train='train', validation='IWSLT16.TED.tst2013',
112 test='IWSLT16.TED.tst2014', **kwargs):
113 """Create dataset objects for splits of the IWSLT dataset.
114
115 Arguments:
116
117 root: Root dataset storage directory. Default is '.data'.
118 exts: A tuple containing the extension to path for each language.
119 fields: A tuple containing the fields that will be used for data
120 in each language.
121 train: The prefix of the train data. Default: 'train'.
122 validation: The prefix of the validation data. Default: 'val'.
123 test: The prefix of the test data. Default: 'test'.
124 Remaining keyword arguments: Passed to the splits method of
125 Dataset.
126 """
127 cls.dirname = cls.base_dirname.format(exts[0][1:], exts[1][1:])
128 cls.urls = [cls.base_url.format(exts[0][1:], exts[1][1:], cls.dirname)]
129 check = os.path.join(root, cls.name, cls.dirname)
130 path = cls.download(root, check=check)
131
132 train = '.'.join([train, cls.dirname])
133 validation = '.'.join([validation, cls.dirname])
134 if test is not None:
135 test = '.'.join([test, cls.dirname])
136
137 if not os.path.exists(os.path.join(path, train) + exts[0]):
138 cls.clean(path)
139
140 train_data = None if train is None else cls(
141 os.path.join(path, train), exts, fields, **kwargs)
142 val_data = None if validation is None else cls(
143 os.path.join(path, validation), exts, fields, **kwargs)
144 test_data = None if test is None else cls(
145 os.path.join(path, test), exts, fields, **kwargs)
146 return tuple(d for d in (train_data, val_data, test_data)
147 if d is not None)
148
149 @staticmethod
150 def clean(path):
151 for f_xml in glob.iglob(os.path.join(path, '*.xml')):
152 print(f_xml)
153 f_txt = os.path.splitext(f_xml)[0]
154 with io.open(f_txt, mode='w', encoding='utf-8') as fd_txt:
155 root = ET.parse(f_xml).getroot()[0]
156 for doc in root.findall('doc'):
157 for e in doc.findall('seg'):
158 fd_txt.write(e.text.strip() + '\n')
159
160 xml_tags = ['<url', '<keywords', '<talkid', '<description',
161 '<reviewer', '<translator', '<title', '<speaker']
162 for f_orig in glob.iglob(os.path.join(path, 'train.tags*')):
163 print(f_orig)
164 f_txt = f_orig.replace('.tags', '')
165 with io.open(f_txt, mode='w', encoding='utf-8') as fd_txt, \
166 io.open(f_orig, mode='r', encoding='utf-8') as fd_orig:
167 for l in fd_orig:
168 if not any(tag in l for tag in xml_tags):
169 fd_txt.write(l.strip() + '\n')
170
171
172 class WMT14(TranslationDataset):
173 """The WMT 2014 English-German dataset, as preprocessed by Google Brain.
174
175 Though this download contains test sets from 2015 and 2016, the train set
176 differs slightly from WMT 2015 and 2016 and significantly from WMT 2017."""
177
178 urls = [('https://drive.google.com/uc?export=download&'
179 'id=0B_bZck-ksdkpM25jRUN2X2UxMm8', 'wmt16_en_de.tar.gz')]
180 name = 'wmt14'
181 dirname = ''
182
183 @classmethod
184 def splits(cls, exts, fields, root='.data',
185 train='train.tok.clean.bpe.32000',
186 validation='newstest2013.tok.bpe.32000',
187 test='newstest2014.tok.bpe.32000', **kwargs):
188 """Create dataset objects for splits of the WMT 2014 dataset.
189
190 Arguments:
191
192 root: Root dataset storage directory. Default is '.data'.
193 exts: A tuple containing the extensions for each language. Must be
194 either ('.en', '.de') or the reverse.
195 fields: A tuple containing the fields that will be used for data
196 in each language.
197 train: The prefix of the train data. Default:
198 'train.tok.clean.bpe.32000'.
199 validation: The prefix of the validation data. Default:
200 'newstest2013.tok.bpe.32000'.
201 test: The prefix of the test data. Default:
202 'newstest2014.tok.bpe.32000'.
203 Remaining keyword arguments: Passed to the splits method of
204 Dataset.
205 """
206 return super(WMT14, cls).splits(
207 exts, fields, root, train, validation, test, **kwargs)
208
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchtext/datasets/translation.py b/torchtext/datasets/translation.py
--- a/torchtext/datasets/translation.py
+++ b/torchtext/datasets/translation.py
@@ -40,12 +40,13 @@
super(TranslationDataset, self).__init__(examples, fields, **kwargs)
@classmethod
- def splits(cls, exts, fields, root='.data',
+ def splits(cls, exts, fields, path=None, root='.data',
train='train', validation='val', test='test', **kwargs):
"""Create dataset objects for splits of a TranslationDataset.
Arguments:
-
+ path (str): Common prefix of the splits' file paths, or None to use
+ the result of cls.download(root).
root: Root dataset storage directory. Default is '.data'.
exts: A tuple containing the extension to path for each language.
fields: A tuple containing the fields that will be used for data
@@ -56,7 +57,8 @@
Remaining keyword arguments: Passed to the splits method of
Dataset.
"""
- path = cls.download(root)
+ if path is None:
+ path = cls.download(root)
train_data = None if train is None else cls(
os.path.join(path, train), exts, fields, **kwargs)
| {"golden_diff": "diff --git a/torchtext/datasets/translation.py b/torchtext/datasets/translation.py\n--- a/torchtext/datasets/translation.py\n+++ b/torchtext/datasets/translation.py\n@@ -40,12 +40,13 @@\n super(TranslationDataset, self).__init__(examples, fields, **kwargs)\n \n @classmethod\n- def splits(cls, exts, fields, root='.data',\n+ def splits(cls, exts, fields, path=None, root='.data',\n train='train', validation='val', test='test', **kwargs):\n \"\"\"Create dataset objects for splits of a TranslationDataset.\n \n Arguments:\n-\n+ path (str): Common prefix of the splits' file paths, or None to use\n+ the result of cls.download(root).\n root: Root dataset storage directory. Default is '.data'.\n exts: A tuple containing the extension to path for each language.\n fields: A tuple containing the fields that will be used for data\n@@ -56,7 +57,8 @@\n Remaining keyword arguments: Passed to the splits method of\n Dataset.\n \"\"\"\n- path = cls.download(root)\n+ if path is None:\n+ path = cls.download(root)\n \n train_data = None if train is None else cls(\n os.path.join(path, train), exts, fields, **kwargs)\n", "issue": "text/test/translation.py fails for custom paths\n`text/test/translation.py` currently fails on the last section:\r\n\r\n```python\r\ntrain, val = datasets.TranslationDataset.splits(\r\n path='.data/multi30k/', train='train',\r\n validation='val', exts=('.de', '.en'),\r\n fields=(DE, EN))\r\n```\r\n\r\nbecause `splits` expects TranslationDataset.name to be defined, but it isn't.\r\nPossible fix: add `name = ''` to `TranslationDataset`\n", "before_files": [{"content": "import os\nimport xml.etree.ElementTree as ET\nimport glob\nimport io\n\nfrom .. import data\n\n\nclass TranslationDataset(data.Dataset):\n \"\"\"Defines a dataset for machine translation.\"\"\"\n\n @staticmethod\n def sort_key(ex):\n return data.interleave_keys(len(ex.src), len(ex.trg))\n\n def __init__(self, path, exts, fields, **kwargs):\n \"\"\"Create a TranslationDataset given paths and fields.\n\n Arguments:\n path: Common prefix of paths to the data files for both languages.\n exts: A tuple containing the extension to path for each language.\n fields: A tuple containing the fields that will be used for data\n in each language.\n Remaining keyword arguments: Passed to the constructor of\n data.Dataset.\n \"\"\"\n if not isinstance(fields[0], (tuple, list)):\n fields = [('src', fields[0]), ('trg', fields[1])]\n\n src_path, trg_path = tuple(os.path.expanduser(path + x) for x in exts)\n\n examples = []\n with open(src_path) as src_file, open(trg_path) as trg_file:\n for src_line, trg_line in zip(src_file, trg_file):\n src_line, trg_line = src_line.strip(), trg_line.strip()\n if src_line != '' and trg_line != '':\n examples.append(data.Example.fromlist(\n [src_line, trg_line], fields))\n\n super(TranslationDataset, self).__init__(examples, fields, **kwargs)\n\n @classmethod\n def splits(cls, exts, fields, root='.data',\n train='train', validation='val', test='test', **kwargs):\n \"\"\"Create dataset objects for splits of a TranslationDataset.\n\n Arguments:\n\n root: Root dataset storage directory. Default is '.data'.\n exts: A tuple containing the extension to path for each language.\n fields: A tuple containing the fields that will be used for data\n in each language.\n train: The prefix of the train data. Default: 'train'.\n validation: The prefix of the validation data. Default: 'val'.\n test: The prefix of the test data. 
Default: 'test'.\n Remaining keyword arguments: Passed to the splits method of\n Dataset.\n \"\"\"\n path = cls.download(root)\n\n train_data = None if train is None else cls(\n os.path.join(path, train), exts, fields, **kwargs)\n val_data = None if validation is None else cls(\n os.path.join(path, validation), exts, fields, **kwargs)\n test_data = None if test is None else cls(\n os.path.join(path, test), exts, fields, **kwargs)\n return tuple(d for d in (train_data, val_data, test_data)\n if d is not None)\n\n\nclass Multi30k(TranslationDataset):\n \"\"\"The small-dataset WMT 2016 multimodal task, also known as Flickr30k\"\"\"\n\n urls = ['http://www.quest.dcs.shef.ac.uk/wmt16_files_mmt/training.tar.gz',\n 'http://www.quest.dcs.shef.ac.uk/wmt16_files_mmt/validation.tar.gz',\n 'http://www.quest.dcs.shef.ac.uk/'\n 'wmt17_files_mmt/mmt_task1_test2016.tar.gz']\n name = 'multi30k'\n dirname = ''\n\n @classmethod\n def splits(cls, exts, fields, root='.data',\n train='train', validation='val', test='test2016', **kwargs):\n \"\"\"Create dataset objects for splits of the Multi30k dataset.\n\n Arguments:\n\n root: Root dataset storage directory. Default is '.data'.\n exts: A tuple containing the extension to path for each language.\n fields: A tuple containing the fields that will be used for data\n in each language.\n train: The prefix of the train data. Default: 'train'.\n validation: The prefix of the validation data. Default: 'val'.\n test: The prefix of the test data. Default: 'test'.\n Remaining keyword arguments: Passed to the splits method of\n Dataset.\n \"\"\"\n return super(Multi30k, cls).splits(\n exts, fields, root, train, validation, test, **kwargs)\n\n\nclass IWSLT(TranslationDataset):\n \"\"\"The IWSLT 2016 TED talk translation task\"\"\"\n\n base_url = 'https://wit3.fbk.eu/archive/2016-01//texts/{}/{}/{}.tgz'\n name = 'iwslt'\n base_dirname = '{}-{}'\n\n @classmethod\n def splits(cls, exts, fields, root='.data',\n train='train', validation='IWSLT16.TED.tst2013',\n test='IWSLT16.TED.tst2014', **kwargs):\n \"\"\"Create dataset objects for splits of the IWSLT dataset.\n\n Arguments:\n\n root: Root dataset storage directory. Default is '.data'.\n exts: A tuple containing the extension to path for each language.\n fields: A tuple containing the fields that will be used for data\n in each language.\n train: The prefix of the train data. Default: 'train'.\n validation: The prefix of the validation data. Default: 'val'.\n test: The prefix of the test data. 
Default: 'test'.\n Remaining keyword arguments: Passed to the splits method of\n Dataset.\n \"\"\"\n cls.dirname = cls.base_dirname.format(exts[0][1:], exts[1][1:])\n cls.urls = [cls.base_url.format(exts[0][1:], exts[1][1:], cls.dirname)]\n check = os.path.join(root, cls.name, cls.dirname)\n path = cls.download(root, check=check)\n\n train = '.'.join([train, cls.dirname])\n validation = '.'.join([validation, cls.dirname])\n if test is not None:\n test = '.'.join([test, cls.dirname])\n\n if not os.path.exists(os.path.join(path, train) + exts[0]):\n cls.clean(path)\n\n train_data = None if train is None else cls(\n os.path.join(path, train), exts, fields, **kwargs)\n val_data = None if validation is None else cls(\n os.path.join(path, validation), exts, fields, **kwargs)\n test_data = None if test is None else cls(\n os.path.join(path, test), exts, fields, **kwargs)\n return tuple(d for d in (train_data, val_data, test_data)\n if d is not None)\n\n @staticmethod\n def clean(path):\n for f_xml in glob.iglob(os.path.join(path, '*.xml')):\n print(f_xml)\n f_txt = os.path.splitext(f_xml)[0]\n with io.open(f_txt, mode='w', encoding='utf-8') as fd_txt:\n root = ET.parse(f_xml).getroot()[0]\n for doc in root.findall('doc'):\n for e in doc.findall('seg'):\n fd_txt.write(e.text.strip() + '\\n')\n\n xml_tags = ['<url', '<keywords', '<talkid', '<description',\n '<reviewer', '<translator', '<title', '<speaker']\n for f_orig in glob.iglob(os.path.join(path, 'train.tags*')):\n print(f_orig)\n f_txt = f_orig.replace('.tags', '')\n with io.open(f_txt, mode='w', encoding='utf-8') as fd_txt, \\\n io.open(f_orig, mode='r', encoding='utf-8') as fd_orig:\n for l in fd_orig:\n if not any(tag in l for tag in xml_tags):\n fd_txt.write(l.strip() + '\\n')\n\n\nclass WMT14(TranslationDataset):\n \"\"\"The WMT 2014 English-German dataset, as preprocessed by Google Brain.\n\n Though this download contains test sets from 2015 and 2016, the train set\n differs slightly from WMT 2015 and 2016 and significantly from WMT 2017.\"\"\"\n\n urls = [('https://drive.google.com/uc?export=download&'\n 'id=0B_bZck-ksdkpM25jRUN2X2UxMm8', 'wmt16_en_de.tar.gz')]\n name = 'wmt14'\n dirname = ''\n\n @classmethod\n def splits(cls, exts, fields, root='.data',\n train='train.tok.clean.bpe.32000',\n validation='newstest2013.tok.bpe.32000',\n test='newstest2014.tok.bpe.32000', **kwargs):\n \"\"\"Create dataset objects for splits of the WMT 2014 dataset.\n\n Arguments:\n\n root: Root dataset storage directory. Default is '.data'.\n exts: A tuple containing the extensions for each language. Must be\n either ('.en', '.de') or the reverse.\n fields: A tuple containing the fields that will be used for data\n in each language.\n train: The prefix of the train data. Default:\n 'train.tok.clean.bpe.32000'.\n validation: The prefix of the validation data. Default:\n 'newstest2013.tok.bpe.32000'.\n test: The prefix of the test data. Default:\n 'newstest2014.tok.bpe.32000'.\n Remaining keyword arguments: Passed to the splits method of\n Dataset.\n \"\"\"\n return super(WMT14, cls).splits(\n exts, fields, root, train, validation, test, **kwargs)\n", "path": "torchtext/datasets/translation.py"}], "after_files": [{"content": "import os\nimport xml.etree.ElementTree as ET\nimport glob\nimport io\n\nfrom .. 
import data\n\n\nclass TranslationDataset(data.Dataset):\n \"\"\"Defines a dataset for machine translation.\"\"\"\n\n @staticmethod\n def sort_key(ex):\n return data.interleave_keys(len(ex.src), len(ex.trg))\n\n def __init__(self, path, exts, fields, **kwargs):\n \"\"\"Create a TranslationDataset given paths and fields.\n\n Arguments:\n path: Common prefix of paths to the data files for both languages.\n exts: A tuple containing the extension to path for each language.\n fields: A tuple containing the fields that will be used for data\n in each language.\n Remaining keyword arguments: Passed to the constructor of\n data.Dataset.\n \"\"\"\n if not isinstance(fields[0], (tuple, list)):\n fields = [('src', fields[0]), ('trg', fields[1])]\n\n src_path, trg_path = tuple(os.path.expanduser(path + x) for x in exts)\n\n examples = []\n with open(src_path) as src_file, open(trg_path) as trg_file:\n for src_line, trg_line in zip(src_file, trg_file):\n src_line, trg_line = src_line.strip(), trg_line.strip()\n if src_line != '' and trg_line != '':\n examples.append(data.Example.fromlist(\n [src_line, trg_line], fields))\n\n super(TranslationDataset, self).__init__(examples, fields, **kwargs)\n\n @classmethod\n def splits(cls, exts, fields, path=None, root='.data',\n train='train', validation='val', test='test', **kwargs):\n \"\"\"Create dataset objects for splits of a TranslationDataset.\n\n Arguments:\n path (str): Common prefix of the splits' file paths, or None to use\n the result of cls.download(root).\n root: Root dataset storage directory. Default is '.data'.\n exts: A tuple containing the extension to path for each language.\n fields: A tuple containing the fields that will be used for data\n in each language.\n train: The prefix of the train data. Default: 'train'.\n validation: The prefix of the validation data. Default: 'val'.\n test: The prefix of the test data. Default: 'test'.\n Remaining keyword arguments: Passed to the splits method of\n Dataset.\n \"\"\"\n if path is None:\n path = cls.download(root)\n\n train_data = None if train is None else cls(\n os.path.join(path, train), exts, fields, **kwargs)\n val_data = None if validation is None else cls(\n os.path.join(path, validation), exts, fields, **kwargs)\n test_data = None if test is None else cls(\n os.path.join(path, test), exts, fields, **kwargs)\n return tuple(d for d in (train_data, val_data, test_data)\n if d is not None)\n\n\nclass Multi30k(TranslationDataset):\n \"\"\"The small-dataset WMT 2016 multimodal task, also known as Flickr30k\"\"\"\n\n urls = ['http://www.quest.dcs.shef.ac.uk/wmt16_files_mmt/training.tar.gz',\n 'http://www.quest.dcs.shef.ac.uk/wmt16_files_mmt/validation.tar.gz',\n 'http://www.quest.dcs.shef.ac.uk/'\n 'wmt17_files_mmt/mmt_task1_test2016.tar.gz']\n name = 'multi30k'\n dirname = ''\n\n @classmethod\n def splits(cls, exts, fields, root='.data',\n train='train', validation='val', test='test2016', **kwargs):\n \"\"\"Create dataset objects for splits of the Multi30k dataset.\n\n Arguments:\n\n root: Root dataset storage directory. Default is '.data'.\n exts: A tuple containing the extension to path for each language.\n fields: A tuple containing the fields that will be used for data\n in each language.\n train: The prefix of the train data. Default: 'train'.\n validation: The prefix of the validation data. Default: 'val'.\n test: The prefix of the test data. 
Default: 'test'.\n Remaining keyword arguments: Passed to the splits method of\n Dataset.\n \"\"\"\n return super(Multi30k, cls).splits(\n exts, fields, root, train, validation, test, **kwargs)\n\n\nclass IWSLT(TranslationDataset):\n \"\"\"The IWSLT 2016 TED talk translation task\"\"\"\n\n base_url = 'https://wit3.fbk.eu/archive/2016-01//texts/{}/{}/{}.tgz'\n name = 'iwslt'\n base_dirname = '{}-{}'\n\n @classmethod\n def splits(cls, exts, fields, root='.data',\n train='train', validation='IWSLT16.TED.tst2013',\n test='IWSLT16.TED.tst2014', **kwargs):\n \"\"\"Create dataset objects for splits of the IWSLT dataset.\n\n Arguments:\n\n root: Root dataset storage directory. Default is '.data'.\n exts: A tuple containing the extension to path for each language.\n fields: A tuple containing the fields that will be used for data\n in each language.\n train: The prefix of the train data. Default: 'train'.\n validation: The prefix of the validation data. Default: 'val'.\n test: The prefix of the test data. Default: 'test'.\n Remaining keyword arguments: Passed to the splits method of\n Dataset.\n \"\"\"\n cls.dirname = cls.base_dirname.format(exts[0][1:], exts[1][1:])\n cls.urls = [cls.base_url.format(exts[0][1:], exts[1][1:], cls.dirname)]\n check = os.path.join(root, cls.name, cls.dirname)\n path = cls.download(root, check=check)\n\n train = '.'.join([train, cls.dirname])\n validation = '.'.join([validation, cls.dirname])\n if test is not None:\n test = '.'.join([test, cls.dirname])\n\n if not os.path.exists(os.path.join(path, train) + exts[0]):\n cls.clean(path)\n\n train_data = None if train is None else cls(\n os.path.join(path, train), exts, fields, **kwargs)\n val_data = None if validation is None else cls(\n os.path.join(path, validation), exts, fields, **kwargs)\n test_data = None if test is None else cls(\n os.path.join(path, test), exts, fields, **kwargs)\n return tuple(d for d in (train_data, val_data, test_data)\n if d is not None)\n\n @staticmethod\n def clean(path):\n for f_xml in glob.iglob(os.path.join(path, '*.xml')):\n print(f_xml)\n f_txt = os.path.splitext(f_xml)[0]\n with io.open(f_txt, mode='w', encoding='utf-8') as fd_txt:\n root = ET.parse(f_xml).getroot()[0]\n for doc in root.findall('doc'):\n for e in doc.findall('seg'):\n fd_txt.write(e.text.strip() + '\\n')\n\n xml_tags = ['<url', '<keywords', '<talkid', '<description',\n '<reviewer', '<translator', '<title', '<speaker']\n for f_orig in glob.iglob(os.path.join(path, 'train.tags*')):\n print(f_orig)\n f_txt = f_orig.replace('.tags', '')\n with io.open(f_txt, mode='w', encoding='utf-8') as fd_txt, \\\n io.open(f_orig, mode='r', encoding='utf-8') as fd_orig:\n for l in fd_orig:\n if not any(tag in l for tag in xml_tags):\n fd_txt.write(l.strip() + '\\n')\n\n\nclass WMT14(TranslationDataset):\n \"\"\"The WMT 2014 English-German dataset, as preprocessed by Google Brain.\n\n Though this download contains test sets from 2015 and 2016, the train set\n differs slightly from WMT 2015 and 2016 and significantly from WMT 2017.\"\"\"\n\n urls = [('https://drive.google.com/uc?export=download&'\n 'id=0B_bZck-ksdkpM25jRUN2X2UxMm8', 'wmt16_en_de.tar.gz')]\n name = 'wmt14'\n dirname = ''\n\n @classmethod\n def splits(cls, exts, fields, root='.data',\n train='train.tok.clean.bpe.32000',\n validation='newstest2013.tok.bpe.32000',\n test='newstest2014.tok.bpe.32000', **kwargs):\n \"\"\"Create dataset objects for splits of the WMT 2014 dataset.\n\n Arguments:\n\n root: Root dataset storage directory. 
Default is '.data'.\n exts: A tuple containing the extensions for each language. Must be\n either ('.en', '.de') or the reverse.\n fields: A tuple containing the fields that will be used for data\n in each language.\n train: The prefix of the train data. Default:\n 'train.tok.clean.bpe.32000'.\n validation: The prefix of the validation data. Default:\n 'newstest2013.tok.bpe.32000'.\n test: The prefix of the test data. Default:\n 'newstest2014.tok.bpe.32000'.\n Remaining keyword arguments: Passed to the splits method of\n Dataset.\n \"\"\"\n return super(WMT14, cls).splits(\n exts, fields, root, train, validation, test, **kwargs)\n", "path": "torchtext/datasets/translation.py"}]} | 3,041 | 299 |
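Grounded in the issue's own snippet, the patched `splits` above can be exercised against a local copy of the Multi30k files without relying on `cls.name`/`cls.download`. This sketch assumes the `.data/multi30k/train.*` and `val.*` files already exist locally and passes `test=None` so that only the two requested splits are built:

```python
from torchtext import data, datasets

DE = data.Field()  # default whitespace tokenizer
EN = data.Field()

train, val = datasets.TranslationDataset.splits(
    path='.data/multi30k/', train='train', validation='val', test=None,
    exts=('.de', '.en'), fields=(DE, EN))

print(len(train), len(val))
print(vars(train[0])['src'][:5], vars(train[0])['trg'][:5])
```

With `path=None` (the new default) the behaviour falls back to `cls.download(root)`, so subclasses such as `Multi30k`, `IWSLT`, and `WMT14` are unaffected.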
gh_patches_debug_35444 | rasdani/github-patches | git_diff | InstaPy__InstaPy-831 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot focus element error
I'm running the script on mac and after some time i get this error:
Message: unknown error: cannot focus element
(Session info: chrome=61.0.3163.100)
(Driver info: chromedriver=2.32.498537 (cb2f855cbc7b82e20387eaf9a43f6b99b6105061),platform=Mac OS X 10.12.3 x86_64)
Now I'm trying to update chromedriver with all of the packages to check whether they are the reason, but does anybody else get this error?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `instapy/comment_util.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Module which handles the commenting features"""
3 from random import choice
4 from .time_util import sleep
5 import emoji
6
7
8 def comment_image(browser, comments):
9 """Checks if it should comment on the image"""
10 rand_comment = (choice(comments))
11 rand_comment = emoji.demojize(rand_comment)
12 rand_comment = emoji.emojize(rand_comment, use_aliases=True)
13
14 comment_input = browser.find_elements_by_xpath(
15 '//textarea[@placeholder = "Add a comment…"]')
16 if len(comment_input) <= 0:
17 comment_input = browser.find_elements_by_xpath(
18 '//input[@placeholder = "Add a comment…"]')
19
20 if len(comment_input) > 0:
21 browser.execute_script(
22 "arguments[0].value = '" + rand_comment + " ';", comment_input[0])
23 # An extra space is added here and then deleted.
24 # This forces the input box to update the reactJS core
25 comment_input[0].send_keys("\b")
26 comment_input[0].submit()
27 else:
28 print('--> Warning: Comment Action Likely Failed:'
29 ' Comment Element not found')
30
31 print("--> Commented: {}".format(rand_comment.encode('utf-8')))
32 sleep(2)
33
34 return 1
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/instapy/comment_util.py b/instapy/comment_util.py
--- a/instapy/comment_util.py
+++ b/instapy/comment_util.py
@@ -2,8 +2,31 @@
"""Module which handles the commenting features"""
from random import choice
from .time_util import sleep
+from selenium.common.exceptions import WebDriverException
import emoji
+def get_comment_input(browser):
+ comment_input = browser.find_elements_by_xpath(
+ '//textarea[@placeholder = "Add a comment…"]')
+ if len(comment_input) <= 0:
+ comment_input = browser.find_elements_by_xpath(
+ '//input[@placeholder = "Add a comment…"]')
+ return comment_input
+
+def open_comment_section(browser):
+ missing_comment_elem_warning = (
+ '--> Warning: Comment Button Not Found:'
+ ' May cause issues with browser windows of smaller widths')
+ comment_elem = browser.find_elements_by_xpath(
+ "//a[@role='button']/span[text()='Comment']/..")
+ if len(comment_elem) > 0:
+ try:
+ browser.execute_script(
+ "arguments[0].click();", comment_elem[0])
+ except WebDriverException:
+ print(missing_comment_elem_warning)
+ else:
+ print(missing_comment_elem_warning)
def comment_image(browser, comments):
"""Checks if it should comment on the image"""
@@ -11,18 +34,19 @@
rand_comment = emoji.demojize(rand_comment)
rand_comment = emoji.emojize(rand_comment, use_aliases=True)
- comment_input = browser.find_elements_by_xpath(
- '//textarea[@placeholder = "Add a comment…"]')
- if len(comment_input) <= 0:
- comment_input = browser.find_elements_by_xpath(
- '//input[@placeholder = "Add a comment…"]')
+ open_comment_section(browser)
+ comment_input = get_comment_input(browser)
if len(comment_input) > 0:
+ comment_input[0].clear()
+ comment_input = get_comment_input(browser)
+
browser.execute_script(
"arguments[0].value = '" + rand_comment + " ';", comment_input[0])
# An extra space is added here and then deleted.
# This forces the input box to update the reactJS core
comment_input[0].send_keys("\b")
+ comment_input = get_comment_input(browser)
comment_input[0].submit()
else:
print('--> Warning: Comment Action Likely Failed:'
| {"golden_diff": "diff --git a/instapy/comment_util.py b/instapy/comment_util.py\n--- a/instapy/comment_util.py\n+++ b/instapy/comment_util.py\n@@ -2,8 +2,31 @@\n \"\"\"Module which handles the commenting features\"\"\"\n from random import choice\n from .time_util import sleep\n+from selenium.common.exceptions import WebDriverException\n import emoji\n \n+def get_comment_input(browser):\n+ comment_input = browser.find_elements_by_xpath(\n+ '//textarea[@placeholder = \"Add a comment\u2026\"]')\n+ if len(comment_input) <= 0:\n+ comment_input = browser.find_elements_by_xpath(\n+ '//input[@placeholder = \"Add a comment\u2026\"]')\n+ return comment_input\n+\n+def open_comment_section(browser):\n+ missing_comment_elem_warning = (\n+ '--> Warning: Comment Button Not Found:'\n+ ' May cause issues with browser windows of smaller widths')\n+ comment_elem = browser.find_elements_by_xpath(\n+ \"//a[@role='button']/span[text()='Comment']/..\")\n+ if len(comment_elem) > 0:\n+ try:\n+ browser.execute_script(\n+ \"arguments[0].click();\", comment_elem[0])\n+ except WebDriverException:\n+ print(missing_comment_elem_warning)\n+ else:\n+ print(missing_comment_elem_warning)\n \n def comment_image(browser, comments):\n \"\"\"Checks if it should comment on the image\"\"\"\n@@ -11,18 +34,19 @@\n rand_comment = emoji.demojize(rand_comment)\n rand_comment = emoji.emojize(rand_comment, use_aliases=True)\n \n- comment_input = browser.find_elements_by_xpath(\n- '//textarea[@placeholder = \"Add a comment\u2026\"]')\n- if len(comment_input) <= 0:\n- comment_input = browser.find_elements_by_xpath(\n- '//input[@placeholder = \"Add a comment\u2026\"]')\n+ open_comment_section(browser)\n+ comment_input = get_comment_input(browser)\n \n if len(comment_input) > 0:\n+ comment_input[0].clear()\n+ comment_input = get_comment_input(browser)\n+\n browser.execute_script(\n \"arguments[0].value = '\" + rand_comment + \" ';\", comment_input[0])\n # An extra space is added here and then deleted.\n # This forces the input box to update the reactJS core\n comment_input[0].send_keys(\"\\b\")\n+ comment_input = get_comment_input(browser)\n comment_input[0].submit()\n else:\n print('--> Warning: Comment Action Likely Failed:'\n", "issue": "Cannot focus element error\nI'm running the script on mac and after some time i get this error:\r\n\r\nMessage: unknown error: cannot focus element\r\n (Session info: chrome=61.0.3163.100)\r\n (Driver info: chromedriver=2.32.498537 (cb2f855cbc7b82e20387eaf9a43f6b99b6105061),platform=Mac OS X 10.12.3 x86_64)\r\n\r\nNow I'm trying to update chromedriver with all of the packages to check whether they are the reason, but does anybody else get this error?\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Module which handles the commenting features\"\"\"\nfrom random import choice\nfrom .time_util import sleep\nimport emoji\n\n\ndef comment_image(browser, comments):\n \"\"\"Checks if it should comment on the image\"\"\"\n rand_comment = (choice(comments))\n rand_comment = emoji.demojize(rand_comment)\n rand_comment = emoji.emojize(rand_comment, use_aliases=True)\n\n comment_input = browser.find_elements_by_xpath(\n '//textarea[@placeholder = \"Add a comment\u2026\"]')\n if len(comment_input) <= 0:\n comment_input = browser.find_elements_by_xpath(\n '//input[@placeholder = \"Add a comment\u2026\"]')\n\n if len(comment_input) > 0:\n browser.execute_script(\n \"arguments[0].value = '\" + rand_comment + \" ';\", comment_input[0])\n # An extra space is added here and then deleted.\n # This forces 
the input box to update the reactJS core\n comment_input[0].send_keys(\"\\b\")\n comment_input[0].submit()\n else:\n print('--> Warning: Comment Action Likely Failed:'\n ' Comment Element not found')\n\n print(\"--> Commented: {}\".format(rand_comment.encode('utf-8')))\n sleep(2)\n\n return 1\n", "path": "instapy/comment_util.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Module which handles the commenting features\"\"\"\nfrom random import choice\nfrom .time_util import sleep\nfrom selenium.common.exceptions import WebDriverException\nimport emoji\n\ndef get_comment_input(browser):\n comment_input = browser.find_elements_by_xpath(\n '//textarea[@placeholder = \"Add a comment\u2026\"]')\n if len(comment_input) <= 0:\n comment_input = browser.find_elements_by_xpath(\n '//input[@placeholder = \"Add a comment\u2026\"]')\n return comment_input\n\ndef open_comment_section(browser):\n missing_comment_elem_warning = (\n '--> Warning: Comment Button Not Found:'\n ' May cause issues with browser windows of smaller widths')\n comment_elem = browser.find_elements_by_xpath(\n \"//a[@role='button']/span[text()='Comment']/..\")\n if len(comment_elem) > 0:\n try:\n browser.execute_script(\n \"arguments[0].click();\", comment_elem[0])\n except WebDriverException:\n print(missing_comment_elem_warning)\n else:\n print(missing_comment_elem_warning)\n\ndef comment_image(browser, comments):\n \"\"\"Checks if it should comment on the image\"\"\"\n rand_comment = (choice(comments))\n rand_comment = emoji.demojize(rand_comment)\n rand_comment = emoji.emojize(rand_comment, use_aliases=True)\n\n open_comment_section(browser)\n comment_input = get_comment_input(browser)\n\n if len(comment_input) > 0:\n comment_input[0].clear()\n comment_input = get_comment_input(browser)\n\n browser.execute_script(\n \"arguments[0].value = '\" + rand_comment + \" ';\", comment_input[0])\n # An extra space is added here and then deleted.\n # This forces the input box to update the reactJS core\n comment_input[0].send_keys(\"\\b\")\n comment_input = get_comment_input(browser)\n comment_input[0].submit()\n else:\n print('--> Warning: Comment Action Likely Failed:'\n ' Comment Element not found')\n\n print(\"--> Commented: {}\".format(rand_comment.encode('utf-8')))\n sleep(2)\n\n return 1\n", "path": "instapy/comment_util.py"}]} | 753 | 549 |
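A rough driver-side sketch of the patched flow above. The post URL, the `browser` session, and the comment strings are placeholders, and a session that is not logged in would not actually be able to post:

```python
from selenium import webdriver
from instapy.comment_util import comment_image

browser = webdriver.Chrome()  # assumes a matching chromedriver on PATH
browser.get('https://www.instagram.com/p/SOME_POST_ID/')  # placeholder post

# comment_image() now opens the comment section first and re-locates the input
# element after clear()/send_keys(), so the re-rendered React node is the one
# that receives focus and the final submit().
comment_image(browser, [':thumbsup: great shot!', 'nice one'])

browser.quit()
```

The key behavioural change is that the element handle is refreshed via `get_comment_input()` before each interaction instead of being reused after Instagram re-renders it, which is the likeliest source of the "cannot focus element" error.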
gh_patches_debug_14352 | rasdani/github-patches | git_diff | gratipay__gratipay.com-2429 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
preclude adding stub participants as team members
Reticketed from #2362.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gittip/models/_mixin_team.py`
Content:
```
1 """Teams on Gittip are plural participants with members.
2 """
3 from decimal import Decimal
4
5 from aspen.utils import typecheck
6
7
8 class MemberLimitReached(Exception): pass
9
10
11 class MixinTeam(object):
12 """This class provides methods for working with a Participant as a Team.
13
14 :param Participant participant: the underlying :py:class:`~gittip.participant.Participant` object for this team
15
16 """
17
18 # XXX These were all written with the ORM and need to be converted.
19
20 def __init__(self, participant):
21 self.participant = participant
22
23 def show_as_team(self, user):
24 """Return a boolean, whether to show this participant as a team.
25 """
26 if not self.IS_PLURAL:
27 return False
28 if user.ADMIN:
29 return True
30 if not self.get_takes():
31 if self == user.participant:
32 return True
33 return False
34 return True
35
36 def add_member(self, member):
37 """Add a member to this team.
38 """
39 assert self.IS_PLURAL
40 if len(self.get_takes()) == 149:
41 raise MemberLimitReached
42 self.__set_take_for(member, Decimal('0.01'), self)
43
44 def remove_member(self, member):
45 """Remove a member from this team.
46 """
47 assert self.IS_PLURAL
48 self.__set_take_for(member, Decimal('0.00'), self)
49
50 def member_of(self, team):
51 """Given a Participant object, return a boolean.
52 """
53 assert team.IS_PLURAL
54 for take in team.get_takes():
55 if take['member'] == self.username:
56 return True
57 return False
58
59 def get_take_last_week_for(self, member):
60 """What did the user actually take most recently? Used in throttling.
61 """
62 assert self.IS_PLURAL
63 membername = member.username if hasattr(member, 'username') \
64 else member['username']
65 return self.db.one("""
66
67 SELECT amount
68 FROM transfers
69 WHERE tipper=%s AND tippee=%s
70 AND timestamp >
71 (SELECT ts_start FROM paydays ORDER BY ts_start DESC LIMIT 1)
72 ORDER BY timestamp DESC LIMIT 1
73
74 """, (self.username, membername), default=Decimal('0.00'))
75
76 def get_take_for(self, member):
77 """Return a Decimal representation of the take for this member, or 0.
78 """
79 assert self.IS_PLURAL
80 return self.db.one( "SELECT amount FROM current_takes "
81 "WHERE member=%s AND team=%s"
82 , (member.username, self.username)
83 , default=Decimal('0.00')
84 )
85
86 def compute_max_this_week(self, last_week):
87 """2x last week's take, but at least a dollar.
88 """
89 return max(last_week * Decimal('2'), Decimal('1.00'))
90
91 def set_take_for(self, member, take, recorder):
92 """Sets member's take from the team pool.
93 """
94 assert self.IS_PLURAL
95
96 # lazy import to avoid circular import
97 from gittip.security.user import User
98 from gittip.models.participant import Participant
99
100 typecheck( member, Participant
101 , take, Decimal
102 , recorder, (Participant, User)
103 )
104
105 last_week = self.get_take_last_week_for(member)
106 max_this_week = self.compute_max_this_week(last_week)
107 if take > max_this_week:
108 take = max_this_week
109
110 self.__set_take_for(member, take, recorder)
111 return take
112
113 def __set_take_for(self, member, amount, recorder):
114 assert self.IS_PLURAL
115 # XXX Factored out for testing purposes only! :O Use .set_take_for.
116 self.db.run("""
117
118 INSERT INTO takes (ctime, member, team, amount, recorder)
119 VALUES ( COALESCE (( SELECT ctime
120 FROM takes
121 WHERE member=%s
122 AND team=%s
123 LIMIT 1
124 ), CURRENT_TIMESTAMP)
125 , %s
126 , %s
127 , %s
128 , %s
129 )
130
131 """, (member.username, self.username, member.username, self.username, \
132 amount, recorder.username))
133
134 def get_takes(self, for_payday=False):
135 """Return a list of member takes for a team.
136
137 This is implemented parallel to Participant.get_tips_and_total. See
138 over there for an explanation of for_payday.
139
140 """
141 assert self.IS_PLURAL
142
143 args = dict(team=self.username)
144
145 if for_payday:
146 args['ts_start'] = for_payday
147
148 # Get the takes for this team, as they were before ts_start,
149 # filtering out the ones we've already transferred (in case payday
150 # is interrupted and restarted).
151
152 TAKES = """\
153
154 SELECT * FROM (
155 SELECT DISTINCT ON (member) t.*
156 FROM takes t
157 JOIN participants p ON p.username = member
158 WHERE team=%(team)s
159 AND mtime < %(ts_start)s
160 AND p.is_suspicious IS NOT true
161 AND ( SELECT id
162 FROM transfers
163 WHERE tipper=t.team
164 AND tippee=t.member
165 AND as_team_member IS true
166 AND timestamp >= %(ts_start)s
167 ) IS NULL
168 ORDER BY member, mtime DESC
169 ) AS foo
170 ORDER BY ctime DESC
171
172 """
173 else:
174 TAKES = """\
175
176 SELECT member, amount, ctime, mtime
177 FROM current_takes
178 WHERE team=%(team)s
179 ORDER BY ctime DESC
180
181 """
182
183 return self.db.all(TAKES, args, back_as=dict)
184
185 def get_team_take(self):
186 """Return a single take for a team, the team itself's take.
187 """
188 assert self.IS_PLURAL
189 TAKE = "SELECT sum(amount) FROM current_takes WHERE team=%s"
190 total_take = self.db.one(TAKE, (self.username,), default=0)
191 team_take = max(self.get_dollars_receiving() - total_take, 0)
192 membership = { "ctime": None
193 , "mtime": None
194 , "member": self.username
195 , "amount": team_take
196 }
197 return membership
198
199 def get_members(self, current_participant):
200 """Return a list of member dicts.
201 """
202 assert self.IS_PLURAL
203 takes = self.get_takes()
204 takes.append(self.get_team_take())
205 budget = balance = self.get_dollars_receiving()
206 members = []
207 for take in takes:
208 member = {}
209 member['username'] = take['member']
210 member['take'] = take['amount']
211
212 member['removal_allowed'] = current_participant == self
213 member['editing_allowed'] = False
214 member['is_current_user'] = False
215 if current_participant is not None:
216 if member['username'] == current_participant.username:
217 member['is_current_user'] = True
218 if take['ctime'] is not None:
219 # current user, but not the team itself
220 member['editing_allowed']= True
221
222 member['last_week'] = last_week = self.get_take_last_week_for(member)
223 member['max_this_week'] = self.compute_max_this_week(last_week)
224 amount = min(take['amount'], balance)
225 balance -= amount
226 member['balance'] = balance
227 member['percentage'] = (amount / budget) if budget > 0 else 0
228 members.append(member)
229 return members
230
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gittip/models/_mixin_team.py b/gittip/models/_mixin_team.py
--- a/gittip/models/_mixin_team.py
+++ b/gittip/models/_mixin_team.py
@@ -7,6 +7,7 @@
class MemberLimitReached(Exception): pass
+class StubParticipantAdded(Exception): pass
class MixinTeam(object):
"""This class provides methods for working with a Participant as a Team.
@@ -39,6 +40,8 @@
assert self.IS_PLURAL
if len(self.get_takes()) == 149:
raise MemberLimitReached
+ if not member.is_claimed:
+ raise StubParticipantAdded
self.__set_take_for(member, Decimal('0.01'), self)
def remove_member(self, member):
| {"golden_diff": "diff --git a/gittip/models/_mixin_team.py b/gittip/models/_mixin_team.py\n--- a/gittip/models/_mixin_team.py\n+++ b/gittip/models/_mixin_team.py\n@@ -7,6 +7,7 @@\n \n class MemberLimitReached(Exception): pass\n \n+class StubParticipantAdded(Exception): pass\n \n class MixinTeam(object):\n \"\"\"This class provides methods for working with a Participant as a Team.\n@@ -39,6 +40,8 @@\n assert self.IS_PLURAL\n if len(self.get_takes()) == 149:\n raise MemberLimitReached\n+ if not member.is_claimed:\n+ raise StubParticipantAdded\n self.__set_take_for(member, Decimal('0.01'), self)\n \n def remove_member(self, member):\n", "issue": "preclude adding stub participants as team members\nReticketed from #2362.\n\npreclude adding stub participants as team members\nReticketed from #2362.\n\n", "before_files": [{"content": "\"\"\"Teams on Gittip are plural participants with members.\n\"\"\"\nfrom decimal import Decimal\n\nfrom aspen.utils import typecheck\n\n\nclass MemberLimitReached(Exception): pass\n\n\nclass MixinTeam(object):\n \"\"\"This class provides methods for working with a Participant as a Team.\n\n :param Participant participant: the underlying :py:class:`~gittip.participant.Participant` object for this team\n\n \"\"\"\n\n # XXX These were all written with the ORM and need to be converted.\n\n def __init__(self, participant):\n self.participant = participant\n\n def show_as_team(self, user):\n \"\"\"Return a boolean, whether to show this participant as a team.\n \"\"\"\n if not self.IS_PLURAL:\n return False\n if user.ADMIN:\n return True\n if not self.get_takes():\n if self == user.participant:\n return True\n return False\n return True\n\n def add_member(self, member):\n \"\"\"Add a member to this team.\n \"\"\"\n assert self.IS_PLURAL\n if len(self.get_takes()) == 149:\n raise MemberLimitReached\n self.__set_take_for(member, Decimal('0.01'), self)\n\n def remove_member(self, member):\n \"\"\"Remove a member from this team.\n \"\"\"\n assert self.IS_PLURAL\n self.__set_take_for(member, Decimal('0.00'), self)\n\n def member_of(self, team):\n \"\"\"Given a Participant object, return a boolean.\n \"\"\"\n assert team.IS_PLURAL\n for take in team.get_takes():\n if take['member'] == self.username:\n return True\n return False\n\n def get_take_last_week_for(self, member):\n \"\"\"What did the user actually take most recently? 
Used in throttling.\n \"\"\"\n assert self.IS_PLURAL\n membername = member.username if hasattr(member, 'username') \\\n else member['username']\n return self.db.one(\"\"\"\n\n SELECT amount\n FROM transfers\n WHERE tipper=%s AND tippee=%s\n AND timestamp >\n (SELECT ts_start FROM paydays ORDER BY ts_start DESC LIMIT 1)\n ORDER BY timestamp DESC LIMIT 1\n\n \"\"\", (self.username, membername), default=Decimal('0.00'))\n\n def get_take_for(self, member):\n \"\"\"Return a Decimal representation of the take for this member, or 0.\n \"\"\"\n assert self.IS_PLURAL\n return self.db.one( \"SELECT amount FROM current_takes \"\n \"WHERE member=%s AND team=%s\"\n , (member.username, self.username)\n , default=Decimal('0.00')\n )\n\n def compute_max_this_week(self, last_week):\n \"\"\"2x last week's take, but at least a dollar.\n \"\"\"\n return max(last_week * Decimal('2'), Decimal('1.00'))\n\n def set_take_for(self, member, take, recorder):\n \"\"\"Sets member's take from the team pool.\n \"\"\"\n assert self.IS_PLURAL\n\n # lazy import to avoid circular import\n from gittip.security.user import User\n from gittip.models.participant import Participant\n\n typecheck( member, Participant\n , take, Decimal\n , recorder, (Participant, User)\n )\n\n last_week = self.get_take_last_week_for(member)\n max_this_week = self.compute_max_this_week(last_week)\n if take > max_this_week:\n take = max_this_week\n\n self.__set_take_for(member, take, recorder)\n return take\n\n def __set_take_for(self, member, amount, recorder):\n assert self.IS_PLURAL\n # XXX Factored out for testing purposes only! :O Use .set_take_for.\n self.db.run(\"\"\"\n\n INSERT INTO takes (ctime, member, team, amount, recorder)\n VALUES ( COALESCE (( SELECT ctime\n FROM takes\n WHERE member=%s\n AND team=%s\n LIMIT 1\n ), CURRENT_TIMESTAMP)\n , %s\n , %s\n , %s\n , %s\n )\n\n \"\"\", (member.username, self.username, member.username, self.username, \\\n amount, recorder.username))\n\n def get_takes(self, for_payday=False):\n \"\"\"Return a list of member takes for a team.\n\n This is implemented parallel to Participant.get_tips_and_total. 
See\n over there for an explanation of for_payday.\n\n \"\"\"\n assert self.IS_PLURAL\n\n args = dict(team=self.username)\n\n if for_payday:\n args['ts_start'] = for_payday\n\n # Get the takes for this team, as they were before ts_start,\n # filtering out the ones we've already transferred (in case payday\n # is interrupted and restarted).\n\n TAKES = \"\"\"\\\n\n SELECT * FROM (\n SELECT DISTINCT ON (member) t.*\n FROM takes t\n JOIN participants p ON p.username = member\n WHERE team=%(team)s\n AND mtime < %(ts_start)s\n AND p.is_suspicious IS NOT true\n AND ( SELECT id\n FROM transfers\n WHERE tipper=t.team\n AND tippee=t.member\n AND as_team_member IS true\n AND timestamp >= %(ts_start)s\n ) IS NULL\n ORDER BY member, mtime DESC\n ) AS foo\n ORDER BY ctime DESC\n\n \"\"\"\n else:\n TAKES = \"\"\"\\\n\n SELECT member, amount, ctime, mtime\n FROM current_takes\n WHERE team=%(team)s\n ORDER BY ctime DESC\n\n \"\"\"\n\n return self.db.all(TAKES, args, back_as=dict)\n\n def get_team_take(self):\n \"\"\"Return a single take for a team, the team itself's take.\n \"\"\"\n assert self.IS_PLURAL\n TAKE = \"SELECT sum(amount) FROM current_takes WHERE team=%s\"\n total_take = self.db.one(TAKE, (self.username,), default=0)\n team_take = max(self.get_dollars_receiving() - total_take, 0)\n membership = { \"ctime\": None\n , \"mtime\": None\n , \"member\": self.username\n , \"amount\": team_take\n }\n return membership\n\n def get_members(self, current_participant):\n \"\"\"Return a list of member dicts.\n \"\"\"\n assert self.IS_PLURAL\n takes = self.get_takes()\n takes.append(self.get_team_take())\n budget = balance = self.get_dollars_receiving()\n members = []\n for take in takes:\n member = {}\n member['username'] = take['member']\n member['take'] = take['amount']\n\n member['removal_allowed'] = current_participant == self\n member['editing_allowed'] = False\n member['is_current_user'] = False\n if current_participant is not None:\n if member['username'] == current_participant.username:\n member['is_current_user'] = True\n if take['ctime'] is not None:\n # current user, but not the team itself\n member['editing_allowed']= True\n\n member['last_week'] = last_week = self.get_take_last_week_for(member)\n member['max_this_week'] = self.compute_max_this_week(last_week)\n amount = min(take['amount'], balance)\n balance -= amount\n member['balance'] = balance\n member['percentage'] = (amount / budget) if budget > 0 else 0\n members.append(member)\n return members\n", "path": "gittip/models/_mixin_team.py"}], "after_files": [{"content": "\"\"\"Teams on Gittip are plural participants with members.\n\"\"\"\nfrom decimal import Decimal\n\nfrom aspen.utils import typecheck\n\n\nclass MemberLimitReached(Exception): pass\n\nclass StubParticipantAdded(Exception): pass\n\nclass MixinTeam(object):\n \"\"\"This class provides methods for working with a Participant as a Team.\n\n :param Participant participant: the underlying :py:class:`~gittip.participant.Participant` object for this team\n\n \"\"\"\n\n # XXX These were all written with the ORM and need to be converted.\n\n def __init__(self, participant):\n self.participant = participant\n\n def show_as_team(self, user):\n \"\"\"Return a boolean, whether to show this participant as a team.\n \"\"\"\n if not self.IS_PLURAL:\n return False\n if user.ADMIN:\n return True\n if not self.get_takes():\n if self == user.participant:\n return True\n return False\n return True\n\n def add_member(self, member):\n \"\"\"Add a member to this team.\n \"\"\"\n assert 
self.IS_PLURAL\n if len(self.get_takes()) == 149:\n raise MemberLimitReached\n if not member.is_claimed:\n raise StubParticipantAdded\n self.__set_take_for(member, Decimal('0.01'), self)\n\n def remove_member(self, member):\n \"\"\"Remove a member from this team.\n \"\"\"\n assert self.IS_PLURAL\n self.__set_take_for(member, Decimal('0.00'), self)\n\n def member_of(self, team):\n \"\"\"Given a Participant object, return a boolean.\n \"\"\"\n assert team.IS_PLURAL\n for take in team.get_takes():\n if take['member'] == self.username:\n return True\n return False\n\n def get_take_last_week_for(self, member):\n \"\"\"What did the user actually take most recently? Used in throttling.\n \"\"\"\n assert self.IS_PLURAL\n membername = member.username if hasattr(member, 'username') \\\n else member['username']\n return self.db.one(\"\"\"\n\n SELECT amount\n FROM transfers\n WHERE tipper=%s AND tippee=%s\n AND timestamp >\n (SELECT ts_start FROM paydays ORDER BY ts_start DESC LIMIT 1)\n ORDER BY timestamp DESC LIMIT 1\n\n \"\"\", (self.username, membername), default=Decimal('0.00'))\n\n def get_take_for(self, member):\n \"\"\"Return a Decimal representation of the take for this member, or 0.\n \"\"\"\n assert self.IS_PLURAL\n return self.db.one( \"SELECT amount FROM current_takes \"\n \"WHERE member=%s AND team=%s\"\n , (member.username, self.username)\n , default=Decimal('0.00')\n )\n\n def compute_max_this_week(self, last_week):\n \"\"\"2x last week's take, but at least a dollar.\n \"\"\"\n return max(last_week * Decimal('2'), Decimal('1.00'))\n\n def set_take_for(self, member, take, recorder):\n \"\"\"Sets member's take from the team pool.\n \"\"\"\n assert self.IS_PLURAL\n\n # lazy import to avoid circular import\n from gittip.security.user import User\n from gittip.models.participant import Participant\n\n typecheck( member, Participant\n , take, Decimal\n , recorder, (Participant, User)\n )\n\n last_week = self.get_take_last_week_for(member)\n max_this_week = self.compute_max_this_week(last_week)\n if take > max_this_week:\n take = max_this_week\n\n self.__set_take_for(member, take, recorder)\n return take\n\n def __set_take_for(self, member, amount, recorder):\n assert self.IS_PLURAL\n # XXX Factored out for testing purposes only! :O Use .set_take_for.\n self.db.run(\"\"\"\n\n INSERT INTO takes (ctime, member, team, amount, recorder)\n VALUES ( COALESCE (( SELECT ctime\n FROM takes\n WHERE member=%s\n AND team=%s\n LIMIT 1\n ), CURRENT_TIMESTAMP)\n , %s\n , %s\n , %s\n , %s\n )\n\n \"\"\", (member.username, self.username, member.username, self.username, \\\n amount, recorder.username))\n\n def get_takes(self, for_payday=False):\n \"\"\"Return a list of member takes for a team.\n\n This is implemented parallel to Participant.get_tips_and_total. 
See\n over there for an explanation of for_payday.\n\n \"\"\"\n assert self.IS_PLURAL\n\n args = dict(team=self.username)\n\n if for_payday:\n args['ts_start'] = for_payday\n\n # Get the takes for this team, as they were before ts_start,\n # filtering out the ones we've already transferred (in case payday\n # is interrupted and restarted).\n\n TAKES = \"\"\"\\\n\n SELECT * FROM (\n SELECT DISTINCT ON (member) t.*\n FROM takes t\n JOIN participants p ON p.username = member\n WHERE team=%(team)s\n AND mtime < %(ts_start)s\n AND p.is_suspicious IS NOT true\n AND ( SELECT id\n FROM transfers\n WHERE tipper=t.team\n AND tippee=t.member\n AND as_team_member IS true\n AND timestamp >= %(ts_start)s\n ) IS NULL\n ORDER BY member, mtime DESC\n ) AS foo\n ORDER BY ctime DESC\n\n \"\"\"\n else:\n TAKES = \"\"\"\\\n\n SELECT member, amount, ctime, mtime\n FROM current_takes\n WHERE team=%(team)s\n ORDER BY ctime DESC\n\n \"\"\"\n\n return self.db.all(TAKES, args, back_as=dict)\n\n def get_team_take(self):\n \"\"\"Return a single take for a team, the team itself's take.\n \"\"\"\n assert self.IS_PLURAL\n TAKE = \"SELECT sum(amount) FROM current_takes WHERE team=%s\"\n total_take = self.db.one(TAKE, (self.username,), default=0)\n team_take = max(self.get_dollars_receiving() - total_take, 0)\n membership = { \"ctime\": None\n , \"mtime\": None\n , \"member\": self.username\n , \"amount\": team_take\n }\n return membership\n\n def get_members(self, current_participant):\n \"\"\"Return a list of member dicts.\n \"\"\"\n assert self.IS_PLURAL\n takes = self.get_takes()\n takes.append(self.get_team_take())\n budget = balance = self.get_dollars_receiving()\n members = []\n for take in takes:\n member = {}\n member['username'] = take['member']\n member['take'] = take['amount']\n\n member['removal_allowed'] = current_participant == self\n member['editing_allowed'] = False\n member['is_current_user'] = False\n if current_participant is not None:\n if member['username'] == current_participant.username:\n member['is_current_user'] = True\n if take['ctime'] is not None:\n # current user, but not the team itself\n member['editing_allowed']= True\n\n member['last_week'] = last_week = self.get_take_last_week_for(member)\n member['max_this_week'] = self.compute_max_this_week(last_week)\n amount = min(take['amount'], balance)\n balance -= amount\n member['balance'] = balance\n member['percentage'] = (amount / budget) if budget > 0 else 0\n members.append(member)\n return members\n", "path": "gittip/models/_mixin_team.py"}]} | 2,531 | 178 |
gh_patches_debug_24835 | rasdani/github-patches | git_diff | napari__napari-589 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
novel gene discovery in Napari (a.k.a. imperfect layer name incrementing)
## 🐛 Bug
napari does a smart thing and adds an integer to the name of a layer if the name is already in use in the viewer. It then increments that number when another layer is added that fits the pattern. This is great until you have layer names that already end in numbers,
in which case the incremented names are sometimes real gene names and sometimes not.
## To Reproduce
Steps to reproduce the behavior:
```
import napari
import numpy as np

v = napari.Viewer()
for i in range(3):
v.add_points(1000*np.random.rand(10,2), name = "no_problem")
for i in range(3):
v.add_points(1000*np.random.rand(10,2), name = "problem:GAD1")
```
## Expected behavior
consistent use of a space or other character to separate the auto-increment integer from the original string
--- END ISSUE ---
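To see why the names in the issue collide with real gene symbols, here is a minimal sketch that paraphrases the suffix-increment helper from `napari/util/naming.py` (the module quoted in the files section below); the regex is the one currently in the module, while the small `bump` callback stands in for `_inc_name_count_sub`.

```python
import re

# Current suffix pattern from napari/util/naming.py: a trailing run of digits, or nothing.
numbered_patt = re.compile(r'(?<!\d)(?:\d+|)$')

def inc_name_count(name):
    def bump(match):
        count = match.group(0)
        return ' 1' if not count else str(int(count) + 1)  # sep = ' ', start = 1
    return numbered_patt.sub(bump, name)

print(inc_name_count("no_problem"))    # 'no_problem 1'  (harmless)
print(inc_name_count("problem:GAD1"))  # 'problem:GAD2'  (reads like a different gene)
```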
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `napari/util/naming.py`
Content:
```
1 """Automatically generate names.
2 """
3 import re
4 from .misc import formatdoc
5
6
7 sep = ' '
8 start = 1
9
10 numbered_patt = re.compile(r'(?<!\d)(?:\d+|)$')
11
12
13 def _inc_name_count_sub(match):
14 count = match.group(0)
15
16 try:
17 count = int(count)
18 except ValueError: # not an int
19 count = f'{sep}{start}'
20 else:
21 count = f'{count + 1}'
22
23 return count
24
25
26 @formatdoc
27 def inc_name_count(name):
28 """Increase a name's count matching `{numbered_patt}` by ``1``.
29
30 If the name is not already numbered, append '{sep}{start}'.
31
32 Parameters
33 ----------
34 name : str
35 Original name.
36
37 Returns
38 -------
39 incremented_name : str
40 Numbered name incremented by ``1``.
41 """
42 return numbered_patt.sub(_inc_name_count_sub, name)
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/napari/util/naming.py b/napari/util/naming.py
--- a/napari/util/naming.py
+++ b/napari/util/naming.py
@@ -7,7 +7,9 @@
sep = ' '
start = 1
-numbered_patt = re.compile(r'(?<!\d)(?:\d+|)$')
+# Match integer between square brackets at end of string if after space
+# or at beginning of string or just match end of string
+numbered_patt = re.compile(r'((?<=\A\[)|(?<=\s\[))(?:\d+|)(?=\]$)|$')
def _inc_name_count_sub(match):
@@ -16,7 +18,7 @@
try:
count = int(count)
except ValueError: # not an int
- count = f'{sep}{start}'
+ count = f'{sep}[{start}]'
else:
count = f'{count + 1}'
@@ -27,7 +29,7 @@
def inc_name_count(name):
"""Increase a name's count matching `{numbered_patt}` by ``1``.
- If the name is not already numbered, append '{sep}{start}'.
+ If the name is not already numbered, append '{sep}[{start}]'.
Parameters
----------
@@ -39,4 +41,4 @@
incremented_name : str
Numbered name incremented by ``1``.
"""
- return numbered_patt.sub(_inc_name_count_sub, name)
+ return numbered_patt.sub(_inc_name_count_sub, name, count=1)
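Assuming the patch above is applied, the counter only lives inside trailing square brackets, so names that happen to end in digits are no longer touched. A quick paraphrased check of the new behaviour:

```python
import re

# Patched pattern: only an integer inside trailing square brackets counts as the counter.
numbered_patt = re.compile(r'((?<=\A\[)|(?<=\s\[))(?:\d+|)(?=\]$)|$')

def inc_name_count(name):
    def bump(match):
        count = match.group(0)
        return ' [1]' if not count else str(int(count) + 1)  # sep = ' ', start = 1
    return numbered_patt.sub(bump, name, count=1)

print(inc_name_count("problem:GAD1"))      # 'problem:GAD1 [1]'  (gene name left alone)
print(inc_name_count("problem:GAD1 [1]"))  # 'problem:GAD1 [2]'
```

The added `count=1` matters because the pattern can also match the empty string at the very end of the name; limiting substitution to the first match keeps a bracketed counter from being incremented and appended at the same time.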
| {"golden_diff": "diff --git a/napari/util/naming.py b/napari/util/naming.py\n--- a/napari/util/naming.py\n+++ b/napari/util/naming.py\n@@ -7,7 +7,9 @@\n sep = ' '\n start = 1\n \n-numbered_patt = re.compile(r'(?<!\\d)(?:\\d+|)$')\n+# Match integer between square brackets at end of string if after space\n+# or at beginning of string or just match end of string\n+numbered_patt = re.compile(r'((?<=\\A\\[)|(?<=\\s\\[))(?:\\d+|)(?=\\]$)|$')\n \n \n def _inc_name_count_sub(match):\n@@ -16,7 +18,7 @@\n try:\n count = int(count)\n except ValueError: # not an int\n- count = f'{sep}{start}'\n+ count = f'{sep}[{start}]'\n else:\n count = f'{count + 1}'\n \n@@ -27,7 +29,7 @@\n def inc_name_count(name):\n \"\"\"Increase a name's count matching `{numbered_patt}` by ``1``.\n \n- If the name is not already numbered, append '{sep}{start}'.\n+ If the name is not already numbered, append '{sep}[{start}]'.\n \n Parameters\n ----------\n@@ -39,4 +41,4 @@\n incremented_name : str\n Numbered name incremented by ``1``.\n \"\"\"\n- return numbered_patt.sub(_inc_name_count_sub, name)\n+ return numbered_patt.sub(_inc_name_count_sub, name, count=1)\n", "issue": "novel gene discovery in Napari (a.k.a. imperfect layer name incrementing)\n## \ud83d\udc1b Bug\r\nnapari does a smart thing and adds an integer to the name of a layer if the name is already in use in the viewer. It then increments that number when another layer is added that fits the pattern. This is great until you have layer names that end in numbers \r\n\r\nwhich in this case results in different names that are sometimes real genes and sometimes not.\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n```\r\nv = napari.Viewer()\r\nfor i in range(3):\r\n v.add_points(1000*np.random.rand(10,2), name = \"no_problem\")\r\nfor i in range(3):\r\n v.add_points(1000*np.random.rand(10,2), name = \"problem:GAD1\")\r\n```\r\n\r\n## Expected behavior\r\nconsistent use of a space or other character to separate the auto-increment integer from the original string\r\n\n", "before_files": [{"content": "\"\"\"Automatically generate names.\n\"\"\"\nimport re\nfrom .misc import formatdoc\n\n\nsep = ' '\nstart = 1\n\nnumbered_patt = re.compile(r'(?<!\\d)(?:\\d+|)$')\n\n\ndef _inc_name_count_sub(match):\n count = match.group(0)\n\n try:\n count = int(count)\n except ValueError: # not an int\n count = f'{sep}{start}'\n else:\n count = f'{count + 1}'\n\n return count\n\n\n@formatdoc\ndef inc_name_count(name):\n \"\"\"Increase a name's count matching `{numbered_patt}` by ``1``.\n\n If the name is not already numbered, append '{sep}{start}'.\n\n Parameters\n ----------\n name : str\n Original name.\n\n Returns\n -------\n incremented_name : str\n Numbered name incremented by ``1``.\n \"\"\"\n return numbered_patt.sub(_inc_name_count_sub, name)\n", "path": "napari/util/naming.py"}], "after_files": [{"content": "\"\"\"Automatically generate names.\n\"\"\"\nimport re\nfrom .misc import formatdoc\n\n\nsep = ' '\nstart = 1\n\n# Match integer between square brackets at end of string if after space\n# or at beginning of string or just match end of string\nnumbered_patt = re.compile(r'((?<=\\A\\[)|(?<=\\s\\[))(?:\\d+|)(?=\\]$)|$')\n\n\ndef _inc_name_count_sub(match):\n count = match.group(0)\n\n try:\n count = int(count)\n except ValueError: # not an int\n count = f'{sep}[{start}]'\n else:\n count = f'{count + 1}'\n\n return count\n\n\n@formatdoc\ndef inc_name_count(name):\n \"\"\"Increase a name's count matching `{numbered_patt}` by ``1``.\n\n If the name is not already 
numbered, append '{sep}[{start}]'.\n\n Parameters\n ----------\n name : str\n Original name.\n\n Returns\n -------\n incremented_name : str\n Numbered name incremented by ``1``.\n \"\"\"\n return numbered_patt.sub(_inc_name_count_sub, name, count=1)\n", "path": "napari/util/naming.py"}]} | 843 | 361 |
gh_patches_debug_1164 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-1907 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
German language is not appropriately used when using ConfirmPrompts
### The Issue
I am building a chatbot for German users. I am sending the locale "de-de" as the user locale, and can confirm that it actually arrives at the bot. When I want to use ConfirmPrompts, the bot returns "Yes" and "No" rather than "Ja" and "Nein".
### The Solution
After a lot of digging, I found the underlying cause and a fix. The culture model does not actually recognize German (de-de) as a supported language, and thus switches to the default (English). But in prompt_culture_models.py German actually exists, and there is a todo, "# TODO: Replace with Culture.German after Recognizers-Text package updates.", which I looked up; the Recognizers-Text package is already updated :) . Still, this is not the real issue.
The reason is that German is not listed in the supported cultures function. I simply added it and everything works fine.
```python
    @classmethod
def get_supported_cultures(cls) -> List[PromptCultureModel]:
"""
Gets a list of the supported culture models.
"""
return [
cls.Chinese,
cls.German,
cls.Dutch,
cls.English,
cls.French,
cls.Italian,
cls.Japanese,
cls.Korean,
cls.Portuguese,
cls.Spanish,
cls.Turkish,
        ]
```
--- END ISSUE ---
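The control flow that the report describes can be sketched in a few lines. The locale strings below are illustrative stand-ins for the `recognizers_text.Culture` constants used by the real module (quoted in the files section below); only the logic of `map_to_nearest_language` is reproduced.

```python
# Locales derived from get_supported_cultures() before the fix: no German entry.
supported_locales = ["zh-cn", "nl-nl", "en-us", "fr-fr", "it-it",
                     "ja-jp", "ko-kr", "pt-br", "es-es", "tr-tr"]  # illustrative values

def map_to_nearest_language(culture_code):
    culture_code = culture_code.lower()
    if culture_code not in supported_locales:
        prefix = culture_code.split("-")[0]
        for supported in supported_locales:
            if supported.startswith(prefix):
                culture_code = supported
    return culture_code

print(map_to_nearest_language("de-de"))  # 'de-de' comes back unmapped, and per the report
                                         # the prompt then falls back to its English defaults
```

Once `cls.German` is part of `get_supported_cultures()` (the change this record's gold patch makes), "de-de" is found directly and the German "Ja"/"Nein" strings are used.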
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libraries/botbuilder-dialogs/botbuilder/dialogs/prompts/prompt_culture_models.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from typing import List
5
6 from recognizers_text import Culture
7
8
9 class PromptCultureModel:
10 """
11 Culture model used in Choice and Confirm Prompts.
12 """
13
14 def __init__(
15 self,
16 locale: str,
17 separator: str,
18 inline_or: str,
19 inline_or_more: str,
20 yes_in_language: str,
21 no_in_language: str,
22 ):
23 """
24
25 :param locale: Culture Model's Locale. Example: "en-US".
26 :param separator: Culture Model's Inline Separator. Example: ", ".
27 :param inline_or: Culture Model's Inline Or. Example: " or ".
28 :param inline_or_more Culture Model's Inline Or More. Example: ", or ".
29 :param yes_in_language: Equivalent of "Yes" in Culture Model's Language. Example: "Yes".
30 :param no_in_language: Equivalent of "No" in Culture Model's Language. Example: "No".
31 """
32 self.locale = locale
33 self.separator = separator
34 self.inline_or = inline_or
35 self.inline_or_more = inline_or_more
36 self.yes_in_language = yes_in_language
37 self.no_in_language = no_in_language
38
39
40 class PromptCultureModels:
41 """
42 Class container for currently-supported Culture Models in Confirm and Choice Prompt.
43 """
44
45 Chinese = PromptCultureModel(
46 locale=Culture.Chinese,
47 inline_or=" 要么 ",
48 inline_or_more=", 要么 ",
49 separator=", ",
50 no_in_language="不",
51 yes_in_language="是的",
52 )
53
54 Dutch = PromptCultureModel(
55 locale=Culture.Dutch,
56 inline_or=" of ",
57 inline_or_more=", of ",
58 separator=", ",
59 no_in_language="Nee",
60 yes_in_language="Ja",
61 )
62
63 English = PromptCultureModel(
64 locale=Culture.English,
65 inline_or=" or ",
66 inline_or_more=", or ",
67 separator=", ",
68 no_in_language="No",
69 yes_in_language="Yes",
70 )
71
72 French = PromptCultureModel(
73 locale=Culture.French,
74 inline_or=" ou ",
75 inline_or_more=", ou ",
76 separator=", ",
77 no_in_language="Non",
78 yes_in_language="Oui",
79 )
80
81 German = PromptCultureModel(
82 # TODO: Replace with Culture.German after Recognizers-Text package updates.
83 locale="de-de",
84 inline_or=" oder ",
85 inline_or_more=", oder ",
86 separator=", ",
87 no_in_language="Nein",
88 yes_in_language="Ja",
89 )
90
91 Italian = PromptCultureModel(
92 locale=Culture.Italian,
93 inline_or=" o ",
94 inline_or_more=" o ",
95 separator=", ",
96 no_in_language="No",
97 yes_in_language="Si",
98 )
99
100 Japanese = PromptCultureModel(
101 locale=Culture.Japanese,
102 inline_or=" または ",
103 inline_or_more="、 または ",
104 separator="、 ",
105 no_in_language="いいえ",
106 yes_in_language="はい",
107 )
108
109 Korean = PromptCultureModel(
110 locale=Culture.Korean,
111 inline_or=" 또는 ",
112 inline_or_more=" 또는 ",
113 separator=", ",
114 no_in_language="아니",
115 yes_in_language="예",
116 )
117
118 Portuguese = PromptCultureModel(
119 locale=Culture.Portuguese,
120 inline_or=" ou ",
121 inline_or_more=", ou ",
122 separator=", ",
123 no_in_language="Não",
124 yes_in_language="Sim",
125 )
126
127 Spanish = PromptCultureModel(
128 locale=Culture.Spanish,
129 inline_or=" o ",
130 inline_or_more=", o ",
131 separator=", ",
132 no_in_language="No",
133 yes_in_language="Sí",
134 )
135
136 Turkish = PromptCultureModel(
137 locale=Culture.Turkish,
138 inline_or=" veya ",
139 inline_or_more=" veya ",
140 separator=", ",
141 no_in_language="Hayır",
142 yes_in_language="Evet",
143 )
144
145 @classmethod
146 def map_to_nearest_language(cls, culture_code: str) -> str:
147 """
148 Normalize various potential locale strings to a standard.
149 :param culture_code: Represents locale. Examples: "en-US, en-us, EN".
150 :return: Normalized locale.
151 :rtype: str
152
153 .. remarks::
154 In our other SDKs, this method is a copy/paste of the ones from the Recognizers-Text library.
155 However, that doesn't exist in Python.
156 """
157 if culture_code:
158 culture_code = culture_code.lower()
159 supported_culture_codes = cls._get_supported_locales()
160
161 if culture_code not in supported_culture_codes:
162 culture_prefix = culture_code.split("-")[0]
163
164 for supported_culture_code in supported_culture_codes:
165 if supported_culture_code.startswith(culture_prefix):
166 culture_code = supported_culture_code
167
168 return culture_code
169
170 @classmethod
171 def get_supported_cultures(cls) -> List[PromptCultureModel]:
172 """
173 Gets a list of the supported culture models.
174 """
175 return [
176 cls.Chinese,
177 cls.Dutch,
178 cls.English,
179 cls.French,
180 cls.Italian,
181 cls.Japanese,
182 cls.Korean,
183 cls.Portuguese,
184 cls.Spanish,
185 cls.Turkish,
186 ]
187
188 @classmethod
189 def _get_supported_locales(cls) -> List[str]:
190 return [c.locale for c in cls.get_supported_cultures()]
191
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libraries/botbuilder-dialogs/botbuilder/dialogs/prompts/prompt_culture_models.py b/libraries/botbuilder-dialogs/botbuilder/dialogs/prompts/prompt_culture_models.py
--- a/libraries/botbuilder-dialogs/botbuilder/dialogs/prompts/prompt_culture_models.py
+++ b/libraries/botbuilder-dialogs/botbuilder/dialogs/prompts/prompt_culture_models.py
@@ -174,6 +174,7 @@
"""
return [
cls.Chinese,
+ cls.German,
cls.Dutch,
cls.English,
cls.French,
| {"golden_diff": "diff --git a/libraries/botbuilder-dialogs/botbuilder/dialogs/prompts/prompt_culture_models.py b/libraries/botbuilder-dialogs/botbuilder/dialogs/prompts/prompt_culture_models.py\n--- a/libraries/botbuilder-dialogs/botbuilder/dialogs/prompts/prompt_culture_models.py\n+++ b/libraries/botbuilder-dialogs/botbuilder/dialogs/prompts/prompt_culture_models.py\n@@ -174,6 +174,7 @@\n \"\"\"\n return [\n cls.Chinese,\n+ cls.German,\n cls.Dutch,\n cls.English,\n cls.French,\n", "issue": "German language is not appropiate used when using Confirmprompts\n### The Issue\r\nI am building a chatbot for german users. I am sending the local \"de-de\" as user, and can confirm this actual arrives the bot. When i want to use Confirmprompts the bot returns Yes and No and not \"Ja\" \"Nein\". \r\n### The Solution\r\nAfter a lot of digging, I found the underlying cause and a fix. The culture model does not actually recognices German (de-de) as supported language, and thus switches to the default (english). But in the prompt_culture_models.py German actualy exists and ther is a todo \"# TODO: Replace with Culture.German after Recognizers-Text package updates.\" Which I looked up and the Recognizers-Text package sis already updated :) . Still this is not the real issue. \r\n\r\nThe reason is that german is not listed in the supported cultures function. I simply added it and every thing works fine. \r\n\r\n` @classmethod\r\n def get_supported_cultures(cls) -> List[PromptCultureModel]:\r\n \"\"\"\r\n Gets a list of the supported culture models.\r\n \"\"\"\r\n return [\r\n cls.Chinese,\r\n cls.German,\r\n cls.Dutch,\r\n cls.English,\r\n cls.French,\r\n cls.Italian,\r\n cls.Japanese,\r\n cls.Korean,\r\n cls.Portuguese,\r\n cls.Spanish,\r\n cls.Turkish,\r\n ]`\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nfrom typing import List\n\nfrom recognizers_text import Culture\n\n\nclass PromptCultureModel:\n \"\"\"\n Culture model used in Choice and Confirm Prompts.\n \"\"\"\n\n def __init__(\n self,\n locale: str,\n separator: str,\n inline_or: str,\n inline_or_more: str,\n yes_in_language: str,\n no_in_language: str,\n ):\n \"\"\"\n\n :param locale: Culture Model's Locale. Example: \"en-US\".\n :param separator: Culture Model's Inline Separator. Example: \", \".\n :param inline_or: Culture Model's Inline Or. Example: \" or \".\n :param inline_or_more Culture Model's Inline Or More. Example: \", or \".\n :param yes_in_language: Equivalent of \"Yes\" in Culture Model's Language. Example: \"Yes\".\n :param no_in_language: Equivalent of \"No\" in Culture Model's Language. 
Example: \"No\".\n \"\"\"\n self.locale = locale\n self.separator = separator\n self.inline_or = inline_or\n self.inline_or_more = inline_or_more\n self.yes_in_language = yes_in_language\n self.no_in_language = no_in_language\n\n\nclass PromptCultureModels:\n \"\"\"\n Class container for currently-supported Culture Models in Confirm and Choice Prompt.\n \"\"\"\n\n Chinese = PromptCultureModel(\n locale=Culture.Chinese,\n inline_or=\" \u8981\u4e48 \",\n inline_or_more=\"\uff0c \u8981\u4e48 \",\n separator=\"\uff0c \",\n no_in_language=\"\u4e0d\",\n yes_in_language=\"\u662f\u7684\",\n )\n\n Dutch = PromptCultureModel(\n locale=Culture.Dutch,\n inline_or=\" of \",\n inline_or_more=\", of \",\n separator=\", \",\n no_in_language=\"Nee\",\n yes_in_language=\"Ja\",\n )\n\n English = PromptCultureModel(\n locale=Culture.English,\n inline_or=\" or \",\n inline_or_more=\", or \",\n separator=\", \",\n no_in_language=\"No\",\n yes_in_language=\"Yes\",\n )\n\n French = PromptCultureModel(\n locale=Culture.French,\n inline_or=\" ou \",\n inline_or_more=\", ou \",\n separator=\", \",\n no_in_language=\"Non\",\n yes_in_language=\"Oui\",\n )\n\n German = PromptCultureModel(\n # TODO: Replace with Culture.German after Recognizers-Text package updates.\n locale=\"de-de\",\n inline_or=\" oder \",\n inline_or_more=\", oder \",\n separator=\", \",\n no_in_language=\"Nein\",\n yes_in_language=\"Ja\",\n )\n\n Italian = PromptCultureModel(\n locale=Culture.Italian,\n inline_or=\" o \",\n inline_or_more=\" o \",\n separator=\", \",\n no_in_language=\"No\",\n yes_in_language=\"Si\",\n )\n\n Japanese = PromptCultureModel(\n locale=Culture.Japanese,\n inline_or=\" \u307e\u305f\u306f \",\n inline_or_more=\"\u3001 \u307e\u305f\u306f \",\n separator=\"\u3001 \",\n no_in_language=\"\u3044\u3044\u3048\",\n yes_in_language=\"\u306f\u3044\",\n )\n\n Korean = PromptCultureModel(\n locale=Culture.Korean,\n inline_or=\" \ub610\ub294 \",\n inline_or_more=\" \ub610\ub294 \",\n separator=\", \",\n no_in_language=\"\uc544\ub2c8\",\n yes_in_language=\"\uc608\",\n )\n\n Portuguese = PromptCultureModel(\n locale=Culture.Portuguese,\n inline_or=\" ou \",\n inline_or_more=\", ou \",\n separator=\", \",\n no_in_language=\"N\u00e3o\",\n yes_in_language=\"Sim\",\n )\n\n Spanish = PromptCultureModel(\n locale=Culture.Spanish,\n inline_or=\" o \",\n inline_or_more=\", o \",\n separator=\", \",\n no_in_language=\"No\",\n yes_in_language=\"S\u00ed\",\n )\n\n Turkish = PromptCultureModel(\n locale=Culture.Turkish,\n inline_or=\" veya \",\n inline_or_more=\" veya \",\n separator=\", \",\n no_in_language=\"Hay\u0131r\",\n yes_in_language=\"Evet\",\n )\n\n @classmethod\n def map_to_nearest_language(cls, culture_code: str) -> str:\n \"\"\"\n Normalize various potential locale strings to a standard.\n :param culture_code: Represents locale. Examples: \"en-US, en-us, EN\".\n :return: Normalized locale.\n :rtype: str\n\n .. 
remarks::\n In our other SDKs, this method is a copy/paste of the ones from the Recognizers-Text library.\n However, that doesn't exist in Python.\n \"\"\"\n if culture_code:\n culture_code = culture_code.lower()\n supported_culture_codes = cls._get_supported_locales()\n\n if culture_code not in supported_culture_codes:\n culture_prefix = culture_code.split(\"-\")[0]\n\n for supported_culture_code in supported_culture_codes:\n if supported_culture_code.startswith(culture_prefix):\n culture_code = supported_culture_code\n\n return culture_code\n\n @classmethod\n def get_supported_cultures(cls) -> List[PromptCultureModel]:\n \"\"\"\n Gets a list of the supported culture models.\n \"\"\"\n return [\n cls.Chinese,\n cls.Dutch,\n cls.English,\n cls.French,\n cls.Italian,\n cls.Japanese,\n cls.Korean,\n cls.Portuguese,\n cls.Spanish,\n cls.Turkish,\n ]\n\n @classmethod\n def _get_supported_locales(cls) -> List[str]:\n return [c.locale for c in cls.get_supported_cultures()]\n", "path": "libraries/botbuilder-dialogs/botbuilder/dialogs/prompts/prompt_culture_models.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nfrom typing import List\n\nfrom recognizers_text import Culture\n\n\nclass PromptCultureModel:\n \"\"\"\n Culture model used in Choice and Confirm Prompts.\n \"\"\"\n\n def __init__(\n self,\n locale: str,\n separator: str,\n inline_or: str,\n inline_or_more: str,\n yes_in_language: str,\n no_in_language: str,\n ):\n \"\"\"\n\n :param locale: Culture Model's Locale. Example: \"en-US\".\n :param separator: Culture Model's Inline Separator. Example: \", \".\n :param inline_or: Culture Model's Inline Or. Example: \" or \".\n :param inline_or_more Culture Model's Inline Or More. Example: \", or \".\n :param yes_in_language: Equivalent of \"Yes\" in Culture Model's Language. Example: \"Yes\".\n :param no_in_language: Equivalent of \"No\" in Culture Model's Language. 
Example: \"No\".\n \"\"\"\n self.locale = locale\n self.separator = separator\n self.inline_or = inline_or\n self.inline_or_more = inline_or_more\n self.yes_in_language = yes_in_language\n self.no_in_language = no_in_language\n\n\nclass PromptCultureModels:\n \"\"\"\n Class container for currently-supported Culture Models in Confirm and Choice Prompt.\n \"\"\"\n\n Chinese = PromptCultureModel(\n locale=Culture.Chinese,\n inline_or=\" \u8981\u4e48 \",\n inline_or_more=\"\uff0c \u8981\u4e48 \",\n separator=\"\uff0c \",\n no_in_language=\"\u4e0d\",\n yes_in_language=\"\u662f\u7684\",\n )\n\n Dutch = PromptCultureModel(\n locale=Culture.Dutch,\n inline_or=\" of \",\n inline_or_more=\", of \",\n separator=\", \",\n no_in_language=\"Nee\",\n yes_in_language=\"Ja\",\n )\n\n English = PromptCultureModel(\n locale=Culture.English,\n inline_or=\" or \",\n inline_or_more=\", or \",\n separator=\", \",\n no_in_language=\"No\",\n yes_in_language=\"Yes\",\n )\n\n French = PromptCultureModel(\n locale=Culture.French,\n inline_or=\" ou \",\n inline_or_more=\", ou \",\n separator=\", \",\n no_in_language=\"Non\",\n yes_in_language=\"Oui\",\n )\n\n German = PromptCultureModel(\n # TODO: Replace with Culture.German after Recognizers-Text package updates.\n locale=\"de-de\",\n inline_or=\" oder \",\n inline_or_more=\", oder \",\n separator=\", \",\n no_in_language=\"Nein\",\n yes_in_language=\"Ja\",\n )\n\n Italian = PromptCultureModel(\n locale=Culture.Italian,\n inline_or=\" o \",\n inline_or_more=\" o \",\n separator=\", \",\n no_in_language=\"No\",\n yes_in_language=\"Si\",\n )\n\n Japanese = PromptCultureModel(\n locale=Culture.Japanese,\n inline_or=\" \u307e\u305f\u306f \",\n inline_or_more=\"\u3001 \u307e\u305f\u306f \",\n separator=\"\u3001 \",\n no_in_language=\"\u3044\u3044\u3048\",\n yes_in_language=\"\u306f\u3044\",\n )\n\n Korean = PromptCultureModel(\n locale=Culture.Korean,\n inline_or=\" \ub610\ub294 \",\n inline_or_more=\" \ub610\ub294 \",\n separator=\", \",\n no_in_language=\"\uc544\ub2c8\",\n yes_in_language=\"\uc608\",\n )\n\n Portuguese = PromptCultureModel(\n locale=Culture.Portuguese,\n inline_or=\" ou \",\n inline_or_more=\", ou \",\n separator=\", \",\n no_in_language=\"N\u00e3o\",\n yes_in_language=\"Sim\",\n )\n\n Spanish = PromptCultureModel(\n locale=Culture.Spanish,\n inline_or=\" o \",\n inline_or_more=\", o \",\n separator=\", \",\n no_in_language=\"No\",\n yes_in_language=\"S\u00ed\",\n )\n\n Turkish = PromptCultureModel(\n locale=Culture.Turkish,\n inline_or=\" veya \",\n inline_or_more=\" veya \",\n separator=\", \",\n no_in_language=\"Hay\u0131r\",\n yes_in_language=\"Evet\",\n )\n\n @classmethod\n def map_to_nearest_language(cls, culture_code: str) -> str:\n \"\"\"\n Normalize various potential locale strings to a standard.\n :param culture_code: Represents locale. Examples: \"en-US, en-us, EN\".\n :return: Normalized locale.\n :rtype: str\n\n .. 
remarks::\n In our other SDKs, this method is a copy/paste of the ones from the Recognizers-Text library.\n However, that doesn't exist in Python.\n \"\"\"\n if culture_code:\n culture_code = culture_code.lower()\n supported_culture_codes = cls._get_supported_locales()\n\n if culture_code not in supported_culture_codes:\n culture_prefix = culture_code.split(\"-\")[0]\n\n for supported_culture_code in supported_culture_codes:\n if supported_culture_code.startswith(culture_prefix):\n culture_code = supported_culture_code\n\n return culture_code\n\n @classmethod\n def get_supported_cultures(cls) -> List[PromptCultureModel]:\n \"\"\"\n Gets a list of the supported culture models.\n \"\"\"\n return [\n cls.Chinese,\n cls.German,\n cls.Dutch,\n cls.English,\n cls.French,\n cls.Italian,\n cls.Japanese,\n cls.Korean,\n cls.Portuguese,\n cls.Spanish,\n cls.Turkish,\n ]\n\n @classmethod\n def _get_supported_locales(cls) -> List[str]:\n return [c.locale for c in cls.get_supported_cultures()]\n", "path": "libraries/botbuilder-dialogs/botbuilder/dialogs/prompts/prompt_culture_models.py"}]} | 2,229 | 137 |
gh_patches_debug_13137 | rasdani/github-patches | git_diff | genialis__resolwe-313 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
404 returned when deleting entity with `delete_content` set to `true`
Probably because the sample gets deleted when its last data object is deleted.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `resolwe/flow/views/entity.py`
Content:
```
1 """Entity viewset."""
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from distutils.util import strtobool # pylint: disable=import-error,no-name-in-module
5
6 from django.db.models import Max
7 from django.db.models.query import Prefetch
8
9 from rest_framework import exceptions, status
10 from rest_framework.decorators import detail_route
11 from rest_framework.response import Response
12
13 from resolwe.flow.filters import EntityFilter
14 from resolwe.flow.models import Collection, Data, Entity
15 from resolwe.flow.serializers import EntitySerializer
16 from resolwe.permissions.utils import remove_permission, update_permission
17
18 from .collection import CollectionViewSet
19
20
21 class EntityViewSet(CollectionViewSet):
22 """API view for entities."""
23
24 filter_class = EntityFilter
25 serializer_class = EntitySerializer
26
27 queryset = Entity.objects.prefetch_related(
28 Prefetch('data', queryset=Data.objects.all().order_by('id')),
29 'descriptor_schema',
30 'contributor'
31 ).annotate(
32 latest_date=Max('data__modified')
33 ).order_by('-latest_date')
34
35 def _check_collection_permissions(self, collection_id, user):
36 """Check that collection exists and user has `add` permission."""
37 collection_query = Collection.objects.filter(pk=collection_id)
38 if not collection_query.exists():
39 raise exceptions.ValidationError('Collection id does not exist')
40
41 collection = collection_query.first()
42 if not user.has_perm('add_collection', obj=collection):
43 if user.is_authenticated():
44 raise exceptions.PermissionDenied()
45 else:
46 raise exceptions.NotFound()
47
48 def set_content_permissions(self, user, obj, payload):
49 """Apply permissions to data objects in ``Entity``."""
50 # Data doesn't have "ADD" permission, so it has to be removed
51 payload = remove_permission(payload, 'add')
52
53 for data in obj.data.all():
54 if user.has_perm('share_data', data):
55 update_permission(data, payload)
56
57 def destroy(self, request, *args, **kwargs):
58 """Destroy a model instance.
59
60 If ``delete_content`` flag is set in query parameters, also all
61 Data objects contained in entity will be deleted.
62 """
63 obj = self.get_object()
64 user = request.user
65
66 if strtobool(request.query_params.get('delete_content', 'false')):
67 for data in obj.data.all():
68 if user.has_perm('edit_data', data):
69 data.delete()
70
71 # NOTE: Collection's ``destroy`` method should be skiped, so we
72 # intentionaly call it's parent.
73 return super(CollectionViewSet, self).destroy( # pylint: disable=no-member,bad-super-call
74 request, *args, **kwargs
75 )
76
77 @detail_route(methods=[u'post'])
78 def add_to_collection(self, request, pk=None):
79 """Add Entity to a collection."""
80 entity = self.get_object()
81
82 if 'ids' not in request.data:
83 return Response({"error": "`ids` parameter is required"}, status=status.HTTP_400_BAD_REQUEST)
84
85 for collection_id in request.data['ids']:
86 self._check_collection_permissions(collection_id, request.user)
87
88 for collection_id in request.data['ids']:
89 entity.collections.add(collection_id)
90
91 collection = Collection.objects.get(pk=collection_id)
92 for data in entity.data.all():
93 collection.data.add(data)
94
95 return Response()
96
97 @detail_route(methods=[u'post'])
98 def remove_from_collection(self, request, pk=None):
99 """Remove Entity from a collection."""
100 entity = self.get_object()
101
102 if 'ids' not in request.data:
103 return Response({"error": "`ids` parameter is required"}, status=status.HTTP_400_BAD_REQUEST)
104
105 for collection_id in request.data['ids']:
106 self._check_collection_permissions(collection_id, request.user)
107
108 for collection_id in request.data['ids']:
109 entity.collections.remove(collection_id)
110
111 collection = Collection.objects.get(pk=collection_id)
112 for data in entity.data.all():
113 collection.data.remove(data)
114
115 return Response()
116
117 @detail_route(methods=[u'post'])
118 def add_data(self, request, pk=None):
119 """Add data to Entity and it's collection."""
120 # add data to entity
121 resp = super(EntityViewSet, self).add_data(request, pk)
122
123 # add data to collections in which entity is
124 entity = self.get_object()
125 for collection in entity.collections.all():
126 collection.data.add(*request.data['ids'])
127
128 return resp
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/resolwe/flow/views/entity.py b/resolwe/flow/views/entity.py
--- a/resolwe/flow/views/entity.py
+++ b/resolwe/flow/views/entity.py
@@ -68,6 +68,11 @@
if user.has_perm('edit_data', data):
data.delete()
+ # If all data objects in an entity are removed, the entity may
+ # have already been removed, so there is no need to call destroy.
+ if not Entity.objects.filter(pk=obj.pk).exists():
+ return Response(status=status.HTTP_204_NO_CONTENT)
+
# NOTE: Collection's ``destroy`` method should be skiped, so we
# intentionaly call it's parent.
return super(CollectionViewSet, self).destroy( # pylint: disable=no-member,bad-super-call
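For readability, here is the patched `destroy` in condensed form. Permission bookkeeping and the pylint pragma from the real module are trimmed, and the existence check is written at method level for simplicity; the diff above is the authoritative change.

```python
from distutils.util import strtobool

from rest_framework import status
from rest_framework.response import Response

from resolwe.flow.models import Entity

from .collection import CollectionViewSet


class EntityViewSet(CollectionViewSet):
    def destroy(self, request, *args, **kwargs):
        obj = self.get_object()
        if strtobool(request.query_params.get('delete_content', 'false')):
            for data in obj.data.all():
                if request.user.has_perm('edit_data', data):
                    data.delete()
        # Deleting the last Data object can cascade away the Entity itself, in which
        # case the parent ``destroy`` would re-resolve the object and answer with 404;
        # report success with 204 directly instead.
        if not Entity.objects.filter(pk=obj.pk).exists():
            return Response(status=status.HTTP_204_NO_CONTENT)
        return super(CollectionViewSet, self).destroy(request, *args, **kwargs)
```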
| {"golden_diff": "diff --git a/resolwe/flow/views/entity.py b/resolwe/flow/views/entity.py\n--- a/resolwe/flow/views/entity.py\n+++ b/resolwe/flow/views/entity.py\n@@ -68,6 +68,11 @@\n if user.has_perm('edit_data', data):\n data.delete()\n \n+ # If all data objects in an entity are removed, the entity may\n+ # have already been removed, so there is no need to call destroy.\n+ if not Entity.objects.filter(pk=obj.pk).exists():\n+ return Response(status=status.HTTP_204_NO_CONTENT)\n+\n # NOTE: Collection's ``destroy`` method should be skiped, so we\n # intentionaly call it's parent.\n return super(CollectionViewSet, self).destroy( # pylint: disable=no-member,bad-super-call\n", "issue": "404 returned when deleting entity with `delete_content` set to `true`\nProbably because sample gets deleted when the last data object is deleted.\n", "before_files": [{"content": "\"\"\"Entity viewset.\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom distutils.util import strtobool # pylint: disable=import-error,no-name-in-module\n\nfrom django.db.models import Max\nfrom django.db.models.query import Prefetch\n\nfrom rest_framework import exceptions, status\nfrom rest_framework.decorators import detail_route\nfrom rest_framework.response import Response\n\nfrom resolwe.flow.filters import EntityFilter\nfrom resolwe.flow.models import Collection, Data, Entity\nfrom resolwe.flow.serializers import EntitySerializer\nfrom resolwe.permissions.utils import remove_permission, update_permission\n\nfrom .collection import CollectionViewSet\n\n\nclass EntityViewSet(CollectionViewSet):\n \"\"\"API view for entities.\"\"\"\n\n filter_class = EntityFilter\n serializer_class = EntitySerializer\n\n queryset = Entity.objects.prefetch_related(\n Prefetch('data', queryset=Data.objects.all().order_by('id')),\n 'descriptor_schema',\n 'contributor'\n ).annotate(\n latest_date=Max('data__modified')\n ).order_by('-latest_date')\n\n def _check_collection_permissions(self, collection_id, user):\n \"\"\"Check that collection exists and user has `add` permission.\"\"\"\n collection_query = Collection.objects.filter(pk=collection_id)\n if not collection_query.exists():\n raise exceptions.ValidationError('Collection id does not exist')\n\n collection = collection_query.first()\n if not user.has_perm('add_collection', obj=collection):\n if user.is_authenticated():\n raise exceptions.PermissionDenied()\n else:\n raise exceptions.NotFound()\n\n def set_content_permissions(self, user, obj, payload):\n \"\"\"Apply permissions to data objects in ``Entity``.\"\"\"\n # Data doesn't have \"ADD\" permission, so it has to be removed\n payload = remove_permission(payload, 'add')\n\n for data in obj.data.all():\n if user.has_perm('share_data', data):\n update_permission(data, payload)\n\n def destroy(self, request, *args, **kwargs):\n \"\"\"Destroy a model instance.\n\n If ``delete_content`` flag is set in query parameters, also all\n Data objects contained in entity will be deleted.\n \"\"\"\n obj = self.get_object()\n user = request.user\n\n if strtobool(request.query_params.get('delete_content', 'false')):\n for data in obj.data.all():\n if user.has_perm('edit_data', data):\n data.delete()\n\n # NOTE: Collection's ``destroy`` method should be skiped, so we\n # intentionaly call it's parent.\n return super(CollectionViewSet, self).destroy( # pylint: disable=no-member,bad-super-call\n request, *args, **kwargs\n )\n\n @detail_route(methods=[u'post'])\n def add_to_collection(self, request, pk=None):\n 
\"\"\"Add Entity to a collection.\"\"\"\n entity = self.get_object()\n\n if 'ids' not in request.data:\n return Response({\"error\": \"`ids` parameter is required\"}, status=status.HTTP_400_BAD_REQUEST)\n\n for collection_id in request.data['ids']:\n self._check_collection_permissions(collection_id, request.user)\n\n for collection_id in request.data['ids']:\n entity.collections.add(collection_id)\n\n collection = Collection.objects.get(pk=collection_id)\n for data in entity.data.all():\n collection.data.add(data)\n\n return Response()\n\n @detail_route(methods=[u'post'])\n def remove_from_collection(self, request, pk=None):\n \"\"\"Remove Entity from a collection.\"\"\"\n entity = self.get_object()\n\n if 'ids' not in request.data:\n return Response({\"error\": \"`ids` parameter is required\"}, status=status.HTTP_400_BAD_REQUEST)\n\n for collection_id in request.data['ids']:\n self._check_collection_permissions(collection_id, request.user)\n\n for collection_id in request.data['ids']:\n entity.collections.remove(collection_id)\n\n collection = Collection.objects.get(pk=collection_id)\n for data in entity.data.all():\n collection.data.remove(data)\n\n return Response()\n\n @detail_route(methods=[u'post'])\n def add_data(self, request, pk=None):\n \"\"\"Add data to Entity and it's collection.\"\"\"\n # add data to entity\n resp = super(EntityViewSet, self).add_data(request, pk)\n\n # add data to collections in which entity is\n entity = self.get_object()\n for collection in entity.collections.all():\n collection.data.add(*request.data['ids'])\n\n return resp\n", "path": "resolwe/flow/views/entity.py"}], "after_files": [{"content": "\"\"\"Entity viewset.\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom distutils.util import strtobool # pylint: disable=import-error,no-name-in-module\n\nfrom django.db.models import Max\nfrom django.db.models.query import Prefetch\n\nfrom rest_framework import exceptions, status\nfrom rest_framework.decorators import detail_route\nfrom rest_framework.response import Response\n\nfrom resolwe.flow.filters import EntityFilter\nfrom resolwe.flow.models import Collection, Data, Entity\nfrom resolwe.flow.serializers import EntitySerializer\nfrom resolwe.permissions.utils import remove_permission, update_permission\n\nfrom .collection import CollectionViewSet\n\n\nclass EntityViewSet(CollectionViewSet):\n \"\"\"API view for entities.\"\"\"\n\n filter_class = EntityFilter\n serializer_class = EntitySerializer\n\n queryset = Entity.objects.prefetch_related(\n Prefetch('data', queryset=Data.objects.all().order_by('id')),\n 'descriptor_schema',\n 'contributor'\n ).annotate(\n latest_date=Max('data__modified')\n ).order_by('-latest_date')\n\n def _check_collection_permissions(self, collection_id, user):\n \"\"\"Check that collection exists and user has `add` permission.\"\"\"\n collection_query = Collection.objects.filter(pk=collection_id)\n if not collection_query.exists():\n raise exceptions.ValidationError('Collection id does not exist')\n\n collection = collection_query.first()\n if not user.has_perm('add_collection', obj=collection):\n if user.is_authenticated():\n raise exceptions.PermissionDenied()\n else:\n raise exceptions.NotFound()\n\n def set_content_permissions(self, user, obj, payload):\n \"\"\"Apply permissions to data objects in ``Entity``.\"\"\"\n # Data doesn't have \"ADD\" permission, so it has to be removed\n payload = remove_permission(payload, 'add')\n\n for data in obj.data.all():\n if 
user.has_perm('share_data', data):\n update_permission(data, payload)\n\n def destroy(self, request, *args, **kwargs):\n \"\"\"Destroy a model instance.\n\n If ``delete_content`` flag is set in query parameters, also all\n Data objects contained in entity will be deleted.\n \"\"\"\n obj = self.get_object()\n user = request.user\n\n if strtobool(request.query_params.get('delete_content', 'false')):\n for data in obj.data.all():\n if user.has_perm('edit_data', data):\n data.delete()\n\n # If all data objects in an entity are removed, the entity may\n # have already been removed, so there is no need to call destroy.\n if not Entity.objects.filter(pk=obj.pk).exists():\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n # NOTE: Collection's ``destroy`` method should be skiped, so we\n # intentionaly call it's parent.\n return super(CollectionViewSet, self).destroy( # pylint: disable=no-member,bad-super-call\n request, *args, **kwargs\n )\n\n @detail_route(methods=[u'post'])\n def add_to_collection(self, request, pk=None):\n \"\"\"Add Entity to a collection.\"\"\"\n entity = self.get_object()\n\n if 'ids' not in request.data:\n return Response({\"error\": \"`ids` parameter is required\"}, status=status.HTTP_400_BAD_REQUEST)\n\n for collection_id in request.data['ids']:\n self._check_collection_permissions(collection_id, request.user)\n\n for collection_id in request.data['ids']:\n entity.collections.add(collection_id)\n\n collection = Collection.objects.get(pk=collection_id)\n for data in entity.data.all():\n collection.data.add(data)\n\n return Response()\n\n @detail_route(methods=[u'post'])\n def remove_from_collection(self, request, pk=None):\n \"\"\"Remove Entity from a collection.\"\"\"\n entity = self.get_object()\n\n if 'ids' not in request.data:\n return Response({\"error\": \"`ids` parameter is required\"}, status=status.HTTP_400_BAD_REQUEST)\n\n for collection_id in request.data['ids']:\n self._check_collection_permissions(collection_id, request.user)\n\n for collection_id in request.data['ids']:\n entity.collections.remove(collection_id)\n\n collection = Collection.objects.get(pk=collection_id)\n for data in entity.data.all():\n collection.data.remove(data)\n\n return Response()\n\n @detail_route(methods=[u'post'])\n def add_data(self, request, pk=None):\n \"\"\"Add data to Entity and it's collection.\"\"\"\n # add data to entity\n resp = super(EntityViewSet, self).add_data(request, pk)\n\n # add data to collections in which entity is\n entity = self.get_object()\n for collection in entity.collections.all():\n collection.data.add(*request.data['ids'])\n\n return resp\n", "path": "resolwe/flow/views/entity.py"}]} | 1,531 | 185 |
gh_patches_debug_41642 | rasdani/github-patches | git_diff | pytorch__vision-3656 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Avoid pandas dependency for CelebA dataset
The CelebA dataset has a runtime dependency on `pandas`. Since all we need is `pandas.read_csv`, it would be interesting to see if we can replace `pandas.read_csv` with just the built-in `csv` module. The mergeability of the PR would depend on how ugly / complex the code becomes... :)
cc @pmeier
--- END ISSUE ---
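For a feel of what a `csv`-only replacement could look like, here is a rough sketch. It is an assumption about one possible approach rather than the patch that was eventually merged; the helper name and the namedtuple layout are invented for illustration. The CelebA annotation files are whitespace-separated, which is why the reader splits on single spaces with `skipinitialspace=True`.

```python
import csv
from collections import namedtuple
from typing import Optional

CSV = namedtuple("CSV", ["header", "index", "data"])  # hypothetical container


def load_whitespace_csv(path: str, header_row: Optional[int] = None) -> CSV:
    """Rough stand-in for pandas.read_csv(path, delim_whitespace=True, index_col=0)."""
    with open(path) as handle:
        rows = [row for row in csv.reader(handle, delimiter=" ", skipinitialspace=True) if row]

    header = []
    if header_row is not None:
        header = rows[header_row]
        rows = rows[header_row + 1:]

    index = [row[0] for row in rows]
    data = [row[1:] for row in rows]
    return CSV(header=header, index=index, data=data)
```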
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchvision/datasets/celeba.py`
Content:
```
1 from functools import partial
2 import torch
3 import os
4 import PIL
5 from typing import Any, Callable, List, Optional, Union, Tuple
6 from .vision import VisionDataset
7 from .utils import download_file_from_google_drive, check_integrity, verify_str_arg
8
9
10 class CelebA(VisionDataset):
11 """`Large-scale CelebFaces Attributes (CelebA) Dataset <http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html>`_ Dataset.
12
13 Args:
14 root (string): Root directory where images are downloaded to.
15 split (string): One of {'train', 'valid', 'test', 'all'}.
16 Accordingly dataset is selected.
17 target_type (string or list, optional): Type of target to use, ``attr``, ``identity``, ``bbox``,
18 or ``landmarks``. Can also be a list to output a tuple with all specified target types.
19 The targets represent:
20
21 - ``attr`` (np.array shape=(40,) dtype=int): binary (0, 1) labels for attributes
22 - ``identity`` (int): label for each person (data points with the same identity are the same person)
23 - ``bbox`` (np.array shape=(4,) dtype=int): bounding box (x, y, width, height)
24 - ``landmarks`` (np.array shape=(10,) dtype=int): landmark points (lefteye_x, lefteye_y, righteye_x,
25 righteye_y, nose_x, nose_y, leftmouth_x, leftmouth_y, rightmouth_x, rightmouth_y)
26
27 Defaults to ``attr``. If empty, ``None`` will be returned as target.
28
29 transform (callable, optional): A function/transform that takes in an PIL image
30 and returns a transformed version. E.g, ``transforms.ToTensor``
31 target_transform (callable, optional): A function/transform that takes in the
32 target and transforms it.
33 download (bool, optional): If true, downloads the dataset from the internet and
34 puts it in root directory. If dataset is already downloaded, it is not
35 downloaded again.
36 """
37
38 base_folder = "celeba"
39 # There currently does not appear to be a easy way to extract 7z in python (without introducing additional
40 # dependencies). The "in-the-wild" (not aligned+cropped) images are only in 7z, so they are not available
41 # right now.
42 file_list = [
43 # File ID MD5 Hash Filename
44 ("0B7EVK8r0v71pZjFTYXZWM3FlRnM", "00d2c5bc6d35e252742224ab0c1e8fcb", "img_align_celeba.zip"),
45 # ("0B7EVK8r0v71pbWNEUjJKdDQ3dGc","b6cd7e93bc7a96c2dc33f819aa3ac651", "img_align_celeba_png.7z"),
46 # ("0B7EVK8r0v71peklHb0pGdDl6R28", "b6cd7e93bc7a96c2dc33f819aa3ac651", "img_celeba.7z"),
47 ("0B7EVK8r0v71pblRyaVFSWGxPY0U", "75e246fa4810816ffd6ee81facbd244c", "list_attr_celeba.txt"),
48 ("1_ee_0u7vcNLOfNLegJRHmolfH5ICW-XS", "32bd1bd63d3c78cd57e08160ec5ed1e2", "identity_CelebA.txt"),
49 ("0B7EVK8r0v71pbThiMVRxWXZ4dU0", "00566efa6fedff7a56946cd1c10f1c16", "list_bbox_celeba.txt"),
50 ("0B7EVK8r0v71pd0FJY3Blby1HUTQ", "cc24ecafdb5b50baae59b03474781f8c", "list_landmarks_align_celeba.txt"),
51 # ("0B7EVK8r0v71pTzJIdlJWdHczRlU", "063ee6ddb681f96bc9ca28c6febb9d1a", "list_landmarks_celeba.txt"),
52 ("0B7EVK8r0v71pY0NSMzRuSXJEVkk", "d32c9cbf5e040fd4025c592c306e6668", "list_eval_partition.txt"),
53 ]
54
55 def __init__(
56 self,
57 root: str,
58 split: str = "train",
59 target_type: Union[List[str], str] = "attr",
60 transform: Optional[Callable] = None,
61 target_transform: Optional[Callable] = None,
62 download: bool = False,
63 ) -> None:
64 import pandas
65 super(CelebA, self).__init__(root, transform=transform,
66 target_transform=target_transform)
67 self.split = split
68 if isinstance(target_type, list):
69 self.target_type = target_type
70 else:
71 self.target_type = [target_type]
72
73 if not self.target_type and self.target_transform is not None:
74 raise RuntimeError('target_transform is specified but target_type is empty')
75
76 if download:
77 self.download()
78
79 if not self._check_integrity():
80 raise RuntimeError('Dataset not found or corrupted.' +
81 ' You can use download=True to download it')
82
83 split_map = {
84 "train": 0,
85 "valid": 1,
86 "test": 2,
87 "all": None,
88 }
89 split_ = split_map[verify_str_arg(split.lower(), "split",
90 ("train", "valid", "test", "all"))]
91
92 fn = partial(os.path.join, self.root, self.base_folder)
93 splits = pandas.read_csv(fn("list_eval_partition.txt"), delim_whitespace=True, header=None, index_col=0)
94 identity = pandas.read_csv(fn("identity_CelebA.txt"), delim_whitespace=True, header=None, index_col=0)
95 bbox = pandas.read_csv(fn("list_bbox_celeba.txt"), delim_whitespace=True, header=1, index_col=0)
96 landmarks_align = pandas.read_csv(fn("list_landmarks_align_celeba.txt"), delim_whitespace=True, header=1)
97 attr = pandas.read_csv(fn("list_attr_celeba.txt"), delim_whitespace=True, header=1)
98
99 mask = slice(None) if split_ is None else (splits[1] == split_)
100
101 self.filename = splits[mask].index.values
102 self.identity = torch.as_tensor(identity[mask].values)
103 self.bbox = torch.as_tensor(bbox[mask].values)
104 self.landmarks_align = torch.as_tensor(landmarks_align[mask].values)
105 self.attr = torch.as_tensor(attr[mask].values)
106 self.attr = (self.attr + 1) // 2 # map from {-1, 1} to {0, 1}
107 self.attr_names = list(attr.columns)
108
109 def _check_integrity(self) -> bool:
110 for (_, md5, filename) in self.file_list:
111 fpath = os.path.join(self.root, self.base_folder, filename)
112 _, ext = os.path.splitext(filename)
113 # Allow original archive to be deleted (zip and 7z)
114 # Only need the extracted images
115 if ext not in [".zip", ".7z"] and not check_integrity(fpath, md5):
116 return False
117
118 # Should check a hash of the images
119 return os.path.isdir(os.path.join(self.root, self.base_folder, "img_align_celeba"))
120
121 def download(self) -> None:
122 import zipfile
123
124 if self._check_integrity():
125 print('Files already downloaded and verified')
126 return
127
128 for (file_id, md5, filename) in self.file_list:
129 download_file_from_google_drive(file_id, os.path.join(self.root, self.base_folder), filename, md5)
130
131 with zipfile.ZipFile(os.path.join(self.root, self.base_folder, "img_align_celeba.zip"), "r") as f:
132 f.extractall(os.path.join(self.root, self.base_folder))
133
134 def __getitem__(self, index: int) -> Tuple[Any, Any]:
135 X = PIL.Image.open(os.path.join(self.root, self.base_folder, "img_align_celeba", self.filename[index]))
136
137 target: Any = []
138 for t in self.target_type:
139 if t == "attr":
140 target.append(self.attr[index, :])
141 elif t == "identity":
142 target.append(self.identity[index, 0])
143 elif t == "bbox":
144 target.append(self.bbox[index, :])
145 elif t == "landmarks":
146 target.append(self.landmarks_align[index, :])
147 else:
148 # TODO: refactor with utils.verify_str_arg
149 raise ValueError("Target type \"{}\" is not recognized.".format(t))
150
151 if self.transform is not None:
152 X = self.transform(X)
153
154 if target:
155 target = tuple(target) if len(target) > 1 else target[0]
156
157 if self.target_transform is not None:
158 target = self.target_transform(target)
159 else:
160 target = None
161
162 return X, target
163
164 def __len__(self) -> int:
165 return len(self.attr)
166
167 def extra_repr(self) -> str:
168 lines = ["Target type: {target_type}", "Split: {split}"]
169 return '\n'.join(lines).format(**self.__dict__)
170
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchvision/datasets/celeba.py b/torchvision/datasets/celeba.py
--- a/torchvision/datasets/celeba.py
+++ b/torchvision/datasets/celeba.py
@@ -1,3 +1,5 @@
+from collections import namedtuple
+import csv
from functools import partial
import torch
import os
@@ -6,6 +8,8 @@
from .vision import VisionDataset
from .utils import download_file_from_google_drive, check_integrity, verify_str_arg
+CSV = namedtuple("CSV", ["header", "index", "data"])
+
class CelebA(VisionDataset):
"""`Large-scale CelebFaces Attributes (CelebA) Dataset <http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html>`_ Dataset.
@@ -61,7 +65,6 @@
target_transform: Optional[Callable] = None,
download: bool = False,
) -> None:
- import pandas
super(CelebA, self).__init__(root, transform=transform,
target_transform=target_transform)
self.split = split
@@ -88,23 +91,42 @@
}
split_ = split_map[verify_str_arg(split.lower(), "split",
("train", "valid", "test", "all"))]
+ splits = self._load_csv("list_eval_partition.txt")
+ identity = self._load_csv("identity_CelebA.txt")
+ bbox = self._load_csv("list_bbox_celeba.txt", header=1)
+ landmarks_align = self._load_csv("list_landmarks_align_celeba.txt", header=1)
+ attr = self._load_csv("list_attr_celeba.txt", header=1)
+
+ mask = slice(None) if split_ is None else (splits.data == split_).squeeze()
+
+ self.filename = splits.index
+ self.identity = identity.data[mask]
+ self.bbox = bbox.data[mask]
+ self.landmarks_align = landmarks_align.data[mask]
+ self.attr = attr.data[mask]
+ self.attr = (self.attr + 1) // 2 # map from {-1, 1} to {0, 1}
+ self.attr_names = attr.header
+
+ def _load_csv(
+ self,
+ filename: str,
+ header: Optional[int] = None,
+ ) -> CSV:
+ data, indices, headers = [], [], []
fn = partial(os.path.join, self.root, self.base_folder)
- splits = pandas.read_csv(fn("list_eval_partition.txt"), delim_whitespace=True, header=None, index_col=0)
- identity = pandas.read_csv(fn("identity_CelebA.txt"), delim_whitespace=True, header=None, index_col=0)
- bbox = pandas.read_csv(fn("list_bbox_celeba.txt"), delim_whitespace=True, header=1, index_col=0)
- landmarks_align = pandas.read_csv(fn("list_landmarks_align_celeba.txt"), delim_whitespace=True, header=1)
- attr = pandas.read_csv(fn("list_attr_celeba.txt"), delim_whitespace=True, header=1)
-
- mask = slice(None) if split_ is None else (splits[1] == split_)
-
- self.filename = splits[mask].index.values
- self.identity = torch.as_tensor(identity[mask].values)
- self.bbox = torch.as_tensor(bbox[mask].values)
- self.landmarks_align = torch.as_tensor(landmarks_align[mask].values)
- self.attr = torch.as_tensor(attr[mask].values)
- self.attr = (self.attr + 1) // 2 # map from {-1, 1} to {0, 1}
- self.attr_names = list(attr.columns)
+ with open(fn(filename)) as csv_file:
+ data = list(csv.reader(csv_file, delimiter=' ', skipinitialspace=True))
+
+ if header is not None:
+ headers = data[header]
+ data = data[header + 1:]
+
+ indices = [row[0] for row in data]
+ data = [row[1:] for row in data]
+ data_int = [list(map(int, i)) for i in data]
+
+ return CSV(headers, indices, torch.tensor(data_int))
def _check_integrity(self) -> bool:
for (_, md5, filename) in self.file_list:
| {"golden_diff": "diff --git a/torchvision/datasets/celeba.py b/torchvision/datasets/celeba.py\n--- a/torchvision/datasets/celeba.py\n+++ b/torchvision/datasets/celeba.py\n@@ -1,3 +1,5 @@\n+from collections import namedtuple\n+import csv\n from functools import partial\n import torch\n import os\n@@ -6,6 +8,8 @@\n from .vision import VisionDataset\n from .utils import download_file_from_google_drive, check_integrity, verify_str_arg\n \n+CSV = namedtuple(\"CSV\", [\"header\", \"index\", \"data\"])\n+\n \n class CelebA(VisionDataset):\n \"\"\"`Large-scale CelebFaces Attributes (CelebA) Dataset <http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html>`_ Dataset.\n@@ -61,7 +65,6 @@\n target_transform: Optional[Callable] = None,\n download: bool = False,\n ) -> None:\n- import pandas\n super(CelebA, self).__init__(root, transform=transform,\n target_transform=target_transform)\n self.split = split\n@@ -88,23 +91,42 @@\n }\n split_ = split_map[verify_str_arg(split.lower(), \"split\",\n (\"train\", \"valid\", \"test\", \"all\"))]\n+ splits = self._load_csv(\"list_eval_partition.txt\")\n+ identity = self._load_csv(\"identity_CelebA.txt\")\n+ bbox = self._load_csv(\"list_bbox_celeba.txt\", header=1)\n+ landmarks_align = self._load_csv(\"list_landmarks_align_celeba.txt\", header=1)\n+ attr = self._load_csv(\"list_attr_celeba.txt\", header=1)\n+\n+ mask = slice(None) if split_ is None else (splits.data == split_).squeeze()\n+\n+ self.filename = splits.index\n+ self.identity = identity.data[mask]\n+ self.bbox = bbox.data[mask]\n+ self.landmarks_align = landmarks_align.data[mask]\n+ self.attr = attr.data[mask]\n+ self.attr = (self.attr + 1) // 2 # map from {-1, 1} to {0, 1}\n+ self.attr_names = attr.header\n+\n+ def _load_csv(\n+ self,\n+ filename: str,\n+ header: Optional[int] = None,\n+ ) -> CSV:\n+ data, indices, headers = [], [], []\n \n fn = partial(os.path.join, self.root, self.base_folder)\n- splits = pandas.read_csv(fn(\"list_eval_partition.txt\"), delim_whitespace=True, header=None, index_col=0)\n- identity = pandas.read_csv(fn(\"identity_CelebA.txt\"), delim_whitespace=True, header=None, index_col=0)\n- bbox = pandas.read_csv(fn(\"list_bbox_celeba.txt\"), delim_whitespace=True, header=1, index_col=0)\n- landmarks_align = pandas.read_csv(fn(\"list_landmarks_align_celeba.txt\"), delim_whitespace=True, header=1)\n- attr = pandas.read_csv(fn(\"list_attr_celeba.txt\"), delim_whitespace=True, header=1)\n-\n- mask = slice(None) if split_ is None else (splits[1] == split_)\n-\n- self.filename = splits[mask].index.values\n- self.identity = torch.as_tensor(identity[mask].values)\n- self.bbox = torch.as_tensor(bbox[mask].values)\n- self.landmarks_align = torch.as_tensor(landmarks_align[mask].values)\n- self.attr = torch.as_tensor(attr[mask].values)\n- self.attr = (self.attr + 1) // 2 # map from {-1, 1} to {0, 1}\n- self.attr_names = list(attr.columns)\n+ with open(fn(filename)) as csv_file:\n+ data = list(csv.reader(csv_file, delimiter=' ', skipinitialspace=True))\n+\n+ if header is not None:\n+ headers = data[header]\n+ data = data[header + 1:]\n+\n+ indices = [row[0] for row in data]\n+ data = [row[1:] for row in data]\n+ data_int = [list(map(int, i)) for i in data]\n+\n+ return CSV(headers, indices, torch.tensor(data_int))\n \n def _check_integrity(self) -> bool:\n for (_, md5, filename) in self.file_list:\n", "issue": "Avoid pandas dependency for CelebA dataset\nThe CelebA dataset has a runtime dependency on `pandas`. 
Since all we need is `pandas.read_csv`, it would be interesting to see if we can replace `pandas.read_csv` by just the builtin `csv` module. The mergability of the PR would depend on how ugly / complex the code becomes... :)\n\ncc @pmeier\n", "before_files": [{"content": "from functools import partial\nimport torch\nimport os\nimport PIL\nfrom typing import Any, Callable, List, Optional, Union, Tuple\nfrom .vision import VisionDataset\nfrom .utils import download_file_from_google_drive, check_integrity, verify_str_arg\n\n\nclass CelebA(VisionDataset):\n \"\"\"`Large-scale CelebFaces Attributes (CelebA) Dataset <http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html>`_ Dataset.\n\n Args:\n root (string): Root directory where images are downloaded to.\n split (string): One of {'train', 'valid', 'test', 'all'}.\n Accordingly dataset is selected.\n target_type (string or list, optional): Type of target to use, ``attr``, ``identity``, ``bbox``,\n or ``landmarks``. Can also be a list to output a tuple with all specified target types.\n The targets represent:\n\n - ``attr`` (np.array shape=(40,) dtype=int): binary (0, 1) labels for attributes\n - ``identity`` (int): label for each person (data points with the same identity are the same person)\n - ``bbox`` (np.array shape=(4,) dtype=int): bounding box (x, y, width, height)\n - ``landmarks`` (np.array shape=(10,) dtype=int): landmark points (lefteye_x, lefteye_y, righteye_x,\n righteye_y, nose_x, nose_y, leftmouth_x, leftmouth_y, rightmouth_x, rightmouth_y)\n\n Defaults to ``attr``. If empty, ``None`` will be returned as target.\n\n transform (callable, optional): A function/transform that takes in an PIL image\n and returns a transformed version. E.g, ``transforms.ToTensor``\n target_transform (callable, optional): A function/transform that takes in the\n target and transforms it.\n download (bool, optional): If true, downloads the dataset from the internet and\n puts it in root directory. If dataset is already downloaded, it is not\n downloaded again.\n \"\"\"\n\n base_folder = \"celeba\"\n # There currently does not appear to be a easy way to extract 7z in python (without introducing additional\n # dependencies). 
The \"in-the-wild\" (not aligned+cropped) images are only in 7z, so they are not available\n # right now.\n file_list = [\n # File ID MD5 Hash Filename\n (\"0B7EVK8r0v71pZjFTYXZWM3FlRnM\", \"00d2c5bc6d35e252742224ab0c1e8fcb\", \"img_align_celeba.zip\"),\n # (\"0B7EVK8r0v71pbWNEUjJKdDQ3dGc\",\"b6cd7e93bc7a96c2dc33f819aa3ac651\", \"img_align_celeba_png.7z\"),\n # (\"0B7EVK8r0v71peklHb0pGdDl6R28\", \"b6cd7e93bc7a96c2dc33f819aa3ac651\", \"img_celeba.7z\"),\n (\"0B7EVK8r0v71pblRyaVFSWGxPY0U\", \"75e246fa4810816ffd6ee81facbd244c\", \"list_attr_celeba.txt\"),\n (\"1_ee_0u7vcNLOfNLegJRHmolfH5ICW-XS\", \"32bd1bd63d3c78cd57e08160ec5ed1e2\", \"identity_CelebA.txt\"),\n (\"0B7EVK8r0v71pbThiMVRxWXZ4dU0\", \"00566efa6fedff7a56946cd1c10f1c16\", \"list_bbox_celeba.txt\"),\n (\"0B7EVK8r0v71pd0FJY3Blby1HUTQ\", \"cc24ecafdb5b50baae59b03474781f8c\", \"list_landmarks_align_celeba.txt\"),\n # (\"0B7EVK8r0v71pTzJIdlJWdHczRlU\", \"063ee6ddb681f96bc9ca28c6febb9d1a\", \"list_landmarks_celeba.txt\"),\n (\"0B7EVK8r0v71pY0NSMzRuSXJEVkk\", \"d32c9cbf5e040fd4025c592c306e6668\", \"list_eval_partition.txt\"),\n ]\n\n def __init__(\n self,\n root: str,\n split: str = \"train\",\n target_type: Union[List[str], str] = \"attr\",\n transform: Optional[Callable] = None,\n target_transform: Optional[Callable] = None,\n download: bool = False,\n ) -> None:\n import pandas\n super(CelebA, self).__init__(root, transform=transform,\n target_transform=target_transform)\n self.split = split\n if isinstance(target_type, list):\n self.target_type = target_type\n else:\n self.target_type = [target_type]\n\n if not self.target_type and self.target_transform is not None:\n raise RuntimeError('target_transform is specified but target_type is empty')\n\n if download:\n self.download()\n\n if not self._check_integrity():\n raise RuntimeError('Dataset not found or corrupted.' 
+\n ' You can use download=True to download it')\n\n split_map = {\n \"train\": 0,\n \"valid\": 1,\n \"test\": 2,\n \"all\": None,\n }\n split_ = split_map[verify_str_arg(split.lower(), \"split\",\n (\"train\", \"valid\", \"test\", \"all\"))]\n\n fn = partial(os.path.join, self.root, self.base_folder)\n splits = pandas.read_csv(fn(\"list_eval_partition.txt\"), delim_whitespace=True, header=None, index_col=0)\n identity = pandas.read_csv(fn(\"identity_CelebA.txt\"), delim_whitespace=True, header=None, index_col=0)\n bbox = pandas.read_csv(fn(\"list_bbox_celeba.txt\"), delim_whitespace=True, header=1, index_col=0)\n landmarks_align = pandas.read_csv(fn(\"list_landmarks_align_celeba.txt\"), delim_whitespace=True, header=1)\n attr = pandas.read_csv(fn(\"list_attr_celeba.txt\"), delim_whitespace=True, header=1)\n\n mask = slice(None) if split_ is None else (splits[1] == split_)\n\n self.filename = splits[mask].index.values\n self.identity = torch.as_tensor(identity[mask].values)\n self.bbox = torch.as_tensor(bbox[mask].values)\n self.landmarks_align = torch.as_tensor(landmarks_align[mask].values)\n self.attr = torch.as_tensor(attr[mask].values)\n self.attr = (self.attr + 1) // 2 # map from {-1, 1} to {0, 1}\n self.attr_names = list(attr.columns)\n\n def _check_integrity(self) -> bool:\n for (_, md5, filename) in self.file_list:\n fpath = os.path.join(self.root, self.base_folder, filename)\n _, ext = os.path.splitext(filename)\n # Allow original archive to be deleted (zip and 7z)\n # Only need the extracted images\n if ext not in [\".zip\", \".7z\"] and not check_integrity(fpath, md5):\n return False\n\n # Should check a hash of the images\n return os.path.isdir(os.path.join(self.root, self.base_folder, \"img_align_celeba\"))\n\n def download(self) -> None:\n import zipfile\n\n if self._check_integrity():\n print('Files already downloaded and verified')\n return\n\n for (file_id, md5, filename) in self.file_list:\n download_file_from_google_drive(file_id, os.path.join(self.root, self.base_folder), filename, md5)\n\n with zipfile.ZipFile(os.path.join(self.root, self.base_folder, \"img_align_celeba.zip\"), \"r\") as f:\n f.extractall(os.path.join(self.root, self.base_folder))\n\n def __getitem__(self, index: int) -> Tuple[Any, Any]:\n X = PIL.Image.open(os.path.join(self.root, self.base_folder, \"img_align_celeba\", self.filename[index]))\n\n target: Any = []\n for t in self.target_type:\n if t == \"attr\":\n target.append(self.attr[index, :])\n elif t == \"identity\":\n target.append(self.identity[index, 0])\n elif t == \"bbox\":\n target.append(self.bbox[index, :])\n elif t == \"landmarks\":\n target.append(self.landmarks_align[index, :])\n else:\n # TODO: refactor with utils.verify_str_arg\n raise ValueError(\"Target type \\\"{}\\\" is not recognized.\".format(t))\n\n if self.transform is not None:\n X = self.transform(X)\n\n if target:\n target = tuple(target) if len(target) > 1 else target[0]\n\n if self.target_transform is not None:\n target = self.target_transform(target)\n else:\n target = None\n\n return X, target\n\n def __len__(self) -> int:\n return len(self.attr)\n\n def extra_repr(self) -> str:\n lines = [\"Target type: {target_type}\", \"Split: {split}\"]\n return '\\n'.join(lines).format(**self.__dict__)\n", "path": "torchvision/datasets/celeba.py"}], "after_files": [{"content": "from collections import namedtuple\nimport csv\nfrom functools import partial\nimport torch\nimport os\nimport PIL\nfrom typing import Any, Callable, List, Optional, Union, Tuple\nfrom .vision import 
VisionDataset\nfrom .utils import download_file_from_google_drive, check_integrity, verify_str_arg\n\nCSV = namedtuple(\"CSV\", [\"header\", \"index\", \"data\"])\n\n\nclass CelebA(VisionDataset):\n \"\"\"`Large-scale CelebFaces Attributes (CelebA) Dataset <http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html>`_ Dataset.\n\n Args:\n root (string): Root directory where images are downloaded to.\n split (string): One of {'train', 'valid', 'test', 'all'}.\n Accordingly dataset is selected.\n target_type (string or list, optional): Type of target to use, ``attr``, ``identity``, ``bbox``,\n or ``landmarks``. Can also be a list to output a tuple with all specified target types.\n The targets represent:\n\n - ``attr`` (np.array shape=(40,) dtype=int): binary (0, 1) labels for attributes\n - ``identity`` (int): label for each person (data points with the same identity are the same person)\n - ``bbox`` (np.array shape=(4,) dtype=int): bounding box (x, y, width, height)\n - ``landmarks`` (np.array shape=(10,) dtype=int): landmark points (lefteye_x, lefteye_y, righteye_x,\n righteye_y, nose_x, nose_y, leftmouth_x, leftmouth_y, rightmouth_x, rightmouth_y)\n\n Defaults to ``attr``. If empty, ``None`` will be returned as target.\n\n transform (callable, optional): A function/transform that takes in an PIL image\n and returns a transformed version. E.g, ``transforms.ToTensor``\n target_transform (callable, optional): A function/transform that takes in the\n target and transforms it.\n download (bool, optional): If true, downloads the dataset from the internet and\n puts it in root directory. If dataset is already downloaded, it is not\n downloaded again.\n \"\"\"\n\n base_folder = \"celeba\"\n # There currently does not appear to be a easy way to extract 7z in python (without introducing additional\n # dependencies). 
The \"in-the-wild\" (not aligned+cropped) images are only in 7z, so they are not available\n # right now.\n file_list = [\n # File ID MD5 Hash Filename\n (\"0B7EVK8r0v71pZjFTYXZWM3FlRnM\", \"00d2c5bc6d35e252742224ab0c1e8fcb\", \"img_align_celeba.zip\"),\n # (\"0B7EVK8r0v71pbWNEUjJKdDQ3dGc\",\"b6cd7e93bc7a96c2dc33f819aa3ac651\", \"img_align_celeba_png.7z\"),\n # (\"0B7EVK8r0v71peklHb0pGdDl6R28\", \"b6cd7e93bc7a96c2dc33f819aa3ac651\", \"img_celeba.7z\"),\n (\"0B7EVK8r0v71pblRyaVFSWGxPY0U\", \"75e246fa4810816ffd6ee81facbd244c\", \"list_attr_celeba.txt\"),\n (\"1_ee_0u7vcNLOfNLegJRHmolfH5ICW-XS\", \"32bd1bd63d3c78cd57e08160ec5ed1e2\", \"identity_CelebA.txt\"),\n (\"0B7EVK8r0v71pbThiMVRxWXZ4dU0\", \"00566efa6fedff7a56946cd1c10f1c16\", \"list_bbox_celeba.txt\"),\n (\"0B7EVK8r0v71pd0FJY3Blby1HUTQ\", \"cc24ecafdb5b50baae59b03474781f8c\", \"list_landmarks_align_celeba.txt\"),\n # (\"0B7EVK8r0v71pTzJIdlJWdHczRlU\", \"063ee6ddb681f96bc9ca28c6febb9d1a\", \"list_landmarks_celeba.txt\"),\n (\"0B7EVK8r0v71pY0NSMzRuSXJEVkk\", \"d32c9cbf5e040fd4025c592c306e6668\", \"list_eval_partition.txt\"),\n ]\n\n def __init__(\n self,\n root: str,\n split: str = \"train\",\n target_type: Union[List[str], str] = \"attr\",\n transform: Optional[Callable] = None,\n target_transform: Optional[Callable] = None,\n download: bool = False,\n ) -> None:\n super(CelebA, self).__init__(root, transform=transform,\n target_transform=target_transform)\n self.split = split\n if isinstance(target_type, list):\n self.target_type = target_type\n else:\n self.target_type = [target_type]\n\n if not self.target_type and self.target_transform is not None:\n raise RuntimeError('target_transform is specified but target_type is empty')\n\n if download:\n self.download()\n\n if not self._check_integrity():\n raise RuntimeError('Dataset not found or corrupted.' 
+\n ' You can use download=True to download it')\n\n split_map = {\n \"train\": 0,\n \"valid\": 1,\n \"test\": 2,\n \"all\": None,\n }\n split_ = split_map[verify_str_arg(split.lower(), \"split\",\n (\"train\", \"valid\", \"test\", \"all\"))]\n splits = self._load_csv(\"list_eval_partition.txt\")\n identity = self._load_csv(\"identity_CelebA.txt\")\n bbox = self._load_csv(\"list_bbox_celeba.txt\", header=1)\n landmarks_align = self._load_csv(\"list_landmarks_align_celeba.txt\", header=1)\n attr = self._load_csv(\"list_attr_celeba.txt\", header=1)\n\n mask = slice(None) if split_ is None else (splits.data == split_).squeeze()\n\n self.filename = splits.index\n self.identity = identity.data[mask]\n self.bbox = bbox.data[mask]\n self.landmarks_align = landmarks_align.data[mask]\n self.attr = attr.data[mask]\n self.attr = (self.attr + 1) // 2 # map from {-1, 1} to {0, 1}\n self.attr_names = attr.header\n\n def _load_csv(\n self,\n filename: str,\n header: Optional[int] = None,\n ) -> CSV:\n data, indices, headers = [], [], []\n\n fn = partial(os.path.join, self.root, self.base_folder)\n with open(fn(filename)) as csv_file:\n data = list(csv.reader(csv_file, delimiter=' ', skipinitialspace=True))\n\n if header is not None:\n headers = data[header]\n data = data[header + 1:]\n\n indices = [row[0] for row in data]\n data = [row[1:] for row in data]\n data_int = [list(map(int, i)) for i in data]\n\n return CSV(headers, indices, torch.tensor(data_int))\n\n def _check_integrity(self) -> bool:\n for (_, md5, filename) in self.file_list:\n fpath = os.path.join(self.root, self.base_folder, filename)\n _, ext = os.path.splitext(filename)\n # Allow original archive to be deleted (zip and 7z)\n # Only need the extracted images\n if ext not in [\".zip\", \".7z\"] and not check_integrity(fpath, md5):\n return False\n\n # Should check a hash of the images\n return os.path.isdir(os.path.join(self.root, self.base_folder, \"img_align_celeba\"))\n\n def download(self) -> None:\n import zipfile\n\n if self._check_integrity():\n print('Files already downloaded and verified')\n return\n\n for (file_id, md5, filename) in self.file_list:\n download_file_from_google_drive(file_id, os.path.join(self.root, self.base_folder), filename, md5)\n\n with zipfile.ZipFile(os.path.join(self.root, self.base_folder, \"img_align_celeba.zip\"), \"r\") as f:\n f.extractall(os.path.join(self.root, self.base_folder))\n\n def __getitem__(self, index: int) -> Tuple[Any, Any]:\n X = PIL.Image.open(os.path.join(self.root, self.base_folder, \"img_align_celeba\", self.filename[index]))\n\n target: Any = []\n for t in self.target_type:\n if t == \"attr\":\n target.append(self.attr[index, :])\n elif t == \"identity\":\n target.append(self.identity[index, 0])\n elif t == \"bbox\":\n target.append(self.bbox[index, :])\n elif t == \"landmarks\":\n target.append(self.landmarks_align[index, :])\n else:\n # TODO: refactor with utils.verify_str_arg\n raise ValueError(\"Target type \\\"{}\\\" is not recognized.\".format(t))\n\n if self.transform is not None:\n X = self.transform(X)\n\n if target:\n target = tuple(target) if len(target) > 1 else target[0]\n\n if self.target_transform is not None:\n target = self.target_transform(target)\n else:\n target = None\n\n return X, target\n\n def __len__(self) -> int:\n return len(self.attr)\n\n def extra_repr(self) -> str:\n lines = [\"Target type: {target_type}\", \"Split: {split}\"]\n return '\\n'.join(lines).format(**self.__dict__)\n", "path": "torchvision/datasets/celeba.py"}]} | 2,948 | 972 |
gh_patches_debug_6684 | rasdani/github-patches | git_diff | netbox-community__netbox-11404 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Scheduling a job in the past raises an exception
### NetBox version
v3.4.2
### Python version
3.10
### Steps to Reproduce
1. Create a script
2. Schedule it in the past
### Expected Behavior
Form validation error message
### Observed Behavior
```
Traceback (most recent call last):
File "/home/main/devel/repos/netbox/venv/lib/python3.10/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
File "/home/main/devel/repos/netbox/venv/lib/python3.10/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/main/devel/repos/netbox/venv/lib/python3.10/site-packages/django/views/generic/base.py", line 103, in view
return self.dispatch(request, *args, **kwargs)
File "/home/main/devel/repos/netbox/netbox/utilities/views.py", line 53, in dispatch
return super().dispatch(request, *args, **kwargs)
File "/home/main/devel/repos/netbox/venv/lib/python3.10/site-packages/django/views/generic/base.py", line 142, in dispatch
return handler(request, *args, **kwargs)
File "/home/main/devel/repos/netbox/netbox/extras/views.py", line 815, in post
elif form.is_valid():
File "/home/main/devel/repos/netbox/netbox/utilities/forms/forms.py", line 69, in is_valid
is_valid = super().is_valid()
File "/home/main/devel/repos/netbox/venv/lib/python3.10/site-packages/django/forms/forms.py", line 205, in is_valid
return self.is_bound and not self.errors
File "/home/main/devel/repos/netbox/venv/lib/python3.10/site-packages/django/forms/forms.py", line 200, in errors
self.full_clean()
File "/home/main/devel/repos/netbox/venv/lib/python3.10/site-packages/django/forms/forms.py", line 437, in full_clean
self._clean_fields()
File "/home/main/devel/repos/netbox/venv/lib/python3.10/site-packages/django/forms/forms.py", line 455, in _clean_fields
self.add_error(name, e)
File "/home/main/devel/repos/netbox/venv/lib/python3.10/site-packages/django/forms/forms.py", line 392, in add_error
raise TypeError(
TypeError: The argument `field` must be `None` when the `error` argument contains errors for multiple fields.
```
--- END ISSUE ---
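The traceback comes down to a Django forms rule: a field-specific `clean_<field>()` method should raise `ValidationError` with a plain message, since a dict keyed by field names is only meaningful from `Form.clean()` or `add_error(None, ...)`. A minimal sketch of the field-level pattern (hypothetical form class, not NetBox code):
```python
from django import forms
from django.utils import timezone


class ExampleScheduleForm(forms.Form):
    schedule_at = forms.DateTimeField(required=False)

    def clean_schedule_at(self):
        value = self.cleaned_data["schedule_at"]
        if value and value < timezone.now():
            # A plain message: Django attaches it to this field automatically.
            raise forms.ValidationError("Scheduled time must be in the future.")
        return value
```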
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/extras/forms/scripts.py`
Content:
```
1 from django import forms
2 from django.utils import timezone
3 from django.utils.translation import gettext as _
4
5 from utilities.forms import BootstrapMixin, DateTimePicker, SelectDurationWidget
6
7 __all__ = (
8 'ScriptForm',
9 )
10
11
12 class ScriptForm(BootstrapMixin, forms.Form):
13 _commit = forms.BooleanField(
14 required=False,
15 initial=True,
16 label=_("Commit changes"),
17 help_text=_("Commit changes to the database (uncheck for a dry-run)")
18 )
19 _schedule_at = forms.DateTimeField(
20 required=False,
21 widget=DateTimePicker(),
22 label=_("Schedule at"),
23 help_text=_("Schedule execution of script to a set time"),
24 )
25 _interval = forms.IntegerField(
26 required=False,
27 min_value=1,
28 label=_("Recurs every"),
29 widget=SelectDurationWidget(),
30 help_text=_("Interval at which this script is re-run (in minutes)")
31 )
32
33 def __init__(self, *args, **kwargs):
34 super().__init__(*args, **kwargs)
35
36 # Annotate the current system time for reference
37 now = timezone.now().strftime('%Y-%m-%d %H:%M:%S')
38 self.fields['_schedule_at'].help_text += f' (current time: <strong>{now}</strong>)'
39
40 # Move _commit and _schedule_at to the end of the form
41 schedule_at = self.fields.pop('_schedule_at')
42 interval = self.fields.pop('_interval')
43 commit = self.fields.pop('_commit')
44 self.fields['_schedule_at'] = schedule_at
45 self.fields['_interval'] = interval
46 self.fields['_commit'] = commit
47
48 def clean__schedule_at(self):
49 scheduled_time = self.cleaned_data['_schedule_at']
50 if scheduled_time and scheduled_time < timezone.now():
51 raise forms.ValidationError({
52 '_schedule_at': _('Scheduled time must be in the future.')
53 })
54
55 return scheduled_time
56
57 @property
58 def requires_input(self):
59 """
60 A boolean indicating whether the form requires user input (ignore the built-in fields).
61 """
62 return bool(len(self.fields) > 3)
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netbox/extras/forms/scripts.py b/netbox/extras/forms/scripts.py
--- a/netbox/extras/forms/scripts.py
+++ b/netbox/extras/forms/scripts.py
@@ -48,9 +48,7 @@
def clean__schedule_at(self):
scheduled_time = self.cleaned_data['_schedule_at']
if scheduled_time and scheduled_time < timezone.now():
- raise forms.ValidationError({
- '_schedule_at': _('Scheduled time must be in the future.')
- })
+ raise forms.ValidationError(_('Scheduled time must be in the future.'))
return scheduled_time
| {"golden_diff": "diff --git a/netbox/extras/forms/scripts.py b/netbox/extras/forms/scripts.py\n--- a/netbox/extras/forms/scripts.py\n+++ b/netbox/extras/forms/scripts.py\n@@ -48,9 +48,7 @@\n def clean__schedule_at(self):\n scheduled_time = self.cleaned_data['_schedule_at']\n if scheduled_time and scheduled_time < timezone.now():\n- raise forms.ValidationError({\n- '_schedule_at': _('Scheduled time must be in the future.')\n- })\n+ raise forms.ValidationError(_('Scheduled time must be in the future.'))\n \n return scheduled_time\n", "issue": "Scheduling a job in the past raises an exception\n### NetBox version\n\nv3.4.2\n\n### Python version\n\n3.10\n\n### Steps to Reproduce\n\n1. Create a script\r\n2. Schedule it in the past\n\n### Expected Behavior\n\nForm validation error message\n\n### Observed Behavior\n\n```\r\nTraceback (most recent call last):\r\n File \"/home/main/devel/repos/netbox/venv/lib/python3.10/site-packages/django/core/handlers/exception.py\", line 55, in inner\r\n response = get_response(request)\r\n File \"/home/main/devel/repos/netbox/venv/lib/python3.10/site-packages/django/core/handlers/base.py\", line 197, in _get_response\r\n response = wrapped_callback(request, *callback_args, **callback_kwargs)\r\n File \"/home/main/devel/repos/netbox/venv/lib/python3.10/site-packages/django/views/generic/base.py\", line 103, in view\r\n return self.dispatch(request, *args, **kwargs)\r\n File \"/home/main/devel/repos/netbox/netbox/utilities/views.py\", line 53, in dispatch\r\n return super().dispatch(request, *args, **kwargs)\r\n File \"/home/main/devel/repos/netbox/venv/lib/python3.10/site-packages/django/views/generic/base.py\", line 142, in dispatch\r\n return handler(request, *args, **kwargs)\r\n File \"/home/main/devel/repos/netbox/netbox/extras/views.py\", line 815, in post\r\n elif form.is_valid():\r\n File \"/home/main/devel/repos/netbox/netbox/utilities/forms/forms.py\", line 69, in is_valid\r\n is_valid = super().is_valid()\r\n File \"/home/main/devel/repos/netbox/venv/lib/python3.10/site-packages/django/forms/forms.py\", line 205, in is_valid\r\n return self.is_bound and not self.errors\r\n File \"/home/main/devel/repos/netbox/venv/lib/python3.10/site-packages/django/forms/forms.py\", line 200, in errors\r\n self.full_clean()\r\n File \"/home/main/devel/repos/netbox/venv/lib/python3.10/site-packages/django/forms/forms.py\", line 437, in full_clean\r\n self._clean_fields()\r\n File \"/home/main/devel/repos/netbox/venv/lib/python3.10/site-packages/django/forms/forms.py\", line 455, in _clean_fields\r\n self.add_error(name, e)\r\n File \"/home/main/devel/repos/netbox/venv/lib/python3.10/site-packages/django/forms/forms.py\", line 392, in add_error\r\n raise TypeError(\r\nTypeError: The argument `field` must be `None` when the `error` argument contains errors for multiple fields.\r\n```\n", "before_files": [{"content": "from django import forms\nfrom django.utils import timezone\nfrom django.utils.translation import gettext as _\n\nfrom utilities.forms import BootstrapMixin, DateTimePicker, SelectDurationWidget\n\n__all__ = (\n 'ScriptForm',\n)\n\n\nclass ScriptForm(BootstrapMixin, forms.Form):\n _commit = forms.BooleanField(\n required=False,\n initial=True,\n label=_(\"Commit changes\"),\n help_text=_(\"Commit changes to the database (uncheck for a dry-run)\")\n )\n _schedule_at = forms.DateTimeField(\n required=False,\n widget=DateTimePicker(),\n label=_(\"Schedule at\"),\n help_text=_(\"Schedule execution of script to a set time\"),\n )\n _interval = 
forms.IntegerField(\n required=False,\n min_value=1,\n label=_(\"Recurs every\"),\n widget=SelectDurationWidget(),\n help_text=_(\"Interval at which this script is re-run (in minutes)\")\n )\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n # Annotate the current system time for reference\n now = timezone.now().strftime('%Y-%m-%d %H:%M:%S')\n self.fields['_schedule_at'].help_text += f' (current time: <strong>{now}</strong>)'\n\n # Move _commit and _schedule_at to the end of the form\n schedule_at = self.fields.pop('_schedule_at')\n interval = self.fields.pop('_interval')\n commit = self.fields.pop('_commit')\n self.fields['_schedule_at'] = schedule_at\n self.fields['_interval'] = interval\n self.fields['_commit'] = commit\n\n def clean__schedule_at(self):\n scheduled_time = self.cleaned_data['_schedule_at']\n if scheduled_time and scheduled_time < timezone.now():\n raise forms.ValidationError({\n '_schedule_at': _('Scheduled time must be in the future.')\n })\n\n return scheduled_time\n\n @property\n def requires_input(self):\n \"\"\"\n A boolean indicating whether the form requires user input (ignore the built-in fields).\n \"\"\"\n return bool(len(self.fields) > 3)\n", "path": "netbox/extras/forms/scripts.py"}], "after_files": [{"content": "from django import forms\nfrom django.utils import timezone\nfrom django.utils.translation import gettext as _\n\nfrom utilities.forms import BootstrapMixin, DateTimePicker, SelectDurationWidget\n\n__all__ = (\n 'ScriptForm',\n)\n\n\nclass ScriptForm(BootstrapMixin, forms.Form):\n _commit = forms.BooleanField(\n required=False,\n initial=True,\n label=_(\"Commit changes\"),\n help_text=_(\"Commit changes to the database (uncheck for a dry-run)\")\n )\n _schedule_at = forms.DateTimeField(\n required=False,\n widget=DateTimePicker(),\n label=_(\"Schedule at\"),\n help_text=_(\"Schedule execution of script to a set time\"),\n )\n _interval = forms.IntegerField(\n required=False,\n min_value=1,\n label=_(\"Recurs every\"),\n widget=SelectDurationWidget(),\n help_text=_(\"Interval at which this script is re-run (in minutes)\")\n )\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n # Annotate the current system time for reference\n now = timezone.now().strftime('%Y-%m-%d %H:%M:%S')\n self.fields['_schedule_at'].help_text += f' (current time: <strong>{now}</strong>)'\n\n # Move _commit and _schedule_at to the end of the form\n schedule_at = self.fields.pop('_schedule_at')\n interval = self.fields.pop('_interval')\n commit = self.fields.pop('_commit')\n self.fields['_schedule_at'] = schedule_at\n self.fields['_interval'] = interval\n self.fields['_commit'] = commit\n\n def clean__schedule_at(self):\n scheduled_time = self.cleaned_data['_schedule_at']\n if scheduled_time and scheduled_time < timezone.now():\n raise forms.ValidationError(_('Scheduled time must be in the future.'))\n\n return scheduled_time\n\n @property\n def requires_input(self):\n \"\"\"\n A boolean indicating whether the form requires user input (ignore the built-in fields).\n \"\"\"\n return bool(len(self.fields) > 3)\n", "path": "netbox/extras/forms/scripts.py"}]} | 1,448 | 127 |
gh_patches_debug_42729 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-1227 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[PORT] Replace UseState() with UseBotState()
> Port this change from botbuilder-dotnet/master branch:
https://github.com/microsoft/botbuilder-dotnet/pull/3862
Fixes #3859
and use untyped params so that order and type are not fixed.
Tweak RegisterMiddlewareClass so you can provide the key for the turnstate.
# Changed projects
* Microsoft.Bot.Builder.Dialogs.Adaptive.Testing
* Microsoft.Bot.Builder
* Microsoft.Bot.Builder.AI.QnA.Tests
* Microsoft.Bot.Builder.Dialogs.Adaptive.Templates.Tests
* Microsoft.Bot.Builder.Dialogs.Adaptive.Tests
* Microsoft.Bot.Builder.Dialogs.Declarative.Tests
* Microsoft.Bot.Builder.Dialogs.Tests
* Microsoft.Bot.Builder.TestBot.Json
*
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libraries/botbuilder-core/botbuilder/core/register_class_middleware.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3 from typing import Callable, Awaitable
4
5 from botbuilder.core import Middleware, TurnContext
6
7
8 class RegisterClassMiddleware(Middleware):
9 """
10 Middleware for adding an object to or registering a service with the current turn context.
11 """
12
13 def __init__(self, service):
14 self.service = service
15
16 async def on_turn(
17 self, context: TurnContext, logic: Callable[[TurnContext], Awaitable]
18 ):
19 # C# has TurnStateCollection with has overrides for adding items
20 # to TurnState. Python does not. In C#'s case, there is an 'Add'
21 # to handle adding object, and that uses the fully qualified class name.
22 context.turn_state[self.fullname(self.service)] = self.service
23 await logic()
24
25 @staticmethod
26 def fullname(obj):
27 module = obj.__class__.__module__
28 if module is None or module == str.__class__.__module__:
29 return obj.__class__.__name__ # Avoid reporting __builtin__
30 return module + "." + obj.__class__.__name__
31
```
Path: `libraries/botbuilder-core/botbuilder/core/adapter_extensions.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3 from botbuilder.core import (
4 BotAdapter,
5 Storage,
6 RegisterClassMiddleware,
7 UserState,
8 ConversationState,
9 AutoSaveStateMiddleware,
10 )
11
12
13 class AdapterExtensions:
14 @staticmethod
15 def use_storage(adapter: BotAdapter, storage: Storage) -> BotAdapter:
16 """
17 Registers a storage layer with the adapter. The storage object will be available via the turn context's
18 `turn_state` property.
19
20 :param adapter: The BotAdapter on which to register the storage object.
21 :param storage: The Storage object to register.
22 :return: The BotAdapter
23 """
24 return adapter.use(RegisterClassMiddleware(storage))
25
26 @staticmethod
27 def use_state(
28 adapter: BotAdapter,
29 user_state: UserState,
30 conversation_state: ConversationState,
31 auto: bool = True,
32 ) -> BotAdapter:
33 """
34 Registers user and conversation state objects with the adapter. These objects will be available via
35 the turn context's `turn_state` property.
36
37 :param adapter: The BotAdapter on which to register the state objects.
38 :param user_state: The UserState object to register.
39 :param conversation_state: The ConversationState object to register.
40 :param auto: True to automatically persist state each turn.
41 :return: The BotAdapter
42 """
43 if not adapter:
44 raise TypeError("BotAdapter is required")
45
46 if not user_state:
47 raise TypeError("UserState is required")
48
49 if not conversation_state:
50 raise TypeError("ConversationState is required")
51
52 adapter.use(RegisterClassMiddleware(user_state))
53 adapter.use(RegisterClassMiddleware(conversation_state))
54
55 if auto:
56 adapter.use(AutoSaveStateMiddleware([user_state, conversation_state]))
57
58 return adapter
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libraries/botbuilder-core/botbuilder/core/adapter_extensions.py b/libraries/botbuilder-core/botbuilder/core/adapter_extensions.py
--- a/libraries/botbuilder-core/botbuilder/core/adapter_extensions.py
+++ b/libraries/botbuilder-core/botbuilder/core/adapter_extensions.py
@@ -1,7 +1,10 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
+from warnings import warn
+
from botbuilder.core import (
BotAdapter,
+ BotState,
Storage,
RegisterClassMiddleware,
UserState,
@@ -23,6 +26,39 @@
"""
return adapter.use(RegisterClassMiddleware(storage))
+ @staticmethod
+ def use_bot_state(
+ bot_adapter: BotAdapter, *bot_states: BotState, auto: bool = True
+ ) -> BotAdapter:
+ """
+ Registers bot state object into the TurnContext. The botstate will be available via the turn context.
+
+ :param bot_adapter: The BotAdapter on which to register the state objects.
+ :param bot_states: One or more BotState objects to register.
+ :return: The updated adapter.
+ """
+ if not bot_states:
+ raise TypeError("At least one BotAdapter is required")
+
+ for bot_state in bot_states:
+ bot_adapter.use(
+ RegisterClassMiddleware(
+ bot_state, AdapterExtensions.fullname(bot_state)
+ )
+ )
+
+ if auto:
+ bot_adapter.use(AutoSaveStateMiddleware(bot_states))
+
+ return bot_adapter
+
+ @staticmethod
+ def fullname(obj):
+ module = obj.__class__.__module__
+ if module is None or module == str.__class__.__module__:
+ return obj.__class__.__name__ # Avoid reporting __builtin__
+ return module + "." + obj.__class__.__name__
+
@staticmethod
def use_state(
adapter: BotAdapter,
@@ -31,7 +67,7 @@
auto: bool = True,
) -> BotAdapter:
"""
- Registers user and conversation state objects with the adapter. These objects will be available via
+ [DEPRECATED] Registers user and conversation state objects with the adapter. These objects will be available via
the turn context's `turn_state` property.
:param adapter: The BotAdapter on which to register the state objects.
@@ -40,6 +76,11 @@
:param auto: True to automatically persist state each turn.
:return: The BotAdapter
"""
+ warn(
+ "This method is deprecated in 4.9. You should use the method .use_bot_state() instead.",
+ DeprecationWarning,
+ )
+
if not adapter:
raise TypeError("BotAdapter is required")
diff --git a/libraries/botbuilder-core/botbuilder/core/register_class_middleware.py b/libraries/botbuilder-core/botbuilder/core/register_class_middleware.py
--- a/libraries/botbuilder-core/botbuilder/core/register_class_middleware.py
+++ b/libraries/botbuilder-core/botbuilder/core/register_class_middleware.py
@@ -10,8 +10,9 @@
Middleware for adding an object to or registering a service with the current turn context.
"""
- def __init__(self, service):
+ def __init__(self, service, key: str = None):
self.service = service
+ self._key = key
async def on_turn(
self, context: TurnContext, logic: Callable[[TurnContext], Awaitable]
@@ -19,7 +20,8 @@
# C# has TurnStateCollection with has overrides for adding items
# to TurnState. Python does not. In C#'s case, there is an 'Add'
# to handle adding object, and that uses the fully qualified class name.
- context.turn_state[self.fullname(self.service)] = self.service
+ key = self._key or self.fullname(self.service)
+ context.turn_state[key] = self.service
await logic()
@staticmethod
| {"golden_diff": "diff --git a/libraries/botbuilder-core/botbuilder/core/adapter_extensions.py b/libraries/botbuilder-core/botbuilder/core/adapter_extensions.py\n--- a/libraries/botbuilder-core/botbuilder/core/adapter_extensions.py\n+++ b/libraries/botbuilder-core/botbuilder/core/adapter_extensions.py\n@@ -1,7 +1,10 @@\n # Copyright (c) Microsoft Corporation. All rights reserved.\n # Licensed under the MIT License.\n+from warnings import warn\n+\n from botbuilder.core import (\n BotAdapter,\n+ BotState,\n Storage,\n RegisterClassMiddleware,\n UserState,\n@@ -23,6 +26,39 @@\n \"\"\"\n return adapter.use(RegisterClassMiddleware(storage))\n \n+ @staticmethod\n+ def use_bot_state(\n+ bot_adapter: BotAdapter, *bot_states: BotState, auto: bool = True\n+ ) -> BotAdapter:\n+ \"\"\"\n+ Registers bot state object into the TurnContext. The botstate will be available via the turn context.\n+\n+ :param bot_adapter: The BotAdapter on which to register the state objects.\n+ :param bot_states: One or more BotState objects to register.\n+ :return: The updated adapter.\n+ \"\"\"\n+ if not bot_states:\n+ raise TypeError(\"At least one BotAdapter is required\")\n+\n+ for bot_state in bot_states:\n+ bot_adapter.use(\n+ RegisterClassMiddleware(\n+ bot_state, AdapterExtensions.fullname(bot_state)\n+ )\n+ )\n+\n+ if auto:\n+ bot_adapter.use(AutoSaveStateMiddleware(bot_states))\n+\n+ return bot_adapter\n+\n+ @staticmethod\n+ def fullname(obj):\n+ module = obj.__class__.__module__\n+ if module is None or module == str.__class__.__module__:\n+ return obj.__class__.__name__ # Avoid reporting __builtin__\n+ return module + \".\" + obj.__class__.__name__\n+\n @staticmethod\n def use_state(\n adapter: BotAdapter,\n@@ -31,7 +67,7 @@\n auto: bool = True,\n ) -> BotAdapter:\n \"\"\"\n- Registers user and conversation state objects with the adapter. These objects will be available via\n+ [DEPRECATED] Registers user and conversation state objects with the adapter. These objects will be available via\n the turn context's `turn_state` property.\n \n :param adapter: The BotAdapter on which to register the state objects.\n@@ -40,6 +76,11 @@\n :param auto: True to automatically persist state each turn.\n :return: The BotAdapter\n \"\"\"\n+ warn(\n+ \"This method is deprecated in 4.9. You should use the method .use_bot_state() instead.\",\n+ DeprecationWarning,\n+ )\n+\n if not adapter:\n raise TypeError(\"BotAdapter is required\")\n \ndiff --git a/libraries/botbuilder-core/botbuilder/core/register_class_middleware.py b/libraries/botbuilder-core/botbuilder/core/register_class_middleware.py\n--- a/libraries/botbuilder-core/botbuilder/core/register_class_middleware.py\n+++ b/libraries/botbuilder-core/botbuilder/core/register_class_middleware.py\n@@ -10,8 +10,9 @@\n Middleware for adding an object to or registering a service with the current turn context.\n \"\"\"\n \n- def __init__(self, service):\n+ def __init__(self, service, key: str = None):\n self.service = service\n+ self._key = key\n \n async def on_turn(\n self, context: TurnContext, logic: Callable[[TurnContext], Awaitable]\n@@ -19,7 +20,8 @@\n # C# has TurnStateCollection with has overrides for adding items\n # to TurnState. Python does not. 
In C#'s case, there is an 'Add'\n # to handle adding object, and that uses the fully qualified class name.\n- context.turn_state[self.fullname(self.service)] = self.service\n+ key = self._key or self.fullname(self.service)\n+ context.turn_state[key] = self.service\n await logic()\n \n @staticmethod\n", "issue": "[PORT] Replace UseState() with UseBotState() \n> Port this change from botbuilder-dotnet/master branch:\nhttps://github.com/microsoft/botbuilder-dotnet/pull/3862\n\nFixes #3859 \r\nand use untyped params so that order and type are not fixed.\r\nTweak RegisterMiddlewareClass so you can provide the key for the turnstate.\n\n\r\n# Changed projects\r\n* Microsoft.Bot.Builder.Dialogs.Adaptive.Testing\r\n* Microsoft.Bot.Builder\r\n* Microsoft.Bot.Builder.AI.QnA.Tests\r\n* Microsoft.Bot.Builder.Dialogs.Adaptive.Templates.Tests\r\n* Microsoft.Bot.Builder.Dialogs.Adaptive.Tests\r\n* Microsoft.Bot.Builder.Dialogs.Declarative.Tests\r\n* Microsoft.Bot.Builder.Dialogs.Tests\r\n* Microsoft.Bot.Builder.TestBot.Json\r\n* \r\n\r\n\r\n\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\nfrom typing import Callable, Awaitable\n\nfrom botbuilder.core import Middleware, TurnContext\n\n\nclass RegisterClassMiddleware(Middleware):\n \"\"\"\n Middleware for adding an object to or registering a service with the current turn context.\n \"\"\"\n\n def __init__(self, service):\n self.service = service\n\n async def on_turn(\n self, context: TurnContext, logic: Callable[[TurnContext], Awaitable]\n ):\n # C# has TurnStateCollection with has overrides for adding items\n # to TurnState. Python does not. In C#'s case, there is an 'Add'\n # to handle adding object, and that uses the fully qualified class name.\n context.turn_state[self.fullname(self.service)] = self.service\n await logic()\n\n @staticmethod\n def fullname(obj):\n module = obj.__class__.__module__\n if module is None or module == str.__class__.__module__:\n return obj.__class__.__name__ # Avoid reporting __builtin__\n return module + \".\" + obj.__class__.__name__\n", "path": "libraries/botbuilder-core/botbuilder/core/register_class_middleware.py"}, {"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\nfrom botbuilder.core import (\n BotAdapter,\n Storage,\n RegisterClassMiddleware,\n UserState,\n ConversationState,\n AutoSaveStateMiddleware,\n)\n\n\nclass AdapterExtensions:\n @staticmethod\n def use_storage(adapter: BotAdapter, storage: Storage) -> BotAdapter:\n \"\"\"\n Registers a storage layer with the adapter. The storage object will be available via the turn context's\n `turn_state` property.\n\n :param adapter: The BotAdapter on which to register the storage object.\n :param storage: The Storage object to register.\n :return: The BotAdapter\n \"\"\"\n return adapter.use(RegisterClassMiddleware(storage))\n\n @staticmethod\n def use_state(\n adapter: BotAdapter,\n user_state: UserState,\n conversation_state: ConversationState,\n auto: bool = True,\n ) -> BotAdapter:\n \"\"\"\n Registers user and conversation state objects with the adapter. 
These objects will be available via\n the turn context's `turn_state` property.\n\n :param adapter: The BotAdapter on which to register the state objects.\n :param user_state: The UserState object to register.\n :param conversation_state: The ConversationState object to register.\n :param auto: True to automatically persist state each turn.\n :return: The BotAdapter\n \"\"\"\n if not adapter:\n raise TypeError(\"BotAdapter is required\")\n\n if not user_state:\n raise TypeError(\"UserState is required\")\n\n if not conversation_state:\n raise TypeError(\"ConversationState is required\")\n\n adapter.use(RegisterClassMiddleware(user_state))\n adapter.use(RegisterClassMiddleware(conversation_state))\n\n if auto:\n adapter.use(AutoSaveStateMiddleware([user_state, conversation_state]))\n\n return adapter\n", "path": "libraries/botbuilder-core/botbuilder/core/adapter_extensions.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\nfrom typing import Callable, Awaitable\n\nfrom botbuilder.core import Middleware, TurnContext\n\n\nclass RegisterClassMiddleware(Middleware):\n \"\"\"\n Middleware for adding an object to or registering a service with the current turn context.\n \"\"\"\n\n def __init__(self, service, key: str = None):\n self.service = service\n self._key = key\n\n async def on_turn(\n self, context: TurnContext, logic: Callable[[TurnContext], Awaitable]\n ):\n # C# has TurnStateCollection with has overrides for adding items\n # to TurnState. Python does not. In C#'s case, there is an 'Add'\n # to handle adding object, and that uses the fully qualified class name.\n key = self._key or self.fullname(self.service)\n context.turn_state[key] = self.service\n await logic()\n\n @staticmethod\n def fullname(obj):\n module = obj.__class__.__module__\n if module is None or module == str.__class__.__module__:\n return obj.__class__.__name__ # Avoid reporting __builtin__\n return module + \".\" + obj.__class__.__name__\n", "path": "libraries/botbuilder-core/botbuilder/core/register_class_middleware.py"}, {"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\nfrom warnings import warn\n\nfrom botbuilder.core import (\n BotAdapter,\n BotState,\n Storage,\n RegisterClassMiddleware,\n UserState,\n ConversationState,\n AutoSaveStateMiddleware,\n)\n\n\nclass AdapterExtensions:\n @staticmethod\n def use_storage(adapter: BotAdapter, storage: Storage) -> BotAdapter:\n \"\"\"\n Registers a storage layer with the adapter. The storage object will be available via the turn context's\n `turn_state` property.\n\n :param adapter: The BotAdapter on which to register the storage object.\n :param storage: The Storage object to register.\n :return: The BotAdapter\n \"\"\"\n return adapter.use(RegisterClassMiddleware(storage))\n\n @staticmethod\n def use_bot_state(\n bot_adapter: BotAdapter, *bot_states: BotState, auto: bool = True\n ) -> BotAdapter:\n \"\"\"\n Registers bot state object into the TurnContext. 
The botstate will be available via the turn context.\n\n :param bot_adapter: The BotAdapter on which to register the state objects.\n :param bot_states: One or more BotState objects to register.\n :return: The updated adapter.\n \"\"\"\n if not bot_states:\n raise TypeError(\"At least one BotAdapter is required\")\n\n for bot_state in bot_states:\n bot_adapter.use(\n RegisterClassMiddleware(\n bot_state, AdapterExtensions.fullname(bot_state)\n )\n )\n\n if auto:\n bot_adapter.use(AutoSaveStateMiddleware(bot_states))\n\n return bot_adapter\n\n @staticmethod\n def fullname(obj):\n module = obj.__class__.__module__\n if module is None or module == str.__class__.__module__:\n return obj.__class__.__name__ # Avoid reporting __builtin__\n return module + \".\" + obj.__class__.__name__\n\n @staticmethod\n def use_state(\n adapter: BotAdapter,\n user_state: UserState,\n conversation_state: ConversationState,\n auto: bool = True,\n ) -> BotAdapter:\n \"\"\"\n [DEPRECATED] Registers user and conversation state objects with the adapter. These objects will be available via\n the turn context's `turn_state` property.\n\n :param adapter: The BotAdapter on which to register the state objects.\n :param user_state: The UserState object to register.\n :param conversation_state: The ConversationState object to register.\n :param auto: True to automatically persist state each turn.\n :return: The BotAdapter\n \"\"\"\n warn(\n \"This method is deprecated in 4.9. You should use the method .use_bot_state() instead.\",\n DeprecationWarning,\n )\n\n if not adapter:\n raise TypeError(\"BotAdapter is required\")\n\n if not user_state:\n raise TypeError(\"UserState is required\")\n\n if not conversation_state:\n raise TypeError(\"ConversationState is required\")\n\n adapter.use(RegisterClassMiddleware(user_state))\n adapter.use(RegisterClassMiddleware(conversation_state))\n\n if auto:\n adapter.use(AutoSaveStateMiddleware([user_state, conversation_state]))\n\n return adapter\n", "path": "libraries/botbuilder-core/botbuilder/core/adapter_extensions.py"}]} | 1,252 | 910 |
gh_patches_debug_44942 | rasdani/github-patches | git_diff | nonebot__nonebot2-743 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: `require` fails with "Plugin already exists" when loading a plugin a second time
**Describe the problem:**
Loading a plugin through `require` raises `RuntimeError: Plugin already exists`.
**How to reproduce?**
Use `nonebot.load_from_toml(["third_party_plugin"], ["plugin_dir"])` and call `require("third_party_plugin")` inside one of the plugins.
Because the plugin load order is random, the failure only occurs with some probability.
--- END ISSUE ---
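A minimal reproduction of the scenario above; the file, directory, and plugin names are illustrative placeholders rather than names taken from a real project:

```python
# bot.py -- registers one pip-installed plugin plus a local plugin directory,
# mirroring the toml-driven setup from the report
import nonebot

nonebot.init()
nonebot.load_all_plugins(["third_party_plugin"], ["plugin_dir"])

# plugin_dir/my_plugin.py -- a local plugin that depends on the third-party one
from nonebot import require

# If "third_party_plugin" has not been imported yet (the load order is effectively
# random), require() falls back to load_plugin(), which creates a second
# PluginManager for a name the first manager already claims, and list_plugins()
# then raises RuntimeError("Plugin already exists: ...").
third_party = require("third_party_plugin")
```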
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nonebot/plugin/load.py`
Content:
```
1 """本模块定义插件加载接口。
2
3 FrontMatter:
4 sidebar_position: 1
5 description: nonebot.plugin.load 模块
6 """
7 import json
8 import warnings
9 from typing import Set, Iterable, Optional
10
11 import tomlkit
12
13 from . import _managers
14 from .export import Export
15 from .manager import PluginManager
16 from .plugin import Plugin, get_plugin
17
18
19 def load_plugin(module_path: str) -> Optional[Plugin]:
20 """加载单个插件,可以是本地插件或是通过 `pip` 安装的插件。
21
22 参数:
23 module_path: 插件名称 `path.to.your.plugin`
24 """
25
26 manager = PluginManager([module_path])
27 _managers.append(manager)
28 return manager.load_plugin(module_path)
29
30
31 def load_plugins(*plugin_dir: str) -> Set[Plugin]:
32 """导入文件夹下多个插件,以 `_` 开头的插件不会被导入!
33
34 参数:
35 plugin_dir: 文件夹路径
36 """
37 manager = PluginManager(search_path=plugin_dir)
38 _managers.append(manager)
39 return manager.load_all_plugins()
40
41
42 def load_all_plugins(
43 module_path: Iterable[str], plugin_dir: Iterable[str]
44 ) -> Set[Plugin]:
45 """导入指定列表中的插件以及指定目录下多个插件,以 `_` 开头的插件不会被导入!
46
47 参数:
48 module_path: 指定插件集合
49 plugin_dir: 指定文件夹路径集合
50 """
51 manager = PluginManager(module_path, plugin_dir)
52 _managers.append(manager)
53 return manager.load_all_plugins()
54
55
56 def load_from_json(file_path: str, encoding: str = "utf-8") -> Set[Plugin]:
57 """导入指定 json 文件中的 `plugins` 以及 `plugin_dirs` 下多个插件,以 `_` 开头的插件不会被导入!
58
59 参数:
60 file_path: 指定 json 文件路径
61 encoding: 指定 json 文件编码
62
63 用法:
64 ```json title=plugins.json
65 {
66 "plugins": ["some_plugin"],
67 "plugin_dirs": ["some_dir"]
68 }
69 ```
70
71 ```python
72 nonebot.load_from_json("plugins.json")
73 ```
74 """
75 with open(file_path, "r", encoding=encoding) as f:
76 data = json.load(f)
77 plugins = data.get("plugins")
78 plugin_dirs = data.get("plugin_dirs")
79 assert isinstance(plugins, list), "plugins must be a list of plugin name"
80 assert isinstance(plugin_dirs, list), "plugin_dirs must be a list of directories"
81 return load_all_plugins(set(plugins), set(plugin_dirs))
82
83
84 def load_from_toml(file_path: str, encoding: str = "utf-8") -> Set[Plugin]:
85 """导入指定 toml 文件 `[tool.nonebot]` 中的 `plugins` 以及 `plugin_dirs` 下多个插件,以 `_` 开头的插件不会被导入!
86
87 参数:
88 file_path: 指定 toml 文件路径
89 encoding: 指定 toml 文件编码
90
91 用法:
92 ```toml title=pyproject.toml
93 [tool.nonebot]
94 plugins = ["some_plugin"]
95 plugin_dirs = ["some_dir"]
96 ```
97
98 ```python
99 nonebot.load_from_toml("pyproject.toml")
100 ```
101 """
102 with open(file_path, "r", encoding=encoding) as f:
103 data = tomlkit.parse(f.read()) # type: ignore
104
105 nonebot_data = data.get("tool", {}).get("nonebot")
106 if not nonebot_data:
107 nonebot_data = data.get("nonebot", {}).get("plugins")
108 if nonebot_data:
109 warnings.warn(
110 "[nonebot.plugins] table are now deprecated. Use [tool.nonebot] instead.",
111 DeprecationWarning,
112 )
113 else:
114 raise ValueError("Cannot find '[tool.nonebot]' in given toml file!")
115 plugins = nonebot_data.get("plugins", [])
116 plugin_dirs = nonebot_data.get("plugin_dirs", [])
117 assert isinstance(plugins, list), "plugins must be a list of plugin name"
118 assert isinstance(plugin_dirs, list), "plugin_dirs must be a list of directories"
119 return load_all_plugins(plugins, plugin_dirs)
120
121
122 def load_builtin_plugin(name: str) -> Optional[Plugin]:
123 """导入 NoneBot 内置插件。
124
125 参数:
126 name: 插件名称
127 """
128 return load_plugin(f"nonebot.plugins.{name}")
129
130
131 def load_builtin_plugins(*plugins) -> Set[Plugin]:
132 """导入多个 NoneBot 内置插件。
133
134 参数:
135 plugins: 插件名称列表
136 """
137 return load_all_plugins([f"nonebot.plugins.{p}" for p in plugins], [])
138
139
140 def require(name: str) -> Export:
141 """获取一个插件的导出内容。
142
143 如果为 `load_plugins` 文件夹导入的插件,则为文件(夹)名。
144
145 参数:
146 name: 插件名,即 {ref}`nonebot.plugin.plugin.Plugin.name`。
147
148 异常:
149 RuntimeError: 插件无法加载
150 """
151 plugin = get_plugin(name) or load_plugin(name)
152 if not plugin:
153 raise RuntimeError(f'Cannot load plugin "{name}"!')
154 return plugin.export
155
```
Path: `nonebot/plugin/manager.py`
Content:
```
1 """本模块实现插件加载流程。
2
3 参考: [import hooks](https://docs.python.org/3/reference/import.html#import-hooks), [PEP302](https://www.python.org/dev/peps/pep-0302/)
4
5 FrontMatter:
6 sidebar_position: 5
7 description: nonebot.plugin.manager 模块
8 """
9 import sys
10 import pkgutil
11 import importlib
12 from pathlib import Path
13 from itertools import chain
14 from types import ModuleType
15 from importlib.abc import MetaPathFinder
16 from importlib.machinery import PathFinder, SourceFileLoader
17 from typing import Set, Dict, List, Union, Iterable, Optional, Sequence
18
19 from nonebot.log import logger
20 from nonebot.utils import escape_tag
21
22 from . import _managers, _current_plugin
23 from .plugin import Plugin, _new_plugin, _confirm_plugin
24
25
26 class PluginManager:
27 def __init__(
28 self,
29 plugins: Optional[Iterable[str]] = None,
30 search_path: Optional[Iterable[str]] = None,
31 ):
32
33 # simple plugin not in search path
34 self.plugins: Set[str] = set(plugins or [])
35 self.search_path: Set[str] = set(search_path or [])
36 # cache plugins
37 self.searched_plugins: Dict[str, Path] = {}
38 self.list_plugins()
39
40 def _path_to_module_name(self, path: Path) -> str:
41 rel_path = path.resolve().relative_to(Path(".").resolve())
42 if rel_path.stem == "__init__":
43 return ".".join(rel_path.parts[:-1])
44 else:
45 return ".".join(rel_path.parts[:-1] + (rel_path.stem,))
46
47 def _previous_plugins(self) -> List[str]:
48 _pre_managers: List[PluginManager]
49 if self in _managers:
50 _pre_managers = _managers[: _managers.index(self)]
51 else:
52 _pre_managers = _managers[:]
53
54 return [
55 *chain.from_iterable(
56 [*manager.plugins, *manager.searched_plugins.keys()]
57 for manager in _pre_managers
58 )
59 ]
60
61 def list_plugins(self) -> Set[str]:
62 # get all previous ready to load plugins
63 previous_plugins = self._previous_plugins()
64 searched_plugins: Dict[str, Path] = {}
65 third_party_plugins: Set[str] = set()
66
67 for plugin in self.plugins:
68 name = plugin.rsplit(".", 1)[-1] if "." in plugin else plugin
69 if name in third_party_plugins or name in previous_plugins:
70 raise RuntimeError(
71 f"Plugin already exists: {name}! Check your plugin name"
72 )
73 third_party_plugins.add(plugin)
74
75 for module_info in pkgutil.iter_modules(self.search_path):
76 if module_info.name.startswith("_"):
77 continue
78 if (
79 module_info.name in searched_plugins.keys()
80 or module_info.name in previous_plugins
81 or module_info.name in third_party_plugins
82 ):
83 raise RuntimeError(
84 f"Plugin already exists: {module_info.name}! Check your plugin name"
85 )
86 module_spec = module_info.module_finder.find_spec(module_info.name, None)
87 if not module_spec:
88 continue
89 module_path = module_spec.origin
90 if not module_path:
91 continue
92 searched_plugins[module_info.name] = Path(module_path).resolve()
93
94 self.searched_plugins = searched_plugins
95
96 return third_party_plugins | set(self.searched_plugins.keys())
97
98 def load_plugin(self, name) -> Optional[Plugin]:
99 try:
100 if name in self.plugins:
101 module = importlib.import_module(name)
102 elif name not in self.searched_plugins:
103 raise RuntimeError(f"Plugin not found: {name}! Check your plugin name")
104 else:
105 module = importlib.import_module(
106 self._path_to_module_name(self.searched_plugins[name])
107 )
108
109 logger.opt(colors=True).success(
110 f'Succeeded to import "<y>{escape_tag(name)}</y>"'
111 )
112 return getattr(module, "__plugin__", None)
113 except Exception as e:
114 logger.opt(colors=True, exception=e).error(
115 f'<r><bg #f8bbd0>Failed to import "{escape_tag(name)}"</bg #f8bbd0></r>'
116 )
117
118 def load_all_plugins(self) -> Set[Plugin]:
119 return set(
120 filter(None, (self.load_plugin(name) for name in self.list_plugins()))
121 )
122
123
124 class PluginFinder(MetaPathFinder):
125 def find_spec(
126 self,
127 fullname: str,
128 path: Optional[Sequence[Union[bytes, str]]],
129 target: Optional[ModuleType] = None,
130 ):
131 if _managers:
132 index = -1
133 module_spec = PathFinder.find_spec(fullname, path, target)
134 if not module_spec:
135 return
136 module_origin = module_spec.origin
137 if not module_origin:
138 return
139 module_path = Path(module_origin).resolve()
140
141 while -index <= len(_managers):
142 manager = _managers[index]
143
144 if (
145 fullname in manager.plugins
146 or module_path in manager.searched_plugins.values()
147 ):
148 module_spec.loader = PluginLoader(manager, fullname, module_origin)
149 return module_spec
150
151 index -= 1
152 return
153
154
155 class PluginLoader(SourceFileLoader):
156 def __init__(self, manager: PluginManager, fullname: str, path) -> None:
157 self.manager = manager
158 self.loaded = False
159 super().__init__(fullname, path)
160
161 def create_module(self, spec) -> Optional[ModuleType]:
162 if self.name in sys.modules:
163 self.loaded = True
164 return sys.modules[self.name]
165 # return None to use default module creation
166 return super().create_module(spec)
167
168 def exec_module(self, module: ModuleType) -> None:
169 if self.loaded:
170 return
171
172 plugin = _new_plugin(self.name, module, self.manager)
173 parent_plugin = _current_plugin.get()
174 if parent_plugin and _managers.index(parent_plugin.manager) < _managers.index(
175 self.manager
176 ):
177 plugin.parent_plugin = parent_plugin
178 parent_plugin.sub_plugins.add(plugin)
179
180 _plugin_token = _current_plugin.set(plugin)
181
182 setattr(module, "__plugin__", plugin)
183
184 # try:
185 # super().exec_module(module)
186 # except Exception as e:
187 # raise ImportError(
188 # f"Error when executing module {module_name} from {module.__file__}."
189 # ) from e
190 super().exec_module(module)
191
192 _confirm_plugin(plugin)
193
194 _current_plugin.reset(_plugin_token)
195 return
196
197
198 sys.meta_path.insert(0, PluginFinder())
199
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nonebot/plugin/load.py b/nonebot/plugin/load.py
--- a/nonebot/plugin/load.py
+++ b/nonebot/plugin/load.py
@@ -137,6 +137,12 @@
return load_all_plugins([f"nonebot.plugins.{p}" for p in plugins], [])
+def _find_manager_by_name(name: str) -> Optional[PluginManager]:
+ for manager in reversed(_managers):
+ if name in manager.plugins or name in manager.searched_plugins:
+ return manager
+
+
def require(name: str) -> Export:
"""获取一个插件的导出内容。
@@ -148,7 +154,13 @@
异常:
RuntimeError: 插件无法加载
"""
- plugin = get_plugin(name) or load_plugin(name)
+ plugin = get_plugin(name.rsplit(".", 1)[-1])
if not plugin:
- raise RuntimeError(f'Cannot load plugin "{name}"!')
+ manager = _find_manager_by_name(name)
+ if manager:
+ plugin = manager.load_plugin(name)
+ else:
+ plugin = load_plugin(name)
+ if not plugin:
+ raise RuntimeError(f'Cannot load plugin "{name}"!')
return plugin.export
diff --git a/nonebot/plugin/manager.py b/nonebot/plugin/manager.py
--- a/nonebot/plugin/manager.py
+++ b/nonebot/plugin/manager.py
@@ -53,7 +53,10 @@
return [
*chain.from_iterable(
- [*manager.plugins, *manager.searched_plugins.keys()]
+ [
+ *map(lambda x: x.rsplit(".", 1)[-1], manager.plugins),
+ *manager.searched_plugins.keys(),
+ ]
for manager in _pre_managers
)
]
@@ -65,7 +68,7 @@
third_party_plugins: Set[str] = set()
for plugin in self.plugins:
- name = plugin.rsplit(".", 1)[-1] if "." in plugin else plugin
+ name = plugin.rsplit(".", 1)[-1]
if name in third_party_plugins or name in previous_plugins:
raise RuntimeError(
f"Plugin already exists: {name}! Check your plugin name"
@@ -95,21 +98,27 @@
return third_party_plugins | set(self.searched_plugins.keys())
- def load_plugin(self, name) -> Optional[Plugin]:
+ def load_plugin(self, name: str) -> Optional[Plugin]:
try:
if name in self.plugins:
module = importlib.import_module(name)
- elif name not in self.searched_plugins:
- raise RuntimeError(f"Plugin not found: {name}! Check your plugin name")
- else:
+ elif name in self.searched_plugins:
module = importlib.import_module(
self._path_to_module_name(self.searched_plugins[name])
)
+ else:
+ raise RuntimeError(f"Plugin not found: {name}! Check your plugin name")
logger.opt(colors=True).success(
f'Succeeded to import "<y>{escape_tag(name)}</y>"'
)
- return getattr(module, "__plugin__", None)
+ plugin = getattr(module, "__plugin__", None)
+ if plugin is None:
+ raise RuntimeError(
+ f"Module {module.__name__} is not loaded as a plugin! "
+ "Make sure not to import it before loading."
+ )
+ return plugin
except Exception as e:
logger.opt(colors=True, exception=e).error(
f'<r><bg #f8bbd0>Failed to import "{escape_tag(name)}"</bg #f8bbd0></r>'
@@ -129,7 +138,6 @@
target: Optional[ModuleType] = None,
):
if _managers:
- index = -1
module_spec = PathFinder.find_spec(fullname, path, target)
if not module_spec:
return
@@ -138,17 +146,13 @@
return
module_path = Path(module_origin).resolve()
- while -index <= len(_managers):
- manager = _managers[index]
-
+ for manager in reversed(_managers):
if (
fullname in manager.plugins
or module_path in manager.searched_plugins.values()
):
module_spec.loader = PluginLoader(manager, fullname, module_origin)
return module_spec
-
- index -= 1
return
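Read together, the two `load.py` hunks above make `require` reuse whichever already-registered `PluginManager` declares the plugin before it falls back to creating a new one; a condensed paraphrase of the patched function (not the literal library code):

```python
def require(name: str) -> Export:
    # 1. Already imported? Loaded plugins are keyed by their bare module name.
    plugin = get_plugin(name.rsplit(".", 1)[-1])
    if not plugin:
        # 2. Declared by an existing manager (e.g. one created by load_from_toml)?
        #    Load it through that manager instead of registering a duplicate one.
        manager = _find_manager_by_name(name)
        plugin = manager.load_plugin(name) if manager else load_plugin(name)
    if not plugin:
        raise RuntimeError(f'Cannot load plugin "{name}"!')
    return plugin.export
```

With this ordering, a `require("third_party_plugin")` call made from inside another plugin succeeds regardless of which plugin the managers happen to import first.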
| {"golden_diff": "diff --git a/nonebot/plugin/load.py b/nonebot/plugin/load.py\n--- a/nonebot/plugin/load.py\n+++ b/nonebot/plugin/load.py\n@@ -137,6 +137,12 @@\n return load_all_plugins([f\"nonebot.plugins.{p}\" for p in plugins], [])\n \n \n+def _find_manager_by_name(name: str) -> Optional[PluginManager]:\n+ for manager in reversed(_managers):\n+ if name in manager.plugins or name in manager.searched_plugins:\n+ return manager\n+\n+\n def require(name: str) -> Export:\n \"\"\"\u83b7\u53d6\u4e00\u4e2a\u63d2\u4ef6\u7684\u5bfc\u51fa\u5185\u5bb9\u3002\n \n@@ -148,7 +154,13 @@\n \u5f02\u5e38:\n RuntimeError: \u63d2\u4ef6\u65e0\u6cd5\u52a0\u8f7d\n \"\"\"\n- plugin = get_plugin(name) or load_plugin(name)\n+ plugin = get_plugin(name.rsplit(\".\", 1)[-1])\n if not plugin:\n- raise RuntimeError(f'Cannot load plugin \"{name}\"!')\n+ manager = _find_manager_by_name(name)\n+ if manager:\n+ plugin = manager.load_plugin(name)\n+ else:\n+ plugin = load_plugin(name)\n+ if not plugin:\n+ raise RuntimeError(f'Cannot load plugin \"{name}\"!')\n return plugin.export\ndiff --git a/nonebot/plugin/manager.py b/nonebot/plugin/manager.py\n--- a/nonebot/plugin/manager.py\n+++ b/nonebot/plugin/manager.py\n@@ -53,7 +53,10 @@\n \n return [\n *chain.from_iterable(\n- [*manager.plugins, *manager.searched_plugins.keys()]\n+ [\n+ *map(lambda x: x.rsplit(\".\", 1)[-1], manager.plugins),\n+ *manager.searched_plugins.keys(),\n+ ]\n for manager in _pre_managers\n )\n ]\n@@ -65,7 +68,7 @@\n third_party_plugins: Set[str] = set()\n \n for plugin in self.plugins:\n- name = plugin.rsplit(\".\", 1)[-1] if \".\" in plugin else plugin\n+ name = plugin.rsplit(\".\", 1)[-1]\n if name in third_party_plugins or name in previous_plugins:\n raise RuntimeError(\n f\"Plugin already exists: {name}! Check your plugin name\"\n@@ -95,21 +98,27 @@\n \n return third_party_plugins | set(self.searched_plugins.keys())\n \n- def load_plugin(self, name) -> Optional[Plugin]:\n+ def load_plugin(self, name: str) -> Optional[Plugin]:\n try:\n if name in self.plugins:\n module = importlib.import_module(name)\n- elif name not in self.searched_plugins:\n- raise RuntimeError(f\"Plugin not found: {name}! Check your plugin name\")\n- else:\n+ elif name in self.searched_plugins:\n module = importlib.import_module(\n self._path_to_module_name(self.searched_plugins[name])\n )\n+ else:\n+ raise RuntimeError(f\"Plugin not found: {name}! Check your plugin name\")\n \n logger.opt(colors=True).success(\n f'Succeeded to import \"<y>{escape_tag(name)}</y>\"'\n )\n- return getattr(module, \"__plugin__\", None)\n+ plugin = getattr(module, \"__plugin__\", None)\n+ if plugin is None:\n+ raise RuntimeError(\n+ f\"Module {module.__name__} is not loaded as a plugin! 
\"\n+ \"Make sure not to import it before loading.\"\n+ )\n+ return plugin\n except Exception as e:\n logger.opt(colors=True, exception=e).error(\n f'<r><bg #f8bbd0>Failed to import \"{escape_tag(name)}\"</bg #f8bbd0></r>'\n@@ -129,7 +138,6 @@\n target: Optional[ModuleType] = None,\n ):\n if _managers:\n- index = -1\n module_spec = PathFinder.find_spec(fullname, path, target)\n if not module_spec:\n return\n@@ -138,17 +146,13 @@\n return\n module_path = Path(module_origin).resolve()\n \n- while -index <= len(_managers):\n- manager = _managers[index]\n-\n+ for manager in reversed(_managers):\n if (\n fullname in manager.plugins\n or module_path in manager.searched_plugins.values()\n ):\n module_spec.loader = PluginLoader(manager, fullname, module_origin)\n return module_spec\n-\n- index -= 1\n return\n", "issue": "Bug: `require` \u91cd\u590d\u52a0\u8f7d\u63d2\u4ef6\u9519\u8bef\n**\u63cf\u8ff0\u95ee\u9898\uff1a**\r\n\r\n`require` load plugin raises `RuntimeError: Plugin already exists`\r\n\r\n**\u5982\u4f55\u590d\u73b0\uff1f**\r\n\r\n\u4f7f\u7528 `nonebot.load_from_toml([\"third_party_plugin\"], [\"plugin_dir\"])`\uff0c\u5e76\u5728\u63d2\u4ef6\u4e2d\u6dfb\u52a0 `require(\"third_party_plugin\")`\r\n\r\n\u7531\u4e8e\u52a0\u8f7d\u987a\u5e8f\u968f\u673a\uff0c\u6709\u4e00\u5b9a\u6982\u7387\u5931\u8d25\r\n\n", "before_files": [{"content": "\"\"\"\u672c\u6a21\u5757\u5b9a\u4e49\u63d2\u4ef6\u52a0\u8f7d\u63a5\u53e3\u3002\n\nFrontMatter:\n sidebar_position: 1\n description: nonebot.plugin.load \u6a21\u5757\n\"\"\"\nimport json\nimport warnings\nfrom typing import Set, Iterable, Optional\n\nimport tomlkit\n\nfrom . import _managers\nfrom .export import Export\nfrom .manager import PluginManager\nfrom .plugin import Plugin, get_plugin\n\n\ndef load_plugin(module_path: str) -> Optional[Plugin]:\n \"\"\"\u52a0\u8f7d\u5355\u4e2a\u63d2\u4ef6\uff0c\u53ef\u4ee5\u662f\u672c\u5730\u63d2\u4ef6\u6216\u662f\u901a\u8fc7 `pip` \u5b89\u88c5\u7684\u63d2\u4ef6\u3002\n\n \u53c2\u6570:\n module_path: \u63d2\u4ef6\u540d\u79f0 `path.to.your.plugin`\n \"\"\"\n\n manager = PluginManager([module_path])\n _managers.append(manager)\n return manager.load_plugin(module_path)\n\n\ndef load_plugins(*plugin_dir: str) -> Set[Plugin]:\n \"\"\"\u5bfc\u5165\u6587\u4ef6\u5939\u4e0b\u591a\u4e2a\u63d2\u4ef6\uff0c\u4ee5 `_` \u5f00\u5934\u7684\u63d2\u4ef6\u4e0d\u4f1a\u88ab\u5bfc\u5165!\n\n \u53c2\u6570:\n plugin_dir: \u6587\u4ef6\u5939\u8def\u5f84\n \"\"\"\n manager = PluginManager(search_path=plugin_dir)\n _managers.append(manager)\n return manager.load_all_plugins()\n\n\ndef load_all_plugins(\n module_path: Iterable[str], plugin_dir: Iterable[str]\n) -> Set[Plugin]:\n \"\"\"\u5bfc\u5165\u6307\u5b9a\u5217\u8868\u4e2d\u7684\u63d2\u4ef6\u4ee5\u53ca\u6307\u5b9a\u76ee\u5f55\u4e0b\u591a\u4e2a\u63d2\u4ef6\uff0c\u4ee5 `_` \u5f00\u5934\u7684\u63d2\u4ef6\u4e0d\u4f1a\u88ab\u5bfc\u5165!\n\n \u53c2\u6570:\n module_path: \u6307\u5b9a\u63d2\u4ef6\u96c6\u5408\n plugin_dir: \u6307\u5b9a\u6587\u4ef6\u5939\u8def\u5f84\u96c6\u5408\n \"\"\"\n manager = PluginManager(module_path, plugin_dir)\n _managers.append(manager)\n return manager.load_all_plugins()\n\n\ndef load_from_json(file_path: str, encoding: str = \"utf-8\") -> Set[Plugin]:\n \"\"\"\u5bfc\u5165\u6307\u5b9a json \u6587\u4ef6\u4e2d\u7684 `plugins` \u4ee5\u53ca `plugin_dirs` \u4e0b\u591a\u4e2a\u63d2\u4ef6\uff0c\u4ee5 `_` \u5f00\u5934\u7684\u63d2\u4ef6\u4e0d\u4f1a\u88ab\u5bfc\u5165!\n\n \u53c2\u6570:\n file_path: \u6307\u5b9a json \u6587\u4ef6\u8def\u5f84\n encoding: \u6307\u5b9a json 
\u6587\u4ef6\u7f16\u7801\n\n \u7528\u6cd5:\n ```json title=plugins.json\n {\n \"plugins\": [\"some_plugin\"],\n \"plugin_dirs\": [\"some_dir\"]\n }\n ```\n\n ```python\n nonebot.load_from_json(\"plugins.json\")\n ```\n \"\"\"\n with open(file_path, \"r\", encoding=encoding) as f:\n data = json.load(f)\n plugins = data.get(\"plugins\")\n plugin_dirs = data.get(\"plugin_dirs\")\n assert isinstance(plugins, list), \"plugins must be a list of plugin name\"\n assert isinstance(plugin_dirs, list), \"plugin_dirs must be a list of directories\"\n return load_all_plugins(set(plugins), set(plugin_dirs))\n\n\ndef load_from_toml(file_path: str, encoding: str = \"utf-8\") -> Set[Plugin]:\n \"\"\"\u5bfc\u5165\u6307\u5b9a toml \u6587\u4ef6 `[tool.nonebot]` \u4e2d\u7684 `plugins` \u4ee5\u53ca `plugin_dirs` \u4e0b\u591a\u4e2a\u63d2\u4ef6\uff0c\u4ee5 `_` \u5f00\u5934\u7684\u63d2\u4ef6\u4e0d\u4f1a\u88ab\u5bfc\u5165!\n\n \u53c2\u6570:\n file_path: \u6307\u5b9a toml \u6587\u4ef6\u8def\u5f84\n encoding: \u6307\u5b9a toml \u6587\u4ef6\u7f16\u7801\n\n \u7528\u6cd5:\n ```toml title=pyproject.toml\n [tool.nonebot]\n plugins = [\"some_plugin\"]\n plugin_dirs = [\"some_dir\"]\n ```\n\n ```python\n nonebot.load_from_toml(\"pyproject.toml\")\n ```\n \"\"\"\n with open(file_path, \"r\", encoding=encoding) as f:\n data = tomlkit.parse(f.read()) # type: ignore\n\n nonebot_data = data.get(\"tool\", {}).get(\"nonebot\")\n if not nonebot_data:\n nonebot_data = data.get(\"nonebot\", {}).get(\"plugins\")\n if nonebot_data:\n warnings.warn(\n \"[nonebot.plugins] table are now deprecated. Use [tool.nonebot] instead.\",\n DeprecationWarning,\n )\n else:\n raise ValueError(\"Cannot find '[tool.nonebot]' in given toml file!\")\n plugins = nonebot_data.get(\"plugins\", [])\n plugin_dirs = nonebot_data.get(\"plugin_dirs\", [])\n assert isinstance(plugins, list), \"plugins must be a list of plugin name\"\n assert isinstance(plugin_dirs, list), \"plugin_dirs must be a list of directories\"\n return load_all_plugins(plugins, plugin_dirs)\n\n\ndef load_builtin_plugin(name: str) -> Optional[Plugin]:\n \"\"\"\u5bfc\u5165 NoneBot \u5185\u7f6e\u63d2\u4ef6\u3002\n\n \u53c2\u6570:\n name: \u63d2\u4ef6\u540d\u79f0\n \"\"\"\n return load_plugin(f\"nonebot.plugins.{name}\")\n\n\ndef load_builtin_plugins(*plugins) -> Set[Plugin]:\n \"\"\"\u5bfc\u5165\u591a\u4e2a NoneBot \u5185\u7f6e\u63d2\u4ef6\u3002\n\n \u53c2\u6570:\n plugins: \u63d2\u4ef6\u540d\u79f0\u5217\u8868\n \"\"\"\n return load_all_plugins([f\"nonebot.plugins.{p}\" for p in plugins], [])\n\n\ndef require(name: str) -> Export:\n \"\"\"\u83b7\u53d6\u4e00\u4e2a\u63d2\u4ef6\u7684\u5bfc\u51fa\u5185\u5bb9\u3002\n\n \u5982\u679c\u4e3a `load_plugins` \u6587\u4ef6\u5939\u5bfc\u5165\u7684\u63d2\u4ef6\uff0c\u5219\u4e3a\u6587\u4ef6(\u5939)\u540d\u3002\n\n \u53c2\u6570:\n name: \u63d2\u4ef6\u540d\uff0c\u5373 {ref}`nonebot.plugin.plugin.Plugin.name`\u3002\n\n \u5f02\u5e38:\n RuntimeError: \u63d2\u4ef6\u65e0\u6cd5\u52a0\u8f7d\n \"\"\"\n plugin = get_plugin(name) or load_plugin(name)\n if not plugin:\n raise RuntimeError(f'Cannot load plugin \"{name}\"!')\n return plugin.export\n", "path": "nonebot/plugin/load.py"}, {"content": "\"\"\"\u672c\u6a21\u5757\u5b9e\u73b0\u63d2\u4ef6\u52a0\u8f7d\u6d41\u7a0b\u3002\n\n\u53c2\u8003: [import hooks](https://docs.python.org/3/reference/import.html#import-hooks), [PEP302](https://www.python.org/dev/peps/pep-0302/)\n\nFrontMatter:\n sidebar_position: 5\n description: nonebot.plugin.manager \u6a21\u5757\n\"\"\"\nimport sys\nimport pkgutil\nimport importlib\nfrom 
pathlib import Path\nfrom itertools import chain\nfrom types import ModuleType\nfrom importlib.abc import MetaPathFinder\nfrom importlib.machinery import PathFinder, SourceFileLoader\nfrom typing import Set, Dict, List, Union, Iterable, Optional, Sequence\n\nfrom nonebot.log import logger\nfrom nonebot.utils import escape_tag\n\nfrom . import _managers, _current_plugin\nfrom .plugin import Plugin, _new_plugin, _confirm_plugin\n\n\nclass PluginManager:\n def __init__(\n self,\n plugins: Optional[Iterable[str]] = None,\n search_path: Optional[Iterable[str]] = None,\n ):\n\n # simple plugin not in search path\n self.plugins: Set[str] = set(plugins or [])\n self.search_path: Set[str] = set(search_path or [])\n # cache plugins\n self.searched_plugins: Dict[str, Path] = {}\n self.list_plugins()\n\n def _path_to_module_name(self, path: Path) -> str:\n rel_path = path.resolve().relative_to(Path(\".\").resolve())\n if rel_path.stem == \"__init__\":\n return \".\".join(rel_path.parts[:-1])\n else:\n return \".\".join(rel_path.parts[:-1] + (rel_path.stem,))\n\n def _previous_plugins(self) -> List[str]:\n _pre_managers: List[PluginManager]\n if self in _managers:\n _pre_managers = _managers[: _managers.index(self)]\n else:\n _pre_managers = _managers[:]\n\n return [\n *chain.from_iterable(\n [*manager.plugins, *manager.searched_plugins.keys()]\n for manager in _pre_managers\n )\n ]\n\n def list_plugins(self) -> Set[str]:\n # get all previous ready to load plugins\n previous_plugins = self._previous_plugins()\n searched_plugins: Dict[str, Path] = {}\n third_party_plugins: Set[str] = set()\n\n for plugin in self.plugins:\n name = plugin.rsplit(\".\", 1)[-1] if \".\" in plugin else plugin\n if name in third_party_plugins or name in previous_plugins:\n raise RuntimeError(\n f\"Plugin already exists: {name}! Check your plugin name\"\n )\n third_party_plugins.add(plugin)\n\n for module_info in pkgutil.iter_modules(self.search_path):\n if module_info.name.startswith(\"_\"):\n continue\n if (\n module_info.name in searched_plugins.keys()\n or module_info.name in previous_plugins\n or module_info.name in third_party_plugins\n ):\n raise RuntimeError(\n f\"Plugin already exists: {module_info.name}! Check your plugin name\"\n )\n module_spec = module_info.module_finder.find_spec(module_info.name, None)\n if not module_spec:\n continue\n module_path = module_spec.origin\n if not module_path:\n continue\n searched_plugins[module_info.name] = Path(module_path).resolve()\n\n self.searched_plugins = searched_plugins\n\n return third_party_plugins | set(self.searched_plugins.keys())\n\n def load_plugin(self, name) -> Optional[Plugin]:\n try:\n if name in self.plugins:\n module = importlib.import_module(name)\n elif name not in self.searched_plugins:\n raise RuntimeError(f\"Plugin not found: {name}! 
Check your plugin name\")\n else:\n module = importlib.import_module(\n self._path_to_module_name(self.searched_plugins[name])\n )\n\n logger.opt(colors=True).success(\n f'Succeeded to import \"<y>{escape_tag(name)}</y>\"'\n )\n return getattr(module, \"__plugin__\", None)\n except Exception as e:\n logger.opt(colors=True, exception=e).error(\n f'<r><bg #f8bbd0>Failed to import \"{escape_tag(name)}\"</bg #f8bbd0></r>'\n )\n\n def load_all_plugins(self) -> Set[Plugin]:\n return set(\n filter(None, (self.load_plugin(name) for name in self.list_plugins()))\n )\n\n\nclass PluginFinder(MetaPathFinder):\n def find_spec(\n self,\n fullname: str,\n path: Optional[Sequence[Union[bytes, str]]],\n target: Optional[ModuleType] = None,\n ):\n if _managers:\n index = -1\n module_spec = PathFinder.find_spec(fullname, path, target)\n if not module_spec:\n return\n module_origin = module_spec.origin\n if not module_origin:\n return\n module_path = Path(module_origin).resolve()\n\n while -index <= len(_managers):\n manager = _managers[index]\n\n if (\n fullname in manager.plugins\n or module_path in manager.searched_plugins.values()\n ):\n module_spec.loader = PluginLoader(manager, fullname, module_origin)\n return module_spec\n\n index -= 1\n return\n\n\nclass PluginLoader(SourceFileLoader):\n def __init__(self, manager: PluginManager, fullname: str, path) -> None:\n self.manager = manager\n self.loaded = False\n super().__init__(fullname, path)\n\n def create_module(self, spec) -> Optional[ModuleType]:\n if self.name in sys.modules:\n self.loaded = True\n return sys.modules[self.name]\n # return None to use default module creation\n return super().create_module(spec)\n\n def exec_module(self, module: ModuleType) -> None:\n if self.loaded:\n return\n\n plugin = _new_plugin(self.name, module, self.manager)\n parent_plugin = _current_plugin.get()\n if parent_plugin and _managers.index(parent_plugin.manager) < _managers.index(\n self.manager\n ):\n plugin.parent_plugin = parent_plugin\n parent_plugin.sub_plugins.add(plugin)\n\n _plugin_token = _current_plugin.set(plugin)\n\n setattr(module, \"__plugin__\", plugin)\n\n # try:\n # super().exec_module(module)\n # except Exception as e:\n # raise ImportError(\n # f\"Error when executing module {module_name} from {module.__file__}.\"\n # ) from e\n super().exec_module(module)\n\n _confirm_plugin(plugin)\n\n _current_plugin.reset(_plugin_token)\n return\n\n\nsys.meta_path.insert(0, PluginFinder())\n", "path": "nonebot/plugin/manager.py"}], "after_files": [{"content": "\"\"\"\u672c\u6a21\u5757\u5b9a\u4e49\u63d2\u4ef6\u52a0\u8f7d\u63a5\u53e3\u3002\n\nFrontMatter:\n sidebar_position: 1\n description: nonebot.plugin.load \u6a21\u5757\n\"\"\"\nimport json\nimport warnings\nfrom typing import Set, Iterable, Optional\n\nimport tomlkit\n\nfrom . 
import _managers\nfrom .export import Export\nfrom .manager import PluginManager\nfrom .plugin import Plugin, get_plugin\n\n\ndef load_plugin(module_path: str) -> Optional[Plugin]:\n \"\"\"\u52a0\u8f7d\u5355\u4e2a\u63d2\u4ef6\uff0c\u53ef\u4ee5\u662f\u672c\u5730\u63d2\u4ef6\u6216\u662f\u901a\u8fc7 `pip` \u5b89\u88c5\u7684\u63d2\u4ef6\u3002\n\n \u53c2\u6570:\n module_path: \u63d2\u4ef6\u540d\u79f0 `path.to.your.plugin`\n \"\"\"\n\n manager = PluginManager([module_path])\n _managers.append(manager)\n return manager.load_plugin(module_path)\n\n\ndef load_plugins(*plugin_dir: str) -> Set[Plugin]:\n \"\"\"\u5bfc\u5165\u6587\u4ef6\u5939\u4e0b\u591a\u4e2a\u63d2\u4ef6\uff0c\u4ee5 `_` \u5f00\u5934\u7684\u63d2\u4ef6\u4e0d\u4f1a\u88ab\u5bfc\u5165!\n\n \u53c2\u6570:\n plugin_dir: \u6587\u4ef6\u5939\u8def\u5f84\n \"\"\"\n manager = PluginManager(search_path=plugin_dir)\n _managers.append(manager)\n return manager.load_all_plugins()\n\n\ndef load_all_plugins(\n module_path: Iterable[str], plugin_dir: Iterable[str]\n) -> Set[Plugin]:\n \"\"\"\u5bfc\u5165\u6307\u5b9a\u5217\u8868\u4e2d\u7684\u63d2\u4ef6\u4ee5\u53ca\u6307\u5b9a\u76ee\u5f55\u4e0b\u591a\u4e2a\u63d2\u4ef6\uff0c\u4ee5 `_` \u5f00\u5934\u7684\u63d2\u4ef6\u4e0d\u4f1a\u88ab\u5bfc\u5165!\n\n \u53c2\u6570:\n module_path: \u6307\u5b9a\u63d2\u4ef6\u96c6\u5408\n plugin_dir: \u6307\u5b9a\u6587\u4ef6\u5939\u8def\u5f84\u96c6\u5408\n \"\"\"\n manager = PluginManager(module_path, plugin_dir)\n _managers.append(manager)\n return manager.load_all_plugins()\n\n\ndef load_from_json(file_path: str, encoding: str = \"utf-8\") -> Set[Plugin]:\n \"\"\"\u5bfc\u5165\u6307\u5b9a json \u6587\u4ef6\u4e2d\u7684 `plugins` \u4ee5\u53ca `plugin_dirs` \u4e0b\u591a\u4e2a\u63d2\u4ef6\uff0c\u4ee5 `_` \u5f00\u5934\u7684\u63d2\u4ef6\u4e0d\u4f1a\u88ab\u5bfc\u5165!\n\n \u53c2\u6570:\n file_path: \u6307\u5b9a json \u6587\u4ef6\u8def\u5f84\n encoding: \u6307\u5b9a json \u6587\u4ef6\u7f16\u7801\n\n \u7528\u6cd5:\n ```json title=plugins.json\n {\n \"plugins\": [\"some_plugin\"],\n \"plugin_dirs\": [\"some_dir\"]\n }\n ```\n\n ```python\n nonebot.load_from_json(\"plugins.json\")\n ```\n \"\"\"\n with open(file_path, \"r\", encoding=encoding) as f:\n data = json.load(f)\n plugins = data.get(\"plugins\")\n plugin_dirs = data.get(\"plugin_dirs\")\n assert isinstance(plugins, list), \"plugins must be a list of plugin name\"\n assert isinstance(plugin_dirs, list), \"plugin_dirs must be a list of directories\"\n return load_all_plugins(set(plugins), set(plugin_dirs))\n\n\ndef load_from_toml(file_path: str, encoding: str = \"utf-8\") -> Set[Plugin]:\n \"\"\"\u5bfc\u5165\u6307\u5b9a toml \u6587\u4ef6 `[tool.nonebot]` \u4e2d\u7684 `plugins` \u4ee5\u53ca `plugin_dirs` \u4e0b\u591a\u4e2a\u63d2\u4ef6\uff0c\u4ee5 `_` \u5f00\u5934\u7684\u63d2\u4ef6\u4e0d\u4f1a\u88ab\u5bfc\u5165!\n\n \u53c2\u6570:\n file_path: \u6307\u5b9a toml \u6587\u4ef6\u8def\u5f84\n encoding: \u6307\u5b9a toml \u6587\u4ef6\u7f16\u7801\n\n \u7528\u6cd5:\n ```toml title=pyproject.toml\n [tool.nonebot]\n plugins = [\"some_plugin\"]\n plugin_dirs = [\"some_dir\"]\n ```\n\n ```python\n nonebot.load_from_toml(\"pyproject.toml\")\n ```\n \"\"\"\n with open(file_path, \"r\", encoding=encoding) as f:\n data = tomlkit.parse(f.read()) # type: ignore\n\n nonebot_data = data.get(\"tool\", {}).get(\"nonebot\")\n if not nonebot_data:\n nonebot_data = data.get(\"nonebot\", {}).get(\"plugins\")\n if nonebot_data:\n warnings.warn(\n \"[nonebot.plugins] table are now deprecated. 
Use [tool.nonebot] instead.\",\n DeprecationWarning,\n )\n else:\n raise ValueError(\"Cannot find '[tool.nonebot]' in given toml file!\")\n plugins = nonebot_data.get(\"plugins\", [])\n plugin_dirs = nonebot_data.get(\"plugin_dirs\", [])\n assert isinstance(plugins, list), \"plugins must be a list of plugin name\"\n assert isinstance(plugin_dirs, list), \"plugin_dirs must be a list of directories\"\n return load_all_plugins(plugins, plugin_dirs)\n\n\ndef load_builtin_plugin(name: str) -> Optional[Plugin]:\n \"\"\"\u5bfc\u5165 NoneBot \u5185\u7f6e\u63d2\u4ef6\u3002\n\n \u53c2\u6570:\n name: \u63d2\u4ef6\u540d\u79f0\n \"\"\"\n return load_plugin(f\"nonebot.plugins.{name}\")\n\n\ndef load_builtin_plugins(*plugins) -> Set[Plugin]:\n \"\"\"\u5bfc\u5165\u591a\u4e2a NoneBot \u5185\u7f6e\u63d2\u4ef6\u3002\n\n \u53c2\u6570:\n plugins: \u63d2\u4ef6\u540d\u79f0\u5217\u8868\n \"\"\"\n return load_all_plugins([f\"nonebot.plugins.{p}\" for p in plugins], [])\n\n\ndef _find_manager_by_name(name: str) -> Optional[PluginManager]:\n for manager in reversed(_managers):\n if name in manager.plugins or name in manager.searched_plugins:\n return manager\n\n\ndef require(name: str) -> Export:\n \"\"\"\u83b7\u53d6\u4e00\u4e2a\u63d2\u4ef6\u7684\u5bfc\u51fa\u5185\u5bb9\u3002\n\n \u5982\u679c\u4e3a `load_plugins` \u6587\u4ef6\u5939\u5bfc\u5165\u7684\u63d2\u4ef6\uff0c\u5219\u4e3a\u6587\u4ef6(\u5939)\u540d\u3002\n\n \u53c2\u6570:\n name: \u63d2\u4ef6\u540d\uff0c\u5373 {ref}`nonebot.plugin.plugin.Plugin.name`\u3002\n\n \u5f02\u5e38:\n RuntimeError: \u63d2\u4ef6\u65e0\u6cd5\u52a0\u8f7d\n \"\"\"\n plugin = get_plugin(name.rsplit(\".\", 1)[-1])\n if not plugin:\n manager = _find_manager_by_name(name)\n if manager:\n plugin = manager.load_plugin(name)\n else:\n plugin = load_plugin(name)\n if not plugin:\n raise RuntimeError(f'Cannot load plugin \"{name}\"!')\n return plugin.export\n", "path": "nonebot/plugin/load.py"}, {"content": "\"\"\"\u672c\u6a21\u5757\u5b9e\u73b0\u63d2\u4ef6\u52a0\u8f7d\u6d41\u7a0b\u3002\n\n\u53c2\u8003: [import hooks](https://docs.python.org/3/reference/import.html#import-hooks), [PEP302](https://www.python.org/dev/peps/pep-0302/)\n\nFrontMatter:\n sidebar_position: 5\n description: nonebot.plugin.manager \u6a21\u5757\n\"\"\"\nimport sys\nimport pkgutil\nimport importlib\nfrom pathlib import Path\nfrom itertools import chain\nfrom types import ModuleType\nfrom importlib.abc import MetaPathFinder\nfrom importlib.machinery import PathFinder, SourceFileLoader\nfrom typing import Set, Dict, List, Union, Iterable, Optional, Sequence\n\nfrom nonebot.log import logger\nfrom nonebot.utils import escape_tag\n\nfrom . 
import _managers, _current_plugin\nfrom .plugin import Plugin, _new_plugin, _confirm_plugin\n\n\nclass PluginManager:\n def __init__(\n self,\n plugins: Optional[Iterable[str]] = None,\n search_path: Optional[Iterable[str]] = None,\n ):\n\n # simple plugin not in search path\n self.plugins: Set[str] = set(plugins or [])\n self.search_path: Set[str] = set(search_path or [])\n # cache plugins\n self.searched_plugins: Dict[str, Path] = {}\n self.list_plugins()\n\n def _path_to_module_name(self, path: Path) -> str:\n rel_path = path.resolve().relative_to(Path(\".\").resolve())\n if rel_path.stem == \"__init__\":\n return \".\".join(rel_path.parts[:-1])\n else:\n return \".\".join(rel_path.parts[:-1] + (rel_path.stem,))\n\n def _previous_plugins(self) -> List[str]:\n _pre_managers: List[PluginManager]\n if self in _managers:\n _pre_managers = _managers[: _managers.index(self)]\n else:\n _pre_managers = _managers[:]\n\n return [\n *chain.from_iterable(\n [\n *map(lambda x: x.rsplit(\".\", 1)[-1], manager.plugins),\n *manager.searched_plugins.keys(),\n ]\n for manager in _pre_managers\n )\n ]\n\n def list_plugins(self) -> Set[str]:\n # get all previous ready to load plugins\n previous_plugins = self._previous_plugins()\n searched_plugins: Dict[str, Path] = {}\n third_party_plugins: Set[str] = set()\n\n for plugin in self.plugins:\n name = plugin.rsplit(\".\", 1)[-1]\n if name in third_party_plugins or name in previous_plugins:\n raise RuntimeError(\n f\"Plugin already exists: {name}! Check your plugin name\"\n )\n third_party_plugins.add(plugin)\n\n for module_info in pkgutil.iter_modules(self.search_path):\n if module_info.name.startswith(\"_\"):\n continue\n if (\n module_info.name in searched_plugins.keys()\n or module_info.name in previous_plugins\n or module_info.name in third_party_plugins\n ):\n raise RuntimeError(\n f\"Plugin already exists: {module_info.name}! Check your plugin name\"\n )\n module_spec = module_info.module_finder.find_spec(module_info.name, None)\n if not module_spec:\n continue\n module_path = module_spec.origin\n if not module_path:\n continue\n searched_plugins[module_info.name] = Path(module_path).resolve()\n\n self.searched_plugins = searched_plugins\n\n return third_party_plugins | set(self.searched_plugins.keys())\n\n def load_plugin(self, name: str) -> Optional[Plugin]:\n try:\n if name in self.plugins:\n module = importlib.import_module(name)\n elif name in self.searched_plugins:\n module = importlib.import_module(\n self._path_to_module_name(self.searched_plugins[name])\n )\n else:\n raise RuntimeError(f\"Plugin not found: {name}! Check your plugin name\")\n\n logger.opt(colors=True).success(\n f'Succeeded to import \"<y>{escape_tag(name)}</y>\"'\n )\n plugin = getattr(module, \"__plugin__\", None)\n if plugin is None:\n raise RuntimeError(\n f\"Module {module.__name__} is not loaded as a plugin! 
\"\n \"Make sure not to import it before loading.\"\n )\n return plugin\n except Exception as e:\n logger.opt(colors=True, exception=e).error(\n f'<r><bg #f8bbd0>Failed to import \"{escape_tag(name)}\"</bg #f8bbd0></r>'\n )\n\n def load_all_plugins(self) -> Set[Plugin]:\n return set(\n filter(None, (self.load_plugin(name) for name in self.list_plugins()))\n )\n\n\nclass PluginFinder(MetaPathFinder):\n def find_spec(\n self,\n fullname: str,\n path: Optional[Sequence[Union[bytes, str]]],\n target: Optional[ModuleType] = None,\n ):\n if _managers:\n module_spec = PathFinder.find_spec(fullname, path, target)\n if not module_spec:\n return\n module_origin = module_spec.origin\n if not module_origin:\n return\n module_path = Path(module_origin).resolve()\n\n for manager in reversed(_managers):\n if (\n fullname in manager.plugins\n or module_path in manager.searched_plugins.values()\n ):\n module_spec.loader = PluginLoader(manager, fullname, module_origin)\n return module_spec\n return\n\n\nclass PluginLoader(SourceFileLoader):\n def __init__(self, manager: PluginManager, fullname: str, path) -> None:\n self.manager = manager\n self.loaded = False\n super().__init__(fullname, path)\n\n def create_module(self, spec) -> Optional[ModuleType]:\n if self.name in sys.modules:\n self.loaded = True\n return sys.modules[self.name]\n # return None to use default module creation\n return super().create_module(spec)\n\n def exec_module(self, module: ModuleType) -> None:\n if self.loaded:\n return\n\n plugin = _new_plugin(self.name, module, self.manager)\n parent_plugin = _current_plugin.get()\n if parent_plugin and _managers.index(parent_plugin.manager) < _managers.index(\n self.manager\n ):\n plugin.parent_plugin = parent_plugin\n parent_plugin.sub_plugins.add(plugin)\n\n _plugin_token = _current_plugin.set(plugin)\n\n setattr(module, \"__plugin__\", plugin)\n\n # try:\n # super().exec_module(module)\n # except Exception as e:\n # raise ImportError(\n # f\"Error when executing module {module_name} from {module.__file__}.\"\n # ) from e\n super().exec_module(module)\n\n _confirm_plugin(plugin)\n\n _current_plugin.reset(_plugin_token)\n return\n\n\nsys.meta_path.insert(0, PluginFinder())\n", "path": "nonebot/plugin/manager.py"}]} | 3,831 | 998 |
gh_patches_debug_30778 | rasdani/github-patches | git_diff | vispy__vispy-1780 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Slow import
On my Windows workstation with >1200 Python packages installed, `import vispy` takes about 3.4 seconds. 

98% of that time is spent at 
https://github.com/vispy/vispy/blob/ded293841c6438ab54af10c561f372155bb77edc/vispy/__init__.py#L26
On this system, more than half of `napari`'s import time is spent on determining the vispy version.
Please consider not using `pkg_resources` during runtime.
Loosely related to https://github.com/napari/napari/pull/745
https://github.com/pypa/setuptools/issues/510
--- END ISSUE ---
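Two common ways to keep `pkg_resources` out of the import path, sketched below; the second is the route the project's patch (shown further down) takes via setuptools_scm's `write_to` option:

```python
# Option 1: stdlib metadata lookup. importlib.metadata reads the metadata of the
# one requested distribution instead of building pkg_resources' full working set,
# so it is typically much faster on large environments.
try:
    from importlib.metadata import version, PackageNotFoundError  # Python 3.8+
except ImportError:
    from importlib_metadata import version, PackageNotFoundError  # PyPI backport

try:
    __version__ = version("vispy")
except PackageNotFoundError:
    __version__ = "unknown"  # e.g. running from an uninstalled source checkout

# Option 2: import a static version module written at build time (no metadata
# lookup at runtime at all) -- see the accepted diff below for vispy's variant.
```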
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright (c) Vispy Development Team. All Rights Reserved.
3 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
4 """Vispy setup script.
5
6 Steps to do a new release:
7
8 Preparations:
9 * Test on Windows, Linux, Mac
10 * Make release notes
11 * Update API documentation and other docs that need updating.
12
13 Define the version and release:
14 * tag the tip changeset as version x.x.x; `git tag -a 'vX.Y.Z' -m "Version X.Y.Z"`
15 * push tag to github
16 * verify that azure pipelines complete
17 * verify that `.tar.gz` sdist and binary wheels are available on PyPI
18
19 Announcing:
20 * It can be worth waiting a day for eager users to report critical bugs
21 * Announce in scipy-user, vispy mailing list, twitter (@vispyproject)
22
23 """
24
25 import os
26 import sys
27 import platform
28 from os import path as op
29 from distutils import log
30 from setuptools import setup, find_packages, Command, Extension
31 from setuptools.command.sdist import sdist
32 from setuptools.command.build_py import build_py
33 from setuptools.command.egg_info import egg_info
34 from subprocess import check_call
35
36 import numpy as np
37 from Cython.Build import cythonize
38
39 log.set_verbosity(log.DEBUG)
40 log.info('setup.py entered')
41 log.info('$PATH=%s' % os.environ['PATH'])
42
43 name = 'vispy'
44 description = 'Interactive visualization in Python'
45
46 # Special commands for building jupyter notebook extension
47 here = os.path.dirname(os.path.abspath(__file__))
48 node_root = os.path.join(here, 'js')
49 is_repo = os.path.exists(os.path.join(here, '.git'))
50
51 npm_path = os.pathsep.join([
52 os.path.join(node_root, 'node_modules', '.bin'),
53 os.environ.get('PATH', os.defpath),
54 ])
55
56
57 def set_builtin(name, value):
58 if isinstance(__builtins__, dict):
59 __builtins__[name] = value
60 else:
61 setattr(__builtins__, name, value)
62
63
64 def js_prerelease(command, strict=False):
65 """decorator for building minified js/css prior to another command"""
66 class DecoratedCommand(command):
67 def run(self):
68 jsdeps = self.distribution.get_command_obj('jsdeps')
69 if not is_repo and all(os.path.exists(t) for t in jsdeps.targets):
70 # sdist, nothing to do
71 command.run(self)
72 return
73
74 try:
75 self.distribution.run_command('jsdeps')
76 except Exception as e:
77 missing = [t for t in jsdeps.targets if not os.path.exists(t)]
78 if strict or missing:
79 log.warn('rebuilding js and css failed')
80 if missing:
81 log.error('missing files: %s' % missing)
82 # HACK: Allow users who can't build the JS to still install vispy
83 if not is_repo:
84 raise e
85 log.warn('WARNING: continuing installation WITHOUT nbextension javascript')
86 # remove JS files from data_files so setuptools doesn't try to copy
87 # non-existent files
88 self.distribution.data_files = [x for x in self.distribution.data_files
89 if 'jupyter' not in x[0]]
90 else:
91 log.warn('rebuilding js and css failed (not a problem)')
92 log.warn(str(e))
93 command.run(self)
94 update_package_data(self.distribution)
95 return DecoratedCommand
96
97
98 def update_package_data(distribution):
99 """update package_data to catch changes during setup"""
100 build_py = distribution.get_command_obj('build_py')
101 # distribution.package_data = find_package_data()
102 # re-init build_py options which load package_data
103 build_py.finalize_options()
104
105
106 class NPM(Command):
107 description = 'install package.json dependencies using npm'
108
109 user_options = []
110
111 node_modules = os.path.join(node_root, 'node_modules')
112
113 targets = [
114 os.path.join(here, 'vispy', 'static', 'extension.js'),
115 os.path.join(here, 'vispy', 'static', 'index.js')
116 ]
117
118 def initialize_options(self):
119 pass
120
121 def finalize_options(self):
122 pass
123
124 def get_npm_name(self):
125 npmName = 'npm';
126 if platform.system() == 'Windows':
127 npmName = 'npm.cmd';
128
129 return npmName;
130
131 def has_npm(self):
132 npmName = self.get_npm_name();
133 try:
134 check_call([npmName, '--version'])
135 return True
136 except:
137 return False
138
139 def should_run_npm_install(self):
140 package_json = os.path.join(node_root, 'package.json')
141 node_modules_exists = os.path.exists(self.node_modules)
142 return self.has_npm()
143
144 def run(self):
145 has_npm = self.has_npm()
146 if not has_npm:
147 log.error("`npm` unavailable. If you're running this command "
148 "using sudo, make sure `npm` is available to sudo")
149
150 env = os.environ.copy()
151 env['PATH'] = npm_path
152
153 if self.should_run_npm_install():
154 log.info("Installing build dependencies with npm. This may take "
155 "a while...")
156 npmName = self.get_npm_name();
157 check_call([npmName, 'install', '--verbose'], cwd=node_root,
158 stdout=sys.stdout, stderr=sys.stderr)
159 os.utime(self.node_modules, None)
160
161 for t in self.targets:
162 if not os.path.exists(t):
163 msg = 'Missing file: %s' % t
164 if not has_npm:
165 msg += '\nnpm is required to build a development ' \
166 'version of a widget extension'
167 raise ValueError(msg)
168
169 # update package data in case this created new files
170 update_package_data(self.distribution)
171
172
173 extensions = [Extension('vispy.visuals.text._sdf_cpu',
174 [op.join('vispy', 'visuals', 'text', '_sdf_cpu.pyx')],
175 include_dirs=[np.get_include()]),
176 ]
177
178 readme = open('README.rst', 'r').read()
179 setup(
180 name=name,
181 use_scm_version=True,
182 author='Vispy contributors',
183 author_email='[email protected]',
184 license='(new) BSD',
185 url='http://vispy.org',
186 download_url='https://pypi.python.org/pypi/vispy',
187 keywords=[
188 'visualization',
189 'OpenGl',
190 'ES',
191 'medical',
192 'imaging',
193 '3D',
194 'plotting',
195 'numpy',
196 'bigdata',
197 'ipython',
198 'jupyter',
199 'widgets',
200 ],
201 description=description,
202 long_description=readme,
203 platforms='any',
204 provides=['vispy'],
205 cmdclass={
206 'build_py': js_prerelease(build_py),
207 'egg_info': js_prerelease(egg_info),
208 'sdist': js_prerelease(sdist, strict=True),
209 'jsdeps': NPM,
210 },
211 python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*',
212 install_requires=['numpy', 'freetype-py', 'setuptools'],
213 setup_requires=['numpy', 'cython', 'setuptools_scm', 'setuptools_scm_git_archive'],
214 extras_require={
215 'ipython-static': ['ipython'],
216 'ipython-vnc': ['ipython>=7'],
217 'ipython-webgl': ['ipywidgets>=7.0', 'ipython>=7', 'tornado'],
218 'pyglet': ['pyglet>=1.2'],
219 'pyqt5': ['pyqt5'],
220 'pyside': ['PySide'],
221 'pyside2': ['PySide2'],
222 'sdl2': ['PySDL2'],
223 'wx': ['wxPython'],
224 'doc': ['sphinx_bootstrap_theme', 'numpydoc'],
225 },
226 packages=find_packages(),
227 ext_modules=cythonize(extensions),
228 package_dir={'vispy': 'vispy'},
229 data_files=[
230 ('share/jupyter/nbextensions/vispy', [
231 'vispy/static/extension.js',
232 'vispy/static/index.js',
233 'vispy/static/index.js.map',
234 ]),
235 ('etc/jupyter/nbconfig/notebook.d', ['vispy.json']),
236 ],
237 include_package_data=True,
238 package_data={
239 'vispy': [op.join('io', '_data', '*'),
240 op.join('html', 'static', 'js', '*'),
241 op.join('app', 'tests', 'qt-designer.ui'),
242 op.join('util', 'fonts', 'data', '*.ttf'),
243 ],
244
245 'vispy.glsl': ['*.vert','*.frag', "*.glsl"],
246 'vispy.glsl.antialias': ['*.vert','*.frag', "*.glsl"],
247 'vispy.glsl.arrowheads': ['*.vert','*.frag', "*.glsl"],
248 'vispy.glsl.arrows': ['*.vert','*.frag', "*.glsl"],
249 'vispy.glsl.collections': ['*.vert','*.frag', "*.glsl"],
250 'vispy.glsl.colormaps': ['*.vert','*.frag', "*.glsl"],
251 'vispy.glsl.lines': ['*.vert','*.frag', "*.glsl"],
252 'vispy.glsl.markers': ['*.vert','*.frag', "*.glsl"],
253 'vispy.glsl.math': ['*.vert','*.frag', "*.glsl"],
254 'vispy.glsl.misc': ['*.vert','*.frag', "*.glsl"],
255 'vispy.glsl.transforms': ['*.vert','*.frag', "*.glsl"],
256
257 },
258 zip_safe=False,
259 classifiers=[
260 'Development Status :: 3 - Alpha',
261 'Intended Audience :: Science/Research',
262 'Intended Audience :: Education',
263 'Intended Audience :: Developers',
264 'Topic :: Scientific/Engineering :: Visualization',
265 'License :: OSI Approved :: BSD License',
266 'Operating System :: MacOS :: MacOS X',
267 'Operating System :: Microsoft :: Windows',
268 'Operating System :: POSIX',
269 'Programming Language :: Python',
270 'Programming Language :: Python :: 2.7',
271 'Programming Language :: Python :: 3.5',
272 'Programming Language :: Python :: 3.6',
273 'Programming Language :: Python :: 3.7',
274 'Framework :: IPython'
275 ],
276 )
277
```
Path: `vispy/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright (c) Vispy Development Team. All Rights Reserved.
3 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
4
5 """
6
7 =====
8 Vispy
9 =====
10
11 Vispy is a **high-performance interactive 2D/3D data visualization
12 library**. Vispy leverages the computational power of modern **Graphics
13 Processing Units (GPUs)** through the **OpenGL** library to display very
14 large datasets.
15
16 For more information, see http://vispy.org.
17
18 """
19
20 from __future__ import division
21 from pkg_resources import get_distribution, DistributionNotFound
22
23 __all__ = ['use', 'sys_info', 'set_log_level', 'test']
24
25 try:
26 __version__ = get_distribution(__name__).version
27 except DistributionNotFound:
28 # package is not installed
29 pass
30
31 from .util import config, set_log_level, keys, sys_info # noqa
32 from .util.wrappers import use # noqa
33 from .testing import test # noqa
34 # load the two functions that IPython uses to instantiate an extension
35 # that way, the user only needs to run %load_ext vispy.ipython rather that
36 # %load_ext vispy.ipython.ipython
37 from .ipython import load_ipython_extension, unload_ipython_extension # noqa
38
39
40 # Allow for Jupyter extension to be enabled
41 def _jupyter_nbextension_paths():
42 return [{
43 'section': 'notebook',
44 'src': 'static',
45 'dest': 'vispy',
46 'require': 'vispy/extension'
47 }]
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -178,7 +178,7 @@
readme = open('README.rst', 'r').read()
setup(
name=name,
- use_scm_version=True,
+ use_scm_version={'write_to': 'vispy/version.py'},
author='Vispy contributors',
author_email='[email protected]',
license='(new) BSD',
@@ -209,7 +209,7 @@
'jsdeps': NPM,
},
python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*',
- install_requires=['numpy', 'freetype-py', 'setuptools'],
+ install_requires=['numpy', 'freetype-py'],
setup_requires=['numpy', 'cython', 'setuptools_scm', 'setuptools_scm_git_archive'],
extras_require={
'ipython-static': ['ipython'],
@@ -223,7 +223,7 @@
'wx': ['wxPython'],
'doc': ['sphinx_bootstrap_theme', 'numpydoc'],
},
- packages=find_packages(),
+ packages=find_packages(exclude=['make']),
ext_modules=cythonize(extensions),
package_dir={'vispy': 'vispy'},
data_files=[
diff --git a/vispy/__init__.py b/vispy/__init__.py
--- a/vispy/__init__.py
+++ b/vispy/__init__.py
@@ -18,13 +18,12 @@
"""
from __future__ import division
-from pkg_resources import get_distribution, DistributionNotFound
__all__ = ['use', 'sys_info', 'set_log_level', 'test']
try:
- __version__ = get_distribution(__name__).version
-except DistributionNotFound:
+ from .version import version as __version__ # noqa
+except ImportError:
# package is not installed
pass
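For context on the new `from .version import version` line: with `use_scm_version={'write_to': 'vispy/version.py'}`, setuptools_scm writes a small static module during the build, so determining `__version__` becomes a plain attribute read. Roughly (the exact contents differ between setuptools_scm releases, and the version string here is illustrative):

```python
# vispy/version.py -- generated by setuptools_scm at build time; not hand-edited
version = '0.6.4'
```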
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -178,7 +178,7 @@\n readme = open('README.rst', 'r').read()\n setup(\n name=name,\n- use_scm_version=True,\n+ use_scm_version={'write_to': 'vispy/version.py'},\n author='Vispy contributors',\n author_email='[email protected]',\n license='(new) BSD',\n@@ -209,7 +209,7 @@\n 'jsdeps': NPM,\n },\n python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*',\n- install_requires=['numpy', 'freetype-py', 'setuptools'],\n+ install_requires=['numpy', 'freetype-py'],\n setup_requires=['numpy', 'cython', 'setuptools_scm', 'setuptools_scm_git_archive'],\n extras_require={\n 'ipython-static': ['ipython'],\n@@ -223,7 +223,7 @@\n 'wx': ['wxPython'],\n 'doc': ['sphinx_bootstrap_theme', 'numpydoc'],\n },\n- packages=find_packages(),\n+ packages=find_packages(exclude=['make']),\n ext_modules=cythonize(extensions),\n package_dir={'vispy': 'vispy'},\n data_files=[\ndiff --git a/vispy/__init__.py b/vispy/__init__.py\n--- a/vispy/__init__.py\n+++ b/vispy/__init__.py\n@@ -18,13 +18,12 @@\n \"\"\"\n \n from __future__ import division\n-from pkg_resources import get_distribution, DistributionNotFound\n \n __all__ = ['use', 'sys_info', 'set_log_level', 'test']\n \n try:\n- __version__ = get_distribution(__name__).version\n-except DistributionNotFound:\n+ from .version import version as __version__ # noqa\n+except ImportError:\n # package is not installed\n pass\n", "issue": "Slow import\nOn my Windows workstation with >1200 of Python packages installed, `import vispy` takes about 3.4 seconds. \r\n\r\n98% of that time is spend at \r\nhttps://github.com/vispy/vispy/blob/ded293841c6438ab54af10c561f372155bb77edc/vispy/__init__.py#L26\r\n\r\nOn this system, more than half of `napari`'s import time is spent on determining the vispy version. \r\n\r\nPlease consider not using `pkg_resources` during runtime.\r\n\r\nLoosely related to https://github.com/napari/napari/pull/745\r\n\r\nhttps://github.com/pypa/setuptools/issues/510\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. 
See LICENSE.txt for more info.\n\"\"\"Vispy setup script.\n\nSteps to do a new release:\n\nPreparations:\n * Test on Windows, Linux, Mac\n * Make release notes\n * Update API documentation and other docs that need updating.\n\nDefine the version and release:\n * tag the tip changeset as version x.x.x; `git tag -a 'vX.Y.Z' -m \"Version X.Y.Z\"`\n * push tag to github\n * verify that azure pipelines complete\n * verify that `.tar.gz` sdist and binary wheels are available on PyPI\n\nAnnouncing:\n * It can be worth waiting a day for eager users to report critical bugs\n * Announce in scipy-user, vispy mailing list, twitter (@vispyproject)\n\n\"\"\"\n\nimport os\nimport sys\nimport platform\nfrom os import path as op\nfrom distutils import log\nfrom setuptools import setup, find_packages, Command, Extension\nfrom setuptools.command.sdist import sdist\nfrom setuptools.command.build_py import build_py\nfrom setuptools.command.egg_info import egg_info\nfrom subprocess import check_call\n\nimport numpy as np\nfrom Cython.Build import cythonize\n\nlog.set_verbosity(log.DEBUG)\nlog.info('setup.py entered')\nlog.info('$PATH=%s' % os.environ['PATH'])\n\nname = 'vispy'\ndescription = 'Interactive visualization in Python'\n\n# Special commands for building jupyter notebook extension\nhere = os.path.dirname(os.path.abspath(__file__))\nnode_root = os.path.join(here, 'js')\nis_repo = os.path.exists(os.path.join(here, '.git'))\n\nnpm_path = os.pathsep.join([\n os.path.join(node_root, 'node_modules', '.bin'),\n os.environ.get('PATH', os.defpath),\n])\n\n\ndef set_builtin(name, value):\n if isinstance(__builtins__, dict):\n __builtins__[name] = value\n else:\n setattr(__builtins__, name, value)\n\n\ndef js_prerelease(command, strict=False):\n \"\"\"decorator for building minified js/css prior to another command\"\"\"\n class DecoratedCommand(command):\n def run(self):\n jsdeps = self.distribution.get_command_obj('jsdeps')\n if not is_repo and all(os.path.exists(t) for t in jsdeps.targets):\n # sdist, nothing to do\n command.run(self)\n return\n\n try:\n self.distribution.run_command('jsdeps')\n except Exception as e:\n missing = [t for t in jsdeps.targets if not os.path.exists(t)]\n if strict or missing:\n log.warn('rebuilding js and css failed')\n if missing:\n log.error('missing files: %s' % missing)\n # HACK: Allow users who can't build the JS to still install vispy\n if not is_repo:\n raise e\n log.warn('WARNING: continuing installation WITHOUT nbextension javascript')\n # remove JS files from data_files so setuptools doesn't try to copy\n # non-existent files\n self.distribution.data_files = [x for x in self.distribution.data_files\n if 'jupyter' not in x[0]]\n else:\n log.warn('rebuilding js and css failed (not a problem)')\n log.warn(str(e))\n command.run(self)\n update_package_data(self.distribution)\n return DecoratedCommand\n\n\ndef update_package_data(distribution):\n \"\"\"update package_data to catch changes during setup\"\"\"\n build_py = distribution.get_command_obj('build_py')\n # distribution.package_data = find_package_data()\n # re-init build_py options which load package_data\n build_py.finalize_options()\n\n\nclass NPM(Command):\n description = 'install package.json dependencies using npm'\n\n user_options = []\n\n node_modules = os.path.join(node_root, 'node_modules')\n\n targets = [\n os.path.join(here, 'vispy', 'static', 'extension.js'),\n os.path.join(here, 'vispy', 'static', 'index.js')\n ]\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def 
get_npm_name(self):\n npmName = 'npm';\n if platform.system() == 'Windows':\n npmName = 'npm.cmd';\n\n return npmName;\n\n def has_npm(self):\n npmName = self.get_npm_name();\n try:\n check_call([npmName, '--version'])\n return True\n except:\n return False\n\n def should_run_npm_install(self):\n package_json = os.path.join(node_root, 'package.json')\n node_modules_exists = os.path.exists(self.node_modules)\n return self.has_npm()\n\n def run(self):\n has_npm = self.has_npm()\n if not has_npm:\n log.error(\"`npm` unavailable. If you're running this command \"\n \"using sudo, make sure `npm` is available to sudo\")\n\n env = os.environ.copy()\n env['PATH'] = npm_path\n\n if self.should_run_npm_install():\n log.info(\"Installing build dependencies with npm. This may take \"\n \"a while...\")\n npmName = self.get_npm_name();\n check_call([npmName, 'install', '--verbose'], cwd=node_root,\n stdout=sys.stdout, stderr=sys.stderr)\n os.utime(self.node_modules, None)\n\n for t in self.targets:\n if not os.path.exists(t):\n msg = 'Missing file: %s' % t\n if not has_npm:\n msg += '\\nnpm is required to build a development ' \\\n 'version of a widget extension'\n raise ValueError(msg)\n\n # update package data in case this created new files\n update_package_data(self.distribution)\n\n\nextensions = [Extension('vispy.visuals.text._sdf_cpu',\n [op.join('vispy', 'visuals', 'text', '_sdf_cpu.pyx')],\n include_dirs=[np.get_include()]),\n ]\n\nreadme = open('README.rst', 'r').read()\nsetup(\n name=name,\n use_scm_version=True,\n author='Vispy contributors',\n author_email='[email protected]',\n license='(new) BSD',\n url='http://vispy.org',\n download_url='https://pypi.python.org/pypi/vispy',\n keywords=[\n 'visualization',\n 'OpenGl',\n 'ES',\n 'medical',\n 'imaging',\n '3D',\n 'plotting',\n 'numpy',\n 'bigdata',\n 'ipython',\n 'jupyter',\n 'widgets',\n ],\n description=description,\n long_description=readme,\n platforms='any',\n provides=['vispy'],\n cmdclass={\n 'build_py': js_prerelease(build_py),\n 'egg_info': js_prerelease(egg_info),\n 'sdist': js_prerelease(sdist, strict=True),\n 'jsdeps': NPM,\n },\n python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*',\n install_requires=['numpy', 'freetype-py', 'setuptools'],\n setup_requires=['numpy', 'cython', 'setuptools_scm', 'setuptools_scm_git_archive'],\n extras_require={\n 'ipython-static': ['ipython'],\n 'ipython-vnc': ['ipython>=7'],\n 'ipython-webgl': ['ipywidgets>=7.0', 'ipython>=7', 'tornado'],\n 'pyglet': ['pyglet>=1.2'],\n 'pyqt5': ['pyqt5'],\n 'pyside': ['PySide'],\n 'pyside2': ['PySide2'],\n 'sdl2': ['PySDL2'],\n 'wx': ['wxPython'],\n 'doc': ['sphinx_bootstrap_theme', 'numpydoc'],\n },\n packages=find_packages(),\n ext_modules=cythonize(extensions),\n package_dir={'vispy': 'vispy'},\n data_files=[\n ('share/jupyter/nbextensions/vispy', [\n 'vispy/static/extension.js',\n 'vispy/static/index.js',\n 'vispy/static/index.js.map',\n ]),\n ('etc/jupyter/nbconfig/notebook.d', ['vispy.json']),\n ],\n include_package_data=True,\n package_data={\n 'vispy': [op.join('io', '_data', '*'),\n op.join('html', 'static', 'js', '*'),\n op.join('app', 'tests', 'qt-designer.ui'),\n op.join('util', 'fonts', 'data', '*.ttf'),\n ],\n\n 'vispy.glsl': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.antialias': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.arrowheads': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.arrows': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.collections': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.colormaps': ['*.vert','*.frag', 
\"*.glsl\"],\n 'vispy.glsl.lines': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.markers': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.math': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.misc': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.transforms': ['*.vert','*.frag', \"*.glsl\"],\n\n },\n zip_safe=False,\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Education',\n 'Intended Audience :: Developers',\n 'Topic :: Scientific/Engineering :: Visualization',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Framework :: IPython'\n ],\n)\n", "path": "setup.py"}, {"content": "# -*- coding: utf-8 -*-\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\n\"\"\"\n\n=====\nVispy\n=====\n\nVispy is a **high-performance interactive 2D/3D data visualization\nlibrary**. Vispy leverages the computational power of modern **Graphics\nProcessing Units (GPUs)** through the **OpenGL** library to display very\nlarge datasets.\n\nFor more information, see http://vispy.org.\n\n\"\"\"\n\nfrom __future__ import division\nfrom pkg_resources import get_distribution, DistributionNotFound\n\n__all__ = ['use', 'sys_info', 'set_log_level', 'test']\n\ntry:\n __version__ = get_distribution(__name__).version\nexcept DistributionNotFound:\n # package is not installed\n pass\n\nfrom .util import config, set_log_level, keys, sys_info # noqa\nfrom .util.wrappers import use # noqa\nfrom .testing import test # noqa\n# load the two functions that IPython uses to instantiate an extension\n# that way, the user only needs to run %load_ext vispy.ipython rather that\n# %load_ext vispy.ipython.ipython\nfrom .ipython import load_ipython_extension, unload_ipython_extension # noqa\n\n\n# Allow for Jupyter extension to be enabled\ndef _jupyter_nbextension_paths():\n return [{\n 'section': 'notebook',\n 'src': 'static',\n 'dest': 'vispy',\n 'require': 'vispy/extension'\n }]\n", "path": "vispy/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. 
See LICENSE.txt for more info.\n\"\"\"Vispy setup script.\n\nSteps to do a new release:\n\nPreparations:\n * Test on Windows, Linux, Mac\n * Make release notes\n * Update API documentation and other docs that need updating.\n\nDefine the version and release:\n * tag the tip changeset as version x.x.x; `git tag -a 'vX.Y.Z' -m \"Version X.Y.Z\"`\n * push tag to github\n * verify that azure pipelines complete\n * verify that `.tar.gz` sdist and binary wheels are available on PyPI\n\nAnnouncing:\n * It can be worth waiting a day for eager users to report critical bugs\n * Announce in scipy-user, vispy mailing list, twitter (@vispyproject)\n\n\"\"\"\n\nimport os\nimport sys\nimport platform\nfrom os import path as op\nfrom distutils import log\nfrom setuptools import setup, find_packages, Command, Extension\nfrom setuptools.command.sdist import sdist\nfrom setuptools.command.build_py import build_py\nfrom setuptools.command.egg_info import egg_info\nfrom subprocess import check_call\n\nimport numpy as np\nfrom Cython.Build import cythonize\n\nlog.set_verbosity(log.DEBUG)\nlog.info('setup.py entered')\nlog.info('$PATH=%s' % os.environ['PATH'])\n\nname = 'vispy'\ndescription = 'Interactive visualization in Python'\n\n# Special commands for building jupyter notebook extension\nhere = os.path.dirname(os.path.abspath(__file__))\nnode_root = os.path.join(here, 'js')\nis_repo = os.path.exists(os.path.join(here, '.git'))\n\nnpm_path = os.pathsep.join([\n os.path.join(node_root, 'node_modules', '.bin'),\n os.environ.get('PATH', os.defpath),\n])\n\n\ndef set_builtin(name, value):\n if isinstance(__builtins__, dict):\n __builtins__[name] = value\n else:\n setattr(__builtins__, name, value)\n\n\ndef js_prerelease(command, strict=False):\n \"\"\"decorator for building minified js/css prior to another command\"\"\"\n class DecoratedCommand(command):\n def run(self):\n jsdeps = self.distribution.get_command_obj('jsdeps')\n if not is_repo and all(os.path.exists(t) for t in jsdeps.targets):\n # sdist, nothing to do\n command.run(self)\n return\n\n try:\n self.distribution.run_command('jsdeps')\n except Exception as e:\n missing = [t for t in jsdeps.targets if not os.path.exists(t)]\n if strict or missing:\n log.warn('rebuilding js and css failed')\n if missing:\n log.error('missing files: %s' % missing)\n # HACK: Allow users who can't build the JS to still install vispy\n if not is_repo:\n raise e\n log.warn('WARNING: continuing installation WITHOUT nbextension javascript')\n # remove JS files from data_files so setuptools doesn't try to copy\n # non-existent files\n self.distribution.data_files = [x for x in self.distribution.data_files\n if 'jupyter' not in x[0]]\n else:\n log.warn('rebuilding js and css failed (not a problem)')\n log.warn(str(e))\n command.run(self)\n update_package_data(self.distribution)\n return DecoratedCommand\n\n\ndef update_package_data(distribution):\n \"\"\"update package_data to catch changes during setup\"\"\"\n build_py = distribution.get_command_obj('build_py')\n # distribution.package_data = find_package_data()\n # re-init build_py options which load package_data\n build_py.finalize_options()\n\n\nclass NPM(Command):\n description = 'install package.json dependencies using npm'\n\n user_options = []\n\n node_modules = os.path.join(node_root, 'node_modules')\n\n targets = [\n os.path.join(here, 'vispy', 'static', 'extension.js'),\n os.path.join(here, 'vispy', 'static', 'index.js')\n ]\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def 
get_npm_name(self):\n npmName = 'npm';\n if platform.system() == 'Windows':\n npmName = 'npm.cmd';\n\n return npmName;\n\n def has_npm(self):\n npmName = self.get_npm_name();\n try:\n check_call([npmName, '--version'])\n return True\n except:\n return False\n\n def should_run_npm_install(self):\n package_json = os.path.join(node_root, 'package.json')\n node_modules_exists = os.path.exists(self.node_modules)\n return self.has_npm()\n\n def run(self):\n has_npm = self.has_npm()\n if not has_npm:\n log.error(\"`npm` unavailable. If you're running this command \"\n \"using sudo, make sure `npm` is available to sudo\")\n\n env = os.environ.copy()\n env['PATH'] = npm_path\n\n if self.should_run_npm_install():\n log.info(\"Installing build dependencies with npm. This may take \"\n \"a while...\")\n npmName = self.get_npm_name();\n check_call([npmName, 'install', '--verbose'], cwd=node_root,\n stdout=sys.stdout, stderr=sys.stderr)\n os.utime(self.node_modules, None)\n\n for t in self.targets:\n if not os.path.exists(t):\n msg = 'Missing file: %s' % t\n if not has_npm:\n msg += '\\nnpm is required to build a development ' \\\n 'version of a widget extension'\n raise ValueError(msg)\n\n # update package data in case this created new files\n update_package_data(self.distribution)\n\n\nextensions = [Extension('vispy.visuals.text._sdf_cpu',\n [op.join('vispy', 'visuals', 'text', '_sdf_cpu.pyx')],\n include_dirs=[np.get_include()]),\n ]\n\nreadme = open('README.rst', 'r').read()\nsetup(\n name=name,\n use_scm_version={'write_to': 'vispy/version.py'},\n author='Vispy contributors',\n author_email='[email protected]',\n license='(new) BSD',\n url='http://vispy.org',\n download_url='https://pypi.python.org/pypi/vispy',\n keywords=[\n 'visualization',\n 'OpenGl',\n 'ES',\n 'medical',\n 'imaging',\n '3D',\n 'plotting',\n 'numpy',\n 'bigdata',\n 'ipython',\n 'jupyter',\n 'widgets',\n ],\n description=description,\n long_description=readme,\n platforms='any',\n provides=['vispy'],\n cmdclass={\n 'build_py': js_prerelease(build_py),\n 'egg_info': js_prerelease(egg_info),\n 'sdist': js_prerelease(sdist, strict=True),\n 'jsdeps': NPM,\n },\n python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*',\n install_requires=['numpy', 'freetype-py'],\n setup_requires=['numpy', 'cython', 'setuptools_scm', 'setuptools_scm_git_archive'],\n extras_require={\n 'ipython-static': ['ipython'],\n 'ipython-vnc': ['ipython>=7'],\n 'ipython-webgl': ['ipywidgets>=7.0', 'ipython>=7', 'tornado'],\n 'pyglet': ['pyglet>=1.2'],\n 'pyqt5': ['pyqt5'],\n 'pyside': ['PySide'],\n 'pyside2': ['PySide2'],\n 'sdl2': ['PySDL2'],\n 'wx': ['wxPython'],\n 'doc': ['sphinx_bootstrap_theme', 'numpydoc'],\n },\n packages=find_packages(exclude=['make']),\n ext_modules=cythonize(extensions),\n package_dir={'vispy': 'vispy'},\n data_files=[\n ('share/jupyter/nbextensions/vispy', [\n 'vispy/static/extension.js',\n 'vispy/static/index.js',\n 'vispy/static/index.js.map',\n ]),\n ('etc/jupyter/nbconfig/notebook.d', ['vispy.json']),\n ],\n include_package_data=True,\n package_data={\n 'vispy': [op.join('io', '_data', '*'),\n op.join('html', 'static', 'js', '*'),\n op.join('app', 'tests', 'qt-designer.ui'),\n op.join('util', 'fonts', 'data', '*.ttf'),\n ],\n\n 'vispy.glsl': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.antialias': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.arrowheads': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.arrows': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.collections': ['*.vert','*.frag', \"*.glsl\"],\n 
'vispy.glsl.colormaps': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.lines': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.markers': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.math': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.misc': ['*.vert','*.frag', \"*.glsl\"],\n 'vispy.glsl.transforms': ['*.vert','*.frag', \"*.glsl\"],\n\n },\n zip_safe=False,\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Education',\n 'Intended Audience :: Developers',\n 'Topic :: Scientific/Engineering :: Visualization',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Framework :: IPython'\n ],\n)\n", "path": "setup.py"}, {"content": "# -*- coding: utf-8 -*-\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\n\"\"\"\n\n=====\nVispy\n=====\n\nVispy is a **high-performance interactive 2D/3D data visualization\nlibrary**. Vispy leverages the computational power of modern **Graphics\nProcessing Units (GPUs)** through the **OpenGL** library to display very\nlarge datasets.\n\nFor more information, see http://vispy.org.\n\n\"\"\"\n\nfrom __future__ import division\n\n__all__ = ['use', 'sys_info', 'set_log_level', 'test']\n\ntry:\n from .version import version as __version__ # noqa\nexcept ImportError:\n # package is not installed\n pass\n\nfrom .util import config, set_log_level, keys, sys_info # noqa\nfrom .util.wrappers import use # noqa\nfrom .testing import test # noqa\n# load the two functions that IPython uses to instantiate an extension\n# that way, the user only needs to run %load_ext vispy.ipython rather that\n# %load_ext vispy.ipython.ipython\nfrom .ipython import load_ipython_extension, unload_ipython_extension # noqa\n\n\n# Allow for Jupyter extension to be enabled\ndef _jupyter_nbextension_paths():\n return [{\n 'section': 'notebook',\n 'src': 'static',\n 'dest': 'vispy',\n 'require': 'vispy/extension'\n }]\n", "path": "vispy/__init__.py"}]} | 3,883 | 454 |
gh_patches_debug_2897 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-8922 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[CT-3210] [Bug] Error using `dbt list --select` when there is a cross-project model that is `version=0` in the parent project
### Is this a new bug in dbt-core?
- [X] I believe this is a new bug in dbt-core
- [X] I have searched the existing issues, and I could not find an existing issue for this bug
### Current Behavior
When you attempt to reference a model with version 0, you get a stack trace error.
### Expected Behavior
We should allow you to set the model version to 0.
### Steps To Reproduce
1. On parent/hub project, add a versioned model with `v: 0`
2. On the child/spoke project, attempt to reference that versioned model in a model:
`select * from {{ ref('example_hub', 'my_second_dbt_model', v=0) }}`
3. run `dbt list --select anything`
Outstanding question - is this only affecting cross-project refs? Or all refs to a model with `v: 0`?
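For context on why `0` in particular misbehaves: Python treats `0` as falsy, so any guard written as `if version:` acts as though no version were set at all. A minimal standalone sketch (the names are illustrative and this is not dbt's actual code path):
```python
version = 0  # a legitimate model version

print(bool(version))  # False -- 0 is falsy in Python

fqn = ["example_hub", "my_second_dbt_model"]
if version:  # a truthiness guard silently skips version 0
    fqn.append(f"v{version}")

print(fqn)  # ['example_hub', 'my_second_dbt_model'] -- the v0 suffix is never added
```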
### Relevant log output
_No response_
### Environment
```markdown
- OS:
- Python:
- dbt:
```
### Which database adapter are you using with dbt?
_No response_
### Additional Context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/dbt/contracts/graph/node_args.py`
Content:
```
1 from dataclasses import dataclass, field
2 from datetime import datetime
3 from typing import Optional, List
4
5 from dbt.contracts.graph.unparsed import NodeVersion
6 from dbt.node_types import NodeType, AccessType
7
8
9 @dataclass
10 class ModelNodeArgs:
11 name: str
12 package_name: str
13 identifier: str
14 schema: str
15 database: Optional[str] = None
16 relation_name: Optional[str] = None
17 version: Optional[NodeVersion] = None
18 latest_version: Optional[NodeVersion] = None
19 deprecation_date: Optional[datetime] = None
20 access: Optional[str] = AccessType.Protected.value
21 generated_at: datetime = field(default_factory=datetime.utcnow)
22 depends_on_nodes: List[str] = field(default_factory=list)
23 enabled: bool = True
24
25 @property
26 def unique_id(self) -> str:
27 unique_id = f"{NodeType.Model}.{self.package_name}.{self.name}"
28 if self.version:
29 unique_id = f"{unique_id}.v{self.version}"
30
31 return unique_id
32
33 @property
34 def fqn(self) -> List[str]:
35 fqn = [self.package_name, self.name]
36 if self.version:
37 fqn.append(f"v{self.version}")
38
39 return fqn
40
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/dbt/contracts/graph/node_args.py b/core/dbt/contracts/graph/node_args.py
--- a/core/dbt/contracts/graph/node_args.py
+++ b/core/dbt/contracts/graph/node_args.py
@@ -33,7 +33,8 @@
@property
def fqn(self) -> List[str]:
fqn = [self.package_name, self.name]
- if self.version:
+ # Test for None explicitly because version can be 0
+ if self.version is not None:
fqn.append(f"v{self.version}")
return fqn
| {"golden_diff": "diff --git a/core/dbt/contracts/graph/node_args.py b/core/dbt/contracts/graph/node_args.py\n--- a/core/dbt/contracts/graph/node_args.py\n+++ b/core/dbt/contracts/graph/node_args.py\n@@ -33,7 +33,8 @@\n @property\n def fqn(self) -> List[str]:\n fqn = [self.package_name, self.name]\n- if self.version:\n+ # Test for None explicitly because version can be 0\n+ if self.version is not None:\n fqn.append(f\"v{self.version}\")\n \n return fqn\n", "issue": "[CT-3210] [Bug] Error using `dbt list --select` when there is a cross-project model that is `version=0` in the parent project\n### Is this a new bug in dbt-core?\r\n\r\n- [X] I believe this is a new bug in dbt-core\r\n- [X] I have searched the existing issues, and I could not find an existing issue for this bug\r\n\r\n### Current Behavior\r\n\r\nWhen you attempt to reference a model version 0, you get a stack trace error.\r\n\r\n\r\n\r\n### Expected Behavior\r\n\r\nWe should allow you to set model version to be 0.\r\n\r\n### Steps To Reproduce\r\n\r\n1. On parent/hub project, add a versioned model with `v: 0`\r\n2. On the child/spoke project, attempt to reference that versioned model in a model:\r\n `select * from {{ ref('example_hub', 'my_second_dbt_model', v=0) }}`\r\n3. run `dbt list --select anything`\r\n\r\nOutstanding question - is this only affecting cross-project refs? Or all refs to a model with `v: 0`?\r\n\r\n### Relevant log output\r\n\r\n_No response_\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS:\r\n- Python:\r\n- dbt:\r\n```\r\n\r\n\r\n### Which database adapter are you using with dbt?\r\n\r\n_No response_\r\n\r\n### Additional Context\r\n\r\n_No response_\n", "before_files": [{"content": "from dataclasses import dataclass, field\nfrom datetime import datetime\nfrom typing import Optional, List\n\nfrom dbt.contracts.graph.unparsed import NodeVersion\nfrom dbt.node_types import NodeType, AccessType\n\n\n@dataclass\nclass ModelNodeArgs:\n name: str\n package_name: str\n identifier: str\n schema: str\n database: Optional[str] = None\n relation_name: Optional[str] = None\n version: Optional[NodeVersion] = None\n latest_version: Optional[NodeVersion] = None\n deprecation_date: Optional[datetime] = None\n access: Optional[str] = AccessType.Protected.value\n generated_at: datetime = field(default_factory=datetime.utcnow)\n depends_on_nodes: List[str] = field(default_factory=list)\n enabled: bool = True\n\n @property\n def unique_id(self) -> str:\n unique_id = f\"{NodeType.Model}.{self.package_name}.{self.name}\"\n if self.version:\n unique_id = f\"{unique_id}.v{self.version}\"\n\n return unique_id\n\n @property\n def fqn(self) -> List[str]:\n fqn = [self.package_name, self.name]\n if self.version:\n fqn.append(f\"v{self.version}\")\n\n return fqn\n", "path": "core/dbt/contracts/graph/node_args.py"}], "after_files": [{"content": "from dataclasses import dataclass, field\nfrom datetime import datetime\nfrom typing import Optional, List\n\nfrom dbt.contracts.graph.unparsed import NodeVersion\nfrom dbt.node_types import NodeType, AccessType\n\n\n@dataclass\nclass ModelNodeArgs:\n name: str\n package_name: str\n identifier: str\n schema: str\n database: Optional[str] = None\n relation_name: Optional[str] = None\n version: Optional[NodeVersion] = None\n latest_version: Optional[NodeVersion] = None\n deprecation_date: Optional[datetime] = None\n access: Optional[str] = AccessType.Protected.value\n generated_at: datetime = field(default_factory=datetime.utcnow)\n depends_on_nodes: List[str] = field(default_factory=list)\n 
enabled: bool = True\n\n @property\n def unique_id(self) -> str:\n unique_id = f\"{NodeType.Model}.{self.package_name}.{self.name}\"\n if self.version:\n unique_id = f\"{unique_id}.v{self.version}\"\n\n return unique_id\n\n @property\n def fqn(self) -> List[str]:\n fqn = [self.package_name, self.name]\n # Test for None explicitly because version can be 0\n if self.version is not None:\n fqn.append(f\"v{self.version}\")\n\n return fqn\n", "path": "core/dbt/contracts/graph/node_args.py"}]} | 902 | 132 |
gh_patches_debug_34624 | rasdani/github-patches | git_diff | buildbot__buildbot-4128 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
GitHub source step fails on missing magic `/merge` ref of GitHub
Currently I run into an issue where the change hook fires before GitHub has created the magic `merge` ref for a pull request.
An easy fix would be to retry in this case, or to wait a reasonable amount of time in `startVC()` of the GitHub step if the remote ref ends in `merge`.
Thoughts?
--- END ISSUE ---
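As a rough illustration of the retry idea floated in the report (a generic sketch built on plain `git ls-remote`, not buildbot's actual source-step API; the attempt count and delay are arbitrary):
```python
import subprocess
import time

def wait_for_merge_ref(repo_url, pr_number, attempts=10, delay=5.0):
    """Poll the remote until GitHub has created refs/pull/<n>/merge."""
    ref = "refs/pull/{}/merge".format(pr_number)
    for _ in range(attempts):
        out = subprocess.run(
            ["git", "ls-remote", repo_url, ref],
            capture_output=True, text=True, check=True,
        ).stdout
        if out.strip():  # ls-remote prints "<sha>\t<ref>" once the ref exists
            return True
        time.sleep(delay)
    return False
```
The patch later in this record takes a different route: rather than retrying, it makes the pull-request ref configurable (`pullrequest_ref`, defaulting to `merge`), so a deployment can point the hook at, for example, the `head` ref instead.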
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `master/buildbot/www/hooks/github.py`
Content:
```
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16 from __future__ import absolute_import
17 from __future__ import print_function
18
19 import hmac
20 import json
21 import logging
22 import re
23 from hashlib import sha1
24
25 from dateutil.parser import parse as dateparse
26
27 from twisted.internet import defer
28 from twisted.python import log
29
30 from buildbot.changes.github import PullRequestMixin
31 from buildbot.util import bytes2unicode
32 from buildbot.util import httpclientservice
33 from buildbot.util import unicode2bytes
34 from buildbot.www.hooks.base import BaseHookHandler
35
36 _HEADER_EVENT = b'X-GitHub-Event'
37 _HEADER_SIGNATURE = b'X-Hub-Signature'
38
39 DEFAULT_SKIPS_PATTERN = (r'\[ *skip *ci *\]', r'\[ *ci *skip *\]')
40 DEFAULT_GITHUB_API_URL = 'https://api.github.com'
41
42
43 class GitHubEventHandler(PullRequestMixin):
44
45 def __init__(self, secret, strict,
46 codebase=None,
47 github_property_whitelist=None,
48 master=None,
49 skips=None,
50 github_api_endpoint=None,
51 token=None,
52 debug=False,
53 verify=False):
54 self._secret = secret
55 self._strict = strict
56 self._token = token
57 self._codebase = codebase
58 self.github_property_whitelist = github_property_whitelist
59 self.skips = skips
60 self.github_api_endpoint = github_api_endpoint
61 self.master = master
62 if github_property_whitelist is None:
63 self.github_property_whitelist = []
64 if skips is None:
65 self.skips = DEFAULT_SKIPS_PATTERN
66 if github_api_endpoint is None:
67 self.github_api_endpoint = DEFAULT_GITHUB_API_URL
68
69 if self._strict and not self._secret:
70 raise ValueError('Strict mode is requested '
71 'while no secret is provided')
72 self.debug = debug
73 self.verify = verify
74
75 @defer.inlineCallbacks
76 def process(self, request):
77 payload = self._get_payload(request)
78
79 event_type = request.getHeader(_HEADER_EVENT)
80 event_type = bytes2unicode(event_type)
81 log.msg("X-GitHub-Event: {}".format(
82 event_type), logLevel=logging.DEBUG)
83
84 handler = getattr(self, 'handle_{}'.format(event_type), None)
85
86 if handler is None:
87 raise ValueError('Unknown event: {}'.format(event_type))
88
89 result = yield defer.maybeDeferred(lambda: handler(payload, event_type))
90 defer.returnValue(result)
91
92 def _get_payload(self, request):
93 content = request.content.read()
94 content = bytes2unicode(content)
95
96 signature = request.getHeader(_HEADER_SIGNATURE)
97 signature = bytes2unicode(signature)
98
99 if not signature and self._strict:
100 raise ValueError('Request has no required signature')
101
102 if self._secret and signature:
103 try:
104 hash_type, hexdigest = signature.split('=')
105 except ValueError:
106 raise ValueError(
107 'Wrong signature format: {}'.format(signature))
108
109 if hash_type != 'sha1':
110 raise ValueError('Unknown hash type: {}'.format(hash_type))
111
112 mac = hmac.new(unicode2bytes(self._secret),
113 msg=unicode2bytes(content),
114 digestmod=sha1)
115 # NOTE: hmac.compare_digest should be used, but it's only available
116 # starting Python 2.7.7
117 if mac.hexdigest() != hexdigest:
118 raise ValueError('Hash mismatch')
119
120 content_type = request.getHeader(b'Content-Type')
121
122 if content_type == b'application/json':
123 payload = json.loads(content)
124 elif content_type == b'application/x-www-form-urlencoded':
125 payload = json.loads(bytes2unicode(request.args[b'payload'][0]))
126 else:
127 raise ValueError('Unknown content type: {}'.format(content_type))
128
129 log.msg("Payload: {}".format(payload), logLevel=logging.DEBUG)
130
131 return payload
132
133 def handle_ping(self, _, __):
134 return [], 'git'
135
136 def handle_push(self, payload, event):
137 # This field is unused:
138 user = None
139 # user = payload['pusher']['name']
140 repo = payload['repository']['name']
141 repo_url = payload['repository']['html_url']
142 # NOTE: what would be a reasonable value for project?
143 # project = request.args.get('project', [''])[0]
144 project = payload['repository']['full_name']
145
146 # Inject some additional white-listed event payload properties
147 properties = self.extractProperties(payload)
148 changes = self._process_change(payload, user, repo, repo_url, project,
149 event, properties)
150
151 log.msg("Received {} changes from github".format(len(changes)))
152
153 return changes, 'git'
154
155 @defer.inlineCallbacks
156 def handle_pull_request(self, payload, event):
157 changes = []
158 number = payload['number']
159 refname = 'refs/pull/{}/merge'.format(number)
160 commits = payload['pull_request']['commits']
161 title = payload['pull_request']['title']
162 comments = payload['pull_request']['body']
163 repo_full_name = payload['repository']['full_name']
164 head_sha = payload['pull_request']['head']['sha']
165
166 log.msg('Processing GitHub PR #{}'.format(number),
167 logLevel=logging.DEBUG)
168
169 head_msg = yield self._get_commit_msg(repo_full_name, head_sha)
170 if self._has_skip(head_msg):
171 log.msg("GitHub PR #{}, Ignoring: "
172 "head commit message contains skip pattern".format(number))
173 defer.returnValue(([], 'git'))
174
175 action = payload.get('action')
176 if action not in ('opened', 'reopened', 'synchronize'):
177 log.msg("GitHub PR #{} {}, ignoring".format(number, action))
178 defer.returnValue((changes, 'git'))
179
180 properties = self.extractProperties(payload['pull_request'])
181 properties.update({'event': event})
182 change = {
183 'revision': payload['pull_request']['head']['sha'],
184 'when_timestamp': dateparse(payload['pull_request']['created_at']),
185 'branch': refname,
186 'revlink': payload['pull_request']['_links']['html']['href'],
187 'repository': payload['repository']['html_url'],
188 'project': payload['pull_request']['base']['repo']['full_name'],
189 'category': 'pull',
190 # TODO: Get author name based on login id using txgithub module
191 'author': payload['sender']['login'],
192 'comments': u'GitHub Pull Request #{0} ({1} commit{2})\n{3}\n{4}'.format(
193 number, commits, 's' if commits != 1 else '', title, comments),
194 'properties': properties,
195 }
196
197 if callable(self._codebase):
198 change['codebase'] = self._codebase(payload)
199 elif self._codebase is not None:
200 change['codebase'] = self._codebase
201
202 changes.append(change)
203
204 log.msg("Received {} changes from GitHub PR #{}".format(
205 len(changes), number))
206 defer.returnValue((changes, 'git'))
207
208 @defer.inlineCallbacks
209 def _get_commit_msg(self, repo, sha):
210 '''
211 :param repo: the repo full name, ``{owner}/{project}``.
212 e.g. ``buildbot/buildbot``
213 '''
214 headers = {
215 'User-Agent': 'Buildbot'
216 }
217 if self._token:
218 headers['Authorization'] = 'token ' + self._token
219
220 url = '/repos/{}/commits/{}'.format(repo, sha)
221 http = yield httpclientservice.HTTPClientService.getService(
222 self.master, self.github_api_endpoint, headers=headers,
223 debug=self.debug, verify=self.verify)
224 res = yield http.get(url)
225 data = yield res.json()
226 msg = data['commit']['message']
227 defer.returnValue(msg)
228
229 def _process_change(self, payload, user, repo, repo_url, project, event,
230 properties):
231 """
232 Consumes the JSON as a python object and actually starts the build.
233
234 :arguments:
235 payload
236 Python Object that represents the JSON sent by GitHub Service
237 Hook.
238 """
239 changes = []
240 refname = payload['ref']
241
242 # We only care about regular heads or tags
243 match = re.match(r"^refs/(heads|tags)/(.+)$", refname)
244 if not match:
245 log.msg("Ignoring refname `{}': Not a branch".format(refname))
246 return changes
247 category = None # None is the legacy category for when hook only supported push
248 if match.group(1) == "tags":
249 category = "tag"
250
251 branch = match.group(2)
252 if payload.get('deleted'):
253 log.msg("Branch `{}' deleted, ignoring".format(branch))
254 return changes
255
256 # check skip pattern in commit message. e.g.: [ci skip] and [skip ci]
257 head_msg = payload['head_commit'].get('message', '')
258 if self._has_skip(head_msg):
259 return changes
260 commits = payload['commits']
261 if payload.get('created'):
262 commits = [payload['head_commit']]
263 for commit in commits:
264 files = []
265 for kind in ('added', 'modified', 'removed'):
266 files.extend(commit.get(kind, []))
267
268 when_timestamp = dateparse(commit['timestamp'])
269
270 log.msg("New revision: {}".format(commit['id'][:8]))
271
272 change = {
273 'author': u'{} <{}>'.format(commit['author']['name'],
274 commit['author']['email']),
275 'files': files,
276 'comments': commit['message'],
277 'revision': commit['id'],
278 'when_timestamp': when_timestamp,
279 'branch': branch,
280 'revlink': commit['url'],
281 'repository': repo_url,
282 'project': project,
283 'properties': {
284 'github_distinct': commit.get('distinct', True),
285 'event': event,
286 },
287 'category': category
288 }
289 # Update with any white-listed github event properties
290 change['properties'].update(properties)
291
292 if callable(self._codebase):
293 change['codebase'] = self._codebase(payload)
294 elif self._codebase is not None:
295 change['codebase'] = self._codebase
296
297 changes.append(change)
298
299 return changes
300
301 def _has_skip(self, msg):
302 '''
303 Whether the message contains the skipping keyword or not.
304
305 :return type: Bool
306 '''
307 for skip in self.skips:
308 if re.search(skip, msg):
309 return True
310 return False
311
312 # for GitHub, we do another level of indirection because
313 # we already had documented API that encouraged people to subclass GitHubEventHandler
314 # so we need to be careful not breaking that API.
315
316
317 class GitHubHandler(BaseHookHandler):
318 def __init__(self, master, options):
319 if options is None:
320 options = {}
321 BaseHookHandler.__init__(self, master, options)
322
323 klass = options.get('class', GitHubEventHandler)
324 klass_kwargs = {
325 'master': master,
326 'codebase': options.get('codebase', None),
327 'github_property_whitelist': options.get('github_property_whitelist', None),
328 'skips': options.get('skips', None),
329 'github_api_endpoint': options.get('github_api_endpoint', None) or 'https://api.github.com',
330 'token': options.get('token', None),
331 'debug': options.get('debug', None) or False,
332 'verify': options.get('verify', None) or False,
333 }
334 handler = klass(options.get('secret', None),
335 options.get('strict', False),
336 **klass_kwargs)
337 self.handler = handler
338
339 def getChanges(self, request):
340 return self.handler.process(request)
341
342
343 github = GitHubHandler
344
```
--- END FILES ---
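Since the module's comments stress that `GitHubEventHandler` is the documented extension point (and `GitHubHandler` looks a replacement class up via the `class` option), a small subclassing sketch may help; the override itself is invented purely for illustration:
```python
from buildbot.www.hooks.github import GitHubEventHandler

class QuietGitHubEventHandler(GitHubEventHandler):
    """Example subclass: also skip builds when '[wip]' appears in the head commit."""

    def _has_skip(self, msg):
        if '[wip]' in msg.lower():
            return True
        return super(QuietGitHubEventHandler, self)._has_skip(msg)

# In master.cfg the hook would then be pointed at the subclass, e.g.
# change_hook_dialects={'github': {'class': QuietGitHubEventHandler, ...}}
```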
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/master/buildbot/www/hooks/github.py b/master/buildbot/www/hooks/github.py
--- a/master/buildbot/www/hooks/github.py
+++ b/master/buildbot/www/hooks/github.py
@@ -48,6 +48,7 @@
master=None,
skips=None,
github_api_endpoint=None,
+ pullrequest_ref=None,
token=None,
debug=False,
verify=False):
@@ -55,6 +56,7 @@
self._strict = strict
self._token = token
self._codebase = codebase
+ self.pullrequest_ref = pullrequest_ref
self.github_property_whitelist = github_property_whitelist
self.skips = skips
self.github_api_endpoint = github_api_endpoint
@@ -156,7 +158,7 @@
def handle_pull_request(self, payload, event):
changes = []
number = payload['number']
- refname = 'refs/pull/{}/merge'.format(number)
+ refname = 'refs/pull/{}/{}'.format(number, self.pullrequest_ref)
commits = payload['pull_request']['commits']
title = payload['pull_request']['title']
comments = payload['pull_request']['body']
@@ -327,6 +329,7 @@
'github_property_whitelist': options.get('github_property_whitelist', None),
'skips': options.get('skips', None),
'github_api_endpoint': options.get('github_api_endpoint', None) or 'https://api.github.com',
+ 'pullrequest_ref': options.get('pullrequest_ref', None) or 'merge',
'token': options.get('token', None),
'debug': options.get('debug', None) or False,
'verify': options.get('verify', None) or False,
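A configuration sketch for the new option (the `pullrequest_ref` key comes straight from this patch; the surrounding `change_hook_dialects` wiring follows typical buildbot `master.cfg` usage and is illustrative rather than taken from the repository):
```python
# master.cfg (excerpt)
c['www'] = dict(
    port=8010,
    change_hook_dialects={
        'github': {
            'secret': 'webhook-secret',
            'strict': True,
            # build pull requests from refs/pull/<n>/head, which exists as soon
            # as the PR does, instead of the lazily created refs/pull/<n>/merge
            'pullrequest_ref': 'head',
        }
    },
)
```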
| {"golden_diff": "diff --git a/master/buildbot/www/hooks/github.py b/master/buildbot/www/hooks/github.py\n--- a/master/buildbot/www/hooks/github.py\n+++ b/master/buildbot/www/hooks/github.py\n@@ -48,6 +48,7 @@\n master=None,\n skips=None,\n github_api_endpoint=None,\n+ pullrequest_ref=None,\n token=None,\n debug=False,\n verify=False):\n@@ -55,6 +56,7 @@\n self._strict = strict\n self._token = token\n self._codebase = codebase\n+ self.pullrequest_ref = pullrequest_ref\n self.github_property_whitelist = github_property_whitelist\n self.skips = skips\n self.github_api_endpoint = github_api_endpoint\n@@ -156,7 +158,7 @@\n def handle_pull_request(self, payload, event):\n changes = []\n number = payload['number']\n- refname = 'refs/pull/{}/merge'.format(number)\n+ refname = 'refs/pull/{}/{}'.format(number, self.pullrequest_ref)\n commits = payload['pull_request']['commits']\n title = payload['pull_request']['title']\n comments = payload['pull_request']['body']\n@@ -327,6 +329,7 @@\n 'github_property_whitelist': options.get('github_property_whitelist', None),\n 'skips': options.get('skips', None),\n 'github_api_endpoint': options.get('github_api_endpoint', None) or 'https://api.github.com',\n+ 'pullrequest_ref': options.get('pullrequest_ref', None) or 'merge',\n 'token': options.get('token', None),\n 'debug': options.get('debug', None) or False,\n 'verify': options.get('verify', None) or False,\n", "issue": "GitHub source step fails on missing magic `/merge` ref of GitHub\nCurrently I run into an issue where the changehook is too fast for the magic `merge` ref in GitHub to be created on Pull Requests.\r\n\r\nAn easy fix would be to try again in this case or wait a reasonable amount in `startVC()` of the GitHub step if the remote ref ends in `merge`.\r\n\r\nThoughts?\r\n\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nfrom __future__ import absolute_import\nfrom __future__ import print_function\n\nimport hmac\nimport json\nimport logging\nimport re\nfrom hashlib import sha1\n\nfrom dateutil.parser import parse as dateparse\n\nfrom twisted.internet import defer\nfrom twisted.python import log\n\nfrom buildbot.changes.github import PullRequestMixin\nfrom buildbot.util import bytes2unicode\nfrom buildbot.util import httpclientservice\nfrom buildbot.util import unicode2bytes\nfrom buildbot.www.hooks.base import BaseHookHandler\n\n_HEADER_EVENT = b'X-GitHub-Event'\n_HEADER_SIGNATURE = b'X-Hub-Signature'\n\nDEFAULT_SKIPS_PATTERN = (r'\\[ *skip *ci *\\]', r'\\[ *ci *skip *\\]')\nDEFAULT_GITHUB_API_URL = 'https://api.github.com'\n\n\nclass GitHubEventHandler(PullRequestMixin):\n\n def __init__(self, secret, strict,\n codebase=None,\n github_property_whitelist=None,\n master=None,\n skips=None,\n github_api_endpoint=None,\n token=None,\n debug=False,\n verify=False):\n self._secret = secret\n self._strict = strict\n self._token = token\n self._codebase = codebase\n self.github_property_whitelist = github_property_whitelist\n self.skips = skips\n self.github_api_endpoint = github_api_endpoint\n self.master = master\n if github_property_whitelist is None:\n self.github_property_whitelist = []\n if skips is None:\n self.skips = DEFAULT_SKIPS_PATTERN\n if github_api_endpoint is None:\n self.github_api_endpoint = DEFAULT_GITHUB_API_URL\n\n if self._strict and not self._secret:\n raise ValueError('Strict mode is requested '\n 'while no secret is provided')\n self.debug = debug\n self.verify = verify\n\n @defer.inlineCallbacks\n def process(self, request):\n payload = self._get_payload(request)\n\n event_type = request.getHeader(_HEADER_EVENT)\n event_type = bytes2unicode(event_type)\n log.msg(\"X-GitHub-Event: {}\".format(\n event_type), logLevel=logging.DEBUG)\n\n handler = getattr(self, 'handle_{}'.format(event_type), None)\n\n if handler is None:\n raise ValueError('Unknown event: {}'.format(event_type))\n\n result = yield defer.maybeDeferred(lambda: handler(payload, event_type))\n defer.returnValue(result)\n\n def _get_payload(self, request):\n content = request.content.read()\n content = bytes2unicode(content)\n\n signature = request.getHeader(_HEADER_SIGNATURE)\n signature = bytes2unicode(signature)\n\n if not signature and self._strict:\n raise ValueError('Request has no required signature')\n\n if self._secret and signature:\n try:\n hash_type, hexdigest = signature.split('=')\n except ValueError:\n raise ValueError(\n 'Wrong signature format: {}'.format(signature))\n\n if hash_type != 'sha1':\n raise ValueError('Unknown hash type: {}'.format(hash_type))\n\n mac = hmac.new(unicode2bytes(self._secret),\n msg=unicode2bytes(content),\n digestmod=sha1)\n # NOTE: hmac.compare_digest should be used, but it's only available\n # starting Python 2.7.7\n if mac.hexdigest() != hexdigest:\n raise ValueError('Hash mismatch')\n\n content_type = request.getHeader(b'Content-Type')\n\n if content_type == b'application/json':\n payload = json.loads(content)\n elif content_type == b'application/x-www-form-urlencoded':\n payload = json.loads(bytes2unicode(request.args[b'payload'][0]))\n else:\n raise ValueError('Unknown 
content type: {}'.format(content_type))\n\n log.msg(\"Payload: {}\".format(payload), logLevel=logging.DEBUG)\n\n return payload\n\n def handle_ping(self, _, __):\n return [], 'git'\n\n def handle_push(self, payload, event):\n # This field is unused:\n user = None\n # user = payload['pusher']['name']\n repo = payload['repository']['name']\n repo_url = payload['repository']['html_url']\n # NOTE: what would be a reasonable value for project?\n # project = request.args.get('project', [''])[0]\n project = payload['repository']['full_name']\n\n # Inject some additional white-listed event payload properties\n properties = self.extractProperties(payload)\n changes = self._process_change(payload, user, repo, repo_url, project,\n event, properties)\n\n log.msg(\"Received {} changes from github\".format(len(changes)))\n\n return changes, 'git'\n\n @defer.inlineCallbacks\n def handle_pull_request(self, payload, event):\n changes = []\n number = payload['number']\n refname = 'refs/pull/{}/merge'.format(number)\n commits = payload['pull_request']['commits']\n title = payload['pull_request']['title']\n comments = payload['pull_request']['body']\n repo_full_name = payload['repository']['full_name']\n head_sha = payload['pull_request']['head']['sha']\n\n log.msg('Processing GitHub PR #{}'.format(number),\n logLevel=logging.DEBUG)\n\n head_msg = yield self._get_commit_msg(repo_full_name, head_sha)\n if self._has_skip(head_msg):\n log.msg(\"GitHub PR #{}, Ignoring: \"\n \"head commit message contains skip pattern\".format(number))\n defer.returnValue(([], 'git'))\n\n action = payload.get('action')\n if action not in ('opened', 'reopened', 'synchronize'):\n log.msg(\"GitHub PR #{} {}, ignoring\".format(number, action))\n defer.returnValue((changes, 'git'))\n\n properties = self.extractProperties(payload['pull_request'])\n properties.update({'event': event})\n change = {\n 'revision': payload['pull_request']['head']['sha'],\n 'when_timestamp': dateparse(payload['pull_request']['created_at']),\n 'branch': refname,\n 'revlink': payload['pull_request']['_links']['html']['href'],\n 'repository': payload['repository']['html_url'],\n 'project': payload['pull_request']['base']['repo']['full_name'],\n 'category': 'pull',\n # TODO: Get author name based on login id using txgithub module\n 'author': payload['sender']['login'],\n 'comments': u'GitHub Pull Request #{0} ({1} commit{2})\\n{3}\\n{4}'.format(\n number, commits, 's' if commits != 1 else '', title, comments),\n 'properties': properties,\n }\n\n if callable(self._codebase):\n change['codebase'] = self._codebase(payload)\n elif self._codebase is not None:\n change['codebase'] = self._codebase\n\n changes.append(change)\n\n log.msg(\"Received {} changes from GitHub PR #{}\".format(\n len(changes), number))\n defer.returnValue((changes, 'git'))\n\n @defer.inlineCallbacks\n def _get_commit_msg(self, repo, sha):\n '''\n :param repo: the repo full name, ``{owner}/{project}``.\n e.g. 
``buildbot/buildbot``\n '''\n headers = {\n 'User-Agent': 'Buildbot'\n }\n if self._token:\n headers['Authorization'] = 'token ' + self._token\n\n url = '/repos/{}/commits/{}'.format(repo, sha)\n http = yield httpclientservice.HTTPClientService.getService(\n self.master, self.github_api_endpoint, headers=headers,\n debug=self.debug, verify=self.verify)\n res = yield http.get(url)\n data = yield res.json()\n msg = data['commit']['message']\n defer.returnValue(msg)\n\n def _process_change(self, payload, user, repo, repo_url, project, event,\n properties):\n \"\"\"\n Consumes the JSON as a python object and actually starts the build.\n\n :arguments:\n payload\n Python Object that represents the JSON sent by GitHub Service\n Hook.\n \"\"\"\n changes = []\n refname = payload['ref']\n\n # We only care about regular heads or tags\n match = re.match(r\"^refs/(heads|tags)/(.+)$\", refname)\n if not match:\n log.msg(\"Ignoring refname `{}': Not a branch\".format(refname))\n return changes\n category = None # None is the legacy category for when hook only supported push\n if match.group(1) == \"tags\":\n category = \"tag\"\n\n branch = match.group(2)\n if payload.get('deleted'):\n log.msg(\"Branch `{}' deleted, ignoring\".format(branch))\n return changes\n\n # check skip pattern in commit message. e.g.: [ci skip] and [skip ci]\n head_msg = payload['head_commit'].get('message', '')\n if self._has_skip(head_msg):\n return changes\n commits = payload['commits']\n if payload.get('created'):\n commits = [payload['head_commit']]\n for commit in commits:\n files = []\n for kind in ('added', 'modified', 'removed'):\n files.extend(commit.get(kind, []))\n\n when_timestamp = dateparse(commit['timestamp'])\n\n log.msg(\"New revision: {}\".format(commit['id'][:8]))\n\n change = {\n 'author': u'{} <{}>'.format(commit['author']['name'],\n commit['author']['email']),\n 'files': files,\n 'comments': commit['message'],\n 'revision': commit['id'],\n 'when_timestamp': when_timestamp,\n 'branch': branch,\n 'revlink': commit['url'],\n 'repository': repo_url,\n 'project': project,\n 'properties': {\n 'github_distinct': commit.get('distinct', True),\n 'event': event,\n },\n 'category': category\n }\n # Update with any white-listed github event properties\n change['properties'].update(properties)\n\n if callable(self._codebase):\n change['codebase'] = self._codebase(payload)\n elif self._codebase is not None:\n change['codebase'] = self._codebase\n\n changes.append(change)\n\n return changes\n\n def _has_skip(self, msg):\n '''\n The message contains the skipping keyword no not.\n\n :return type: Bool\n '''\n for skip in self.skips:\n if re.search(skip, msg):\n return True\n return False\n\n# for GitHub, we do another level of indirection because\n# we already had documented API that encouraged people to subclass GitHubEventHandler\n# so we need to be careful not breaking that API.\n\n\nclass GitHubHandler(BaseHookHandler):\n def __init__(self, master, options):\n if options is None:\n options = {}\n BaseHookHandler.__init__(self, master, options)\n\n klass = options.get('class', GitHubEventHandler)\n klass_kwargs = {\n 'master': master,\n 'codebase': options.get('codebase', None),\n 'github_property_whitelist': options.get('github_property_whitelist', None),\n 'skips': options.get('skips', None),\n 'github_api_endpoint': options.get('github_api_endpoint', None) or 'https://api.github.com',\n 'token': options.get('token', None),\n 'debug': options.get('debug', None) or False,\n 'verify': options.get('verify', None) or 
False,\n }\n handler = klass(options.get('secret', None),\n options.get('strict', False),\n **klass_kwargs)\n self.handler = handler\n\n def getChanges(self, request):\n return self.handler.process(request)\n\n\ngithub = GitHubHandler\n", "path": "master/buildbot/www/hooks/github.py"}], "after_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nfrom __future__ import absolute_import\nfrom __future__ import print_function\n\nimport hmac\nimport json\nimport logging\nimport re\nfrom hashlib import sha1\n\nfrom dateutil.parser import parse as dateparse\n\nfrom twisted.internet import defer\nfrom twisted.python import log\n\nfrom buildbot.changes.github import PullRequestMixin\nfrom buildbot.util import bytes2unicode\nfrom buildbot.util import httpclientservice\nfrom buildbot.util import unicode2bytes\nfrom buildbot.www.hooks.base import BaseHookHandler\n\n_HEADER_EVENT = b'X-GitHub-Event'\n_HEADER_SIGNATURE = b'X-Hub-Signature'\n\nDEFAULT_SKIPS_PATTERN = (r'\\[ *skip *ci *\\]', r'\\[ *ci *skip *\\]')\nDEFAULT_GITHUB_API_URL = 'https://api.github.com'\n\n\nclass GitHubEventHandler(PullRequestMixin):\n\n def __init__(self, secret, strict,\n codebase=None,\n github_property_whitelist=None,\n master=None,\n skips=None,\n github_api_endpoint=None,\n pullrequest_ref=None,\n token=None,\n debug=False,\n verify=False):\n self._secret = secret\n self._strict = strict\n self._token = token\n self._codebase = codebase\n self.pullrequest_ref = pullrequest_ref\n self.github_property_whitelist = github_property_whitelist\n self.skips = skips\n self.github_api_endpoint = github_api_endpoint\n self.master = master\n if github_property_whitelist is None:\n self.github_property_whitelist = []\n if skips is None:\n self.skips = DEFAULT_SKIPS_PATTERN\n if github_api_endpoint is None:\n self.github_api_endpoint = DEFAULT_GITHUB_API_URL\n\n if self._strict and not self._secret:\n raise ValueError('Strict mode is requested '\n 'while no secret is provided')\n self.debug = debug\n self.verify = verify\n\n @defer.inlineCallbacks\n def process(self, request):\n payload = self._get_payload(request)\n\n event_type = request.getHeader(_HEADER_EVENT)\n event_type = bytes2unicode(event_type)\n log.msg(\"X-GitHub-Event: {}\".format(\n event_type), logLevel=logging.DEBUG)\n\n handler = getattr(self, 'handle_{}'.format(event_type), None)\n\n if handler is None:\n raise ValueError('Unknown event: {}'.format(event_type))\n\n result = yield defer.maybeDeferred(lambda: handler(payload, event_type))\n defer.returnValue(result)\n\n def _get_payload(self, request):\n content = request.content.read()\n content = bytes2unicode(content)\n\n signature = request.getHeader(_HEADER_SIGNATURE)\n signature = bytes2unicode(signature)\n\n if not signature and self._strict:\n raise ValueError('Request has no required signature')\n\n if self._secret and signature:\n 
try:\n hash_type, hexdigest = signature.split('=')\n except ValueError:\n raise ValueError(\n 'Wrong signature format: {}'.format(signature))\n\n if hash_type != 'sha1':\n raise ValueError('Unknown hash type: {}'.format(hash_type))\n\n mac = hmac.new(unicode2bytes(self._secret),\n msg=unicode2bytes(content),\n digestmod=sha1)\n # NOTE: hmac.compare_digest should be used, but it's only available\n # starting Python 2.7.7\n if mac.hexdigest() != hexdigest:\n raise ValueError('Hash mismatch')\n\n content_type = request.getHeader(b'Content-Type')\n\n if content_type == b'application/json':\n payload = json.loads(content)\n elif content_type == b'application/x-www-form-urlencoded':\n payload = json.loads(bytes2unicode(request.args[b'payload'][0]))\n else:\n raise ValueError('Unknown content type: {}'.format(content_type))\n\n log.msg(\"Payload: {}\".format(payload), logLevel=logging.DEBUG)\n\n return payload\n\n def handle_ping(self, _, __):\n return [], 'git'\n\n def handle_push(self, payload, event):\n # This field is unused:\n user = None\n # user = payload['pusher']['name']\n repo = payload['repository']['name']\n repo_url = payload['repository']['html_url']\n # NOTE: what would be a reasonable value for project?\n # project = request.args.get('project', [''])[0]\n project = payload['repository']['full_name']\n\n # Inject some additional white-listed event payload properties\n properties = self.extractProperties(payload)\n changes = self._process_change(payload, user, repo, repo_url, project,\n event, properties)\n\n log.msg(\"Received {} changes from github\".format(len(changes)))\n\n return changes, 'git'\n\n @defer.inlineCallbacks\n def handle_pull_request(self, payload, event):\n changes = []\n number = payload['number']\n refname = 'refs/pull/{}/{}'.format(number, self.pullrequest_ref)\n commits = payload['pull_request']['commits']\n title = payload['pull_request']['title']\n comments = payload['pull_request']['body']\n repo_full_name = payload['repository']['full_name']\n head_sha = payload['pull_request']['head']['sha']\n\n log.msg('Processing GitHub PR #{}'.format(number),\n logLevel=logging.DEBUG)\n\n head_msg = yield self._get_commit_msg(repo_full_name, head_sha)\n if self._has_skip(head_msg):\n log.msg(\"GitHub PR #{}, Ignoring: \"\n \"head commit message contains skip pattern\".format(number))\n defer.returnValue(([], 'git'))\n\n action = payload.get('action')\n if action not in ('opened', 'reopened', 'synchronize'):\n log.msg(\"GitHub PR #{} {}, ignoring\".format(number, action))\n defer.returnValue((changes, 'git'))\n\n properties = self.extractProperties(payload['pull_request'])\n properties.update({'event': event})\n change = {\n 'revision': payload['pull_request']['head']['sha'],\n 'when_timestamp': dateparse(payload['pull_request']['created_at']),\n 'branch': refname,\n 'revlink': payload['pull_request']['_links']['html']['href'],\n 'repository': payload['repository']['html_url'],\n 'project': payload['pull_request']['base']['repo']['full_name'],\n 'category': 'pull',\n # TODO: Get author name based on login id using txgithub module\n 'author': payload['sender']['login'],\n 'comments': u'GitHub Pull Request #{0} ({1} commit{2})\\n{3}\\n{4}'.format(\n number, commits, 's' if commits != 1 else '', title, comments),\n 'properties': properties,\n }\n\n if callable(self._codebase):\n change['codebase'] = self._codebase(payload)\n elif self._codebase is not None:\n change['codebase'] = self._codebase\n\n changes.append(change)\n\n log.msg(\"Received {} changes from GitHub PR 
#{}\".format(\n len(changes), number))\n defer.returnValue((changes, 'git'))\n\n @defer.inlineCallbacks\n def _get_commit_msg(self, repo, sha):\n '''\n :param repo: the repo full name, ``{owner}/{project}``.\n e.g. ``buildbot/buildbot``\n '''\n headers = {\n 'User-Agent': 'Buildbot'\n }\n if self._token:\n headers['Authorization'] = 'token ' + self._token\n\n url = '/repos/{}/commits/{}'.format(repo, sha)\n http = yield httpclientservice.HTTPClientService.getService(\n self.master, self.github_api_endpoint, headers=headers,\n debug=self.debug, verify=self.verify)\n res = yield http.get(url)\n data = yield res.json()\n msg = data['commit']['message']\n defer.returnValue(msg)\n\n def _process_change(self, payload, user, repo, repo_url, project, event,\n properties):\n \"\"\"\n Consumes the JSON as a python object and actually starts the build.\n\n :arguments:\n payload\n Python Object that represents the JSON sent by GitHub Service\n Hook.\n \"\"\"\n changes = []\n refname = payload['ref']\n\n # We only care about regular heads or tags\n match = re.match(r\"^refs/(heads|tags)/(.+)$\", refname)\n if not match:\n log.msg(\"Ignoring refname `{}': Not a branch\".format(refname))\n return changes\n category = None # None is the legacy category for when hook only supported push\n if match.group(1) == \"tags\":\n category = \"tag\"\n\n branch = match.group(2)\n if payload.get('deleted'):\n log.msg(\"Branch `{}' deleted, ignoring\".format(branch))\n return changes\n\n # check skip pattern in commit message. e.g.: [ci skip] and [skip ci]\n head_msg = payload['head_commit'].get('message', '')\n if self._has_skip(head_msg):\n return changes\n commits = payload['commits']\n if payload.get('created'):\n commits = [payload['head_commit']]\n for commit in commits:\n files = []\n for kind in ('added', 'modified', 'removed'):\n files.extend(commit.get(kind, []))\n\n when_timestamp = dateparse(commit['timestamp'])\n\n log.msg(\"New revision: {}\".format(commit['id'][:8]))\n\n change = {\n 'author': u'{} <{}>'.format(commit['author']['name'],\n commit['author']['email']),\n 'files': files,\n 'comments': commit['message'],\n 'revision': commit['id'],\n 'when_timestamp': when_timestamp,\n 'branch': branch,\n 'revlink': commit['url'],\n 'repository': repo_url,\n 'project': project,\n 'properties': {\n 'github_distinct': commit.get('distinct', True),\n 'event': event,\n },\n 'category': category\n }\n # Update with any white-listed github event properties\n change['properties'].update(properties)\n\n if callable(self._codebase):\n change['codebase'] = self._codebase(payload)\n elif self._codebase is not None:\n change['codebase'] = self._codebase\n\n changes.append(change)\n\n return changes\n\n def _has_skip(self, msg):\n '''\n The message contains the skipping keyword no not.\n\n :return type: Bool\n '''\n for skip in self.skips:\n if re.search(skip, msg):\n return True\n return False\n\n# for GitHub, we do another level of indirection because\n# we already had documented API that encouraged people to subclass GitHubEventHandler\n# so we need to be careful not breaking that API.\n\n\nclass GitHubHandler(BaseHookHandler):\n def __init__(self, master, options):\n if options is None:\n options = {}\n BaseHookHandler.__init__(self, master, options)\n\n klass = options.get('class', GitHubEventHandler)\n klass_kwargs = {\n 'master': master,\n 'codebase': options.get('codebase', None),\n 'github_property_whitelist': options.get('github_property_whitelist', None),\n 'skips': options.get('skips', None),\n 
'github_api_endpoint': options.get('github_api_endpoint', None) or 'https://api.github.com',\n 'pullrequest_ref': options.get('pullrequest_ref', None) or 'merge',\n 'token': options.get('token', None),\n 'debug': options.get('debug', None) or False,\n 'verify': options.get('verify', None) or False,\n }\n handler = klass(options.get('secret', None),\n options.get('strict', False),\n **klass_kwargs)\n self.handler = handler\n\n def getChanges(self, request):\n return self.handler.process(request)\n\n\ngithub = GitHubHandler\n", "path": "master/buildbot/www/hooks/github.py"}]} | 3,948 | 387 |
gh_patches_debug_2955 | rasdani/github-patches | git_diff | facebookresearch__hydra-2729 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CI failing: `./tools/configen/configen/utils.py:4:1: F401 'typing.Tuple' imported but unused`
```
./tools/configen/configen/utils.py:4:1: F401 'typing.Tuple' imported but unused
nox > [2023-07-24 22:16:52,631] Command flake8 --config .flake8 failed with exit code 1
nox > [2023-07-24 22:16:52,632] Session lint-3.10 failed.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/configen/configen/utils.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 import sys
3 from enum import Enum
4 from typing import Any, Dict, Iterable, List, Optional, Set, Tuple
5
6 from omegaconf._utils import (
7 _resolve_optional,
8 get_dict_key_value_types,
9 get_list_element_type,
10 is_dict_annotation,
11 is_list_annotation,
12 is_primitive_type_annotation,
13 )
14
15
16 # borrowed from OmegaConf
17 def type_str(t: Any) -> str:
18 is_optional, t = _resolve_optional(t)
19 if t is None:
20 return type(t).__name__
21 if t is Any:
22 return "Any"
23 if t is ...:
24 return "..."
25
26 if sys.version_info < (3, 7, 0): # pragma: no cover
27 # Python 3.6
28 if hasattr(t, "__name__"):
29 name = str(t.__name__)
30 else:
31 if t.__origin__ is not None:
32 name = type_str(t.__origin__)
33 else:
34 name = str(t)
35 if name.startswith("typing."):
36 name = name[len("typing.") :]
37 else: # pragma: no cover
38 # Python >= 3.7
39 if hasattr(t, "__name__"):
40 name = str(t.__name__)
41 else:
42 if t._name is None:
43 if t.__origin__ is not None:
44 name = type_str(t.__origin__)
45 else:
46 name = str(t._name)
47
48 args = getattr(t, "__args__", None)
49 if args is not None:
50 args = ", ".join(type_str(t) for t in t.__args__)
51 ret = f"{name}[{args}]"
52 else:
53 ret = name
54 if is_optional:
55 return f"Optional[{ret}]"
56 else:
57 return ret
58
59
60 def is_tuple_annotation(type_: Any) -> bool:
61 origin = getattr(type_, "__origin__", None)
62 return origin is tuple
63
64
65 def convert_imports(imports: Set[Any], string_imports: Iterable[str]) -> List[str]:
66 tmp = set()
67 for imp in string_imports:
68 tmp.add(imp)
69 for t in imports:
70 s = None
71 origin = getattr(t, "__origin__", None)
72 if t is Any:
73 classname = "Any"
74 elif t is Optional:
75 classname = "Optional"
76 else:
77 if origin is list:
78 classname = "List"
79 elif origin is tuple:
80 classname = "Tuple"
81 elif origin is dict:
82 classname = "Dict"
83 else:
84 classname = t.__name__
85
86 if not is_primitive_type_annotation(t) or issubclass(t, Enum):
87 s = f"from {t.__module__} import {classname}"
88
89 if s is not None:
90 tmp.add(s)
91 return sorted(list(tmp))
92
93
94 def collect_imports(imports: Set[Any], type_: Any) -> None:
95 if is_list_annotation(type_):
96 collect_imports(imports, get_list_element_type(type_))
97 type_ = List
98 elif is_dict_annotation(type_):
99 kvt = get_dict_key_value_types(type_)
100 collect_imports(imports, kvt[0])
101 collect_imports(imports, kvt[1])
102 type_ = Dict
103 else:
104 is_optional = _resolve_optional(type_)[0]
105 if is_optional and type_ is not Any:
106 type_ = Optional
107 imports.add(type_)
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/configen/configen/utils.py b/tools/configen/configen/utils.py
--- a/tools/configen/configen/utils.py
+++ b/tools/configen/configen/utils.py
@@ -1,7 +1,7 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import sys
from enum import Enum
-from typing import Any, Dict, Iterable, List, Optional, Set, Tuple
+from typing import Any, Dict, Iterable, List, Optional, Set
from omegaconf._utils import (
_resolve_optional,
| {"golden_diff": "diff --git a/tools/configen/configen/utils.py b/tools/configen/configen/utils.py\n--- a/tools/configen/configen/utils.py\n+++ b/tools/configen/configen/utils.py\n@@ -1,7 +1,7 @@\n # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n import sys\n from enum import Enum\n-from typing import Any, Dict, Iterable, List, Optional, Set, Tuple\n+from typing import Any, Dict, Iterable, List, Optional, Set\n \n from omegaconf._utils import (\n _resolve_optional,\n", "issue": "CI failing: `./tools/configen/configen/utils.py:4:1: F401 'typing.Tuple' imported but unused`\n```\r\n./tools/configen/configen/utils.py:4:1: F401 'typing.Tuple' imported but unused\r\nnox > [2023-07-24 22:16:52,631] Command flake8 --config .flake8 failed with exit code 1\r\nnox > [2023-07-24 22:16:52,632] Session lint-3.10 failed.\r\n```\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport sys\nfrom enum import Enum\nfrom typing import Any, Dict, Iterable, List, Optional, Set, Tuple\n\nfrom omegaconf._utils import (\n _resolve_optional,\n get_dict_key_value_types,\n get_list_element_type,\n is_dict_annotation,\n is_list_annotation,\n is_primitive_type_annotation,\n)\n\n\n# borrowed from OmegaConf\ndef type_str(t: Any) -> str:\n is_optional, t = _resolve_optional(t)\n if t is None:\n return type(t).__name__\n if t is Any:\n return \"Any\"\n if t is ...:\n return \"...\"\n\n if sys.version_info < (3, 7, 0): # pragma: no cover\n # Python 3.6\n if hasattr(t, \"__name__\"):\n name = str(t.__name__)\n else:\n if t.__origin__ is not None:\n name = type_str(t.__origin__)\n else:\n name = str(t)\n if name.startswith(\"typing.\"):\n name = name[len(\"typing.\") :]\n else: # pragma: no cover\n # Python >= 3.7\n if hasattr(t, \"__name__\"):\n name = str(t.__name__)\n else:\n if t._name is None:\n if t.__origin__ is not None:\n name = type_str(t.__origin__)\n else:\n name = str(t._name)\n\n args = getattr(t, \"__args__\", None)\n if args is not None:\n args = \", \".join(type_str(t) for t in t.__args__)\n ret = f\"{name}[{args}]\"\n else:\n ret = name\n if is_optional:\n return f\"Optional[{ret}]\"\n else:\n return ret\n\n\ndef is_tuple_annotation(type_: Any) -> bool:\n origin = getattr(type_, \"__origin__\", None)\n return origin is tuple\n\n\ndef convert_imports(imports: Set[Any], string_imports: Iterable[str]) -> List[str]:\n tmp = set()\n for imp in string_imports:\n tmp.add(imp)\n for t in imports:\n s = None\n origin = getattr(t, \"__origin__\", None)\n if t is Any:\n classname = \"Any\"\n elif t is Optional:\n classname = \"Optional\"\n else:\n if origin is list:\n classname = \"List\"\n elif origin is tuple:\n classname = \"Tuple\"\n elif origin is dict:\n classname = \"Dict\"\n else:\n classname = t.__name__\n\n if not is_primitive_type_annotation(t) or issubclass(t, Enum):\n s = f\"from {t.__module__} import {classname}\"\n\n if s is not None:\n tmp.add(s)\n return sorted(list(tmp))\n\n\ndef collect_imports(imports: Set[Any], type_: Any) -> None:\n if is_list_annotation(type_):\n collect_imports(imports, get_list_element_type(type_))\n type_ = List\n elif is_dict_annotation(type_):\n kvt = get_dict_key_value_types(type_)\n collect_imports(imports, kvt[0])\n collect_imports(imports, kvt[1])\n type_ = Dict\n else:\n is_optional = _resolve_optional(type_)[0]\n if is_optional and type_ is not Any:\n type_ = Optional\n imports.add(type_)\n", "path": "tools/configen/configen/utils.py"}], "after_files": [{"content": "# Copyright (c) 
Facebook, Inc. and its affiliates. All Rights Reserved\nimport sys\nfrom enum import Enum\nfrom typing import Any, Dict, Iterable, List, Optional, Set\n\nfrom omegaconf._utils import (\n _resolve_optional,\n get_dict_key_value_types,\n get_list_element_type,\n is_dict_annotation,\n is_list_annotation,\n is_primitive_type_annotation,\n)\n\n\n# borrowed from OmegaConf\ndef type_str(t: Any) -> str:\n is_optional, t = _resolve_optional(t)\n if t is None:\n return type(t).__name__\n if t is Any:\n return \"Any\"\n if t is ...:\n return \"...\"\n\n if sys.version_info < (3, 7, 0): # pragma: no cover\n # Python 3.6\n if hasattr(t, \"__name__\"):\n name = str(t.__name__)\n else:\n if t.__origin__ is not None:\n name = type_str(t.__origin__)\n else:\n name = str(t)\n if name.startswith(\"typing.\"):\n name = name[len(\"typing.\") :]\n else: # pragma: no cover\n # Python >= 3.7\n if hasattr(t, \"__name__\"):\n name = str(t.__name__)\n else:\n if t._name is None:\n if t.__origin__ is not None:\n name = type_str(t.__origin__)\n else:\n name = str(t._name)\n\n args = getattr(t, \"__args__\", None)\n if args is not None:\n args = \", \".join(type_str(t) for t in t.__args__)\n ret = f\"{name}[{args}]\"\n else:\n ret = name\n if is_optional:\n return f\"Optional[{ret}]\"\n else:\n return ret\n\n\ndef is_tuple_annotation(type_: Any) -> bool:\n origin = getattr(type_, \"__origin__\", None)\n return origin is tuple\n\n\ndef convert_imports(imports: Set[Any], string_imports: Iterable[str]) -> List[str]:\n tmp = set()\n for imp in string_imports:\n tmp.add(imp)\n for t in imports:\n s = None\n origin = getattr(t, \"__origin__\", None)\n if t is Any:\n classname = \"Any\"\n elif t is Optional:\n classname = \"Optional\"\n else:\n if origin is list:\n classname = \"List\"\n elif origin is tuple:\n classname = \"Tuple\"\n elif origin is dict:\n classname = \"Dict\"\n else:\n classname = t.__name__\n\n if not is_primitive_type_annotation(t) or issubclass(t, Enum):\n s = f\"from {t.__module__} import {classname}\"\n\n if s is not None:\n tmp.add(s)\n return sorted(list(tmp))\n\n\ndef collect_imports(imports: Set[Any], type_: Any) -> None:\n if is_list_annotation(type_):\n collect_imports(imports, get_list_element_type(type_))\n type_ = List\n elif is_dict_annotation(type_):\n kvt = get_dict_key_value_types(type_)\n collect_imports(imports, kvt[0])\n collect_imports(imports, kvt[1])\n type_ = Dict\n else:\n is_optional = _resolve_optional(type_)[0]\n if is_optional and type_ is not Any:\n type_ = Optional\n imports.add(type_)\n", "path": "tools/configen/configen/utils.py"}]} | 1,368 | 120 |
gh_patches_debug_37823 | rasdani/github-patches | git_diff | nipy__nipype-3194 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NiftyReg interface raises ValueError: invalid version number
### Summary
Version number is "invalid" when I try to create a `RegAladin` node.
### Actual behavior
```python-traceback
C:\Users\fernando\tmp
(vesseg) λ python r.py
180612-15:47:29,796 interface WARNING:
version_from_command member of CommandLine was Deprecated in nipype-1.0.0 and deleted in 1.1.0
Traceback (most recent call last):
File "r.py", line 7, in <module>
aladin = niftyreg.RegAladin()
File "C:\Users\fernando\Miniconda3\envs\vesseg\lib\site-packages\nipype\interfaces\niftyreg\base.py", line 66, in __init__
StrictVersion(_version) < StrictVersion(self._min_version):
File "C:\Users\fernando\Miniconda3\envs\vesseg\lib\distutils\version.py", line 40, in __init__
self.parse(vstring)
File "C:\Users\fernando\Miniconda3\envs\vesseg\lib\distutils\version.py", line 137, in parse
raise ValueError("invalid version number '%s'" % vstring)
ValueError: invalid version number '1.5.58
'
```
I have temporarily hacked this by replacing [this line](https://github.com/nipy/nipype/blob/master/nipype/interfaces/niftyreg/base.py#L59):
```python
_version = self.version_from_command()
```
by:
```python
_version = bytes('1.5.58', "utf-8")
```
### How to replicate the behavior
Run
```python
from nipype.interfaces import niftyreg
aladin = niftyreg.RegAladin()
```
### Platform details:
```python
In [1]: import nipype
In [2]: nipype.get_info()
Out[2]:
{'pkg_path': 'C:\\Users\\fernando\\Miniconda3\\envs\\vesseg\\lib\\site-packages\\nipype',
'commit_source': 'archive substitution',
'commit_hash': '%h',
'nipype_version': '1.0.4',
'sys_version': '3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)]',
'sys_executable': 'C:\\Users\\fernando\\Miniconda3\\envs\\vesseg\\python.exe',
'sys_platform': 'win32',
'numpy_version': '1.14.3',
'scipy_version': '1.1.0',
'networkx_version': '2.1',
'nibabel_version': '2.2.1',
'traits_version': '4.6.0'}
In [3]: nipype.__version__
Out[3]: '1.0.4'
```
### Execution environment
I'm not sure what this means. I'm running Python from a `conda` environment.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nipype/interfaces/niftyreg/base.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
3 # vi: set ft=python sts=4 ts=4 sw=4 et:
4 """
5 The niftyreg module provides classes for interfacing with `niftyreg
6 <http://sourceforge.net/projects/niftyreg/>`_ command line tools.
7
8 These are the base tools for working with niftyreg.
9
10 Registration tools are found in niftyreg/reg.py
11 Every other tool is found in niftyreg/regutils.py
12
13 Examples
14 --------
15 See the docstrings of the individual classes for examples.
16
17 """
18 from distutils.version import StrictVersion
19 import os
20
21 from ... import logging
22 from ..base import CommandLine, CommandLineInputSpec, traits, Undefined
23 from ...utils.filemanip import split_filename
24
25 iflogger = logging.getLogger("nipype.interface")
26
27
28 def get_custom_path(command, env_dir="NIFTYREGDIR"):
29 return os.path.join(os.getenv(env_dir, ""), command)
30
31
32 class NiftyRegCommandInputSpec(CommandLineInputSpec):
33 """Input Spec for niftyreg interfaces."""
34
35 # Set the number of omp thread to use
36 omp_core_val = traits.Int(
37 int(os.environ.get("OMP_NUM_THREADS", "1")),
38 desc="Number of openmp thread to use",
39 argstr="-omp %i",
40 usedefault=True,
41 )
42
43
44 class NiftyRegCommand(CommandLine):
45 """
46 Base support interface for NiftyReg commands.
47 """
48
49 _suffix = "_nr"
50 _min_version = "1.5.30"
51
52 input_spec = NiftyRegCommandInputSpec
53
54 def __init__(self, required_version=None, **inputs):
55 self.num_threads = 1
56 super(NiftyRegCommand, self).__init__(**inputs)
57 self.required_version = required_version
58 _version = self.version_from_command()
59 if _version:
60 _version = _version.decode("utf-8")
61 if self._min_version is not None and StrictVersion(
62 _version
63 ) < StrictVersion(self._min_version):
64 msg = "A later version of Niftyreg is required (%s < %s)"
65 iflogger.warning(msg, _version, self._min_version)
66 if required_version is not None:
67 if StrictVersion(_version) != StrictVersion(required_version):
68 msg = "The version of NiftyReg differs from the required"
69 msg += "(%s != %s)"
70 iflogger.warning(msg, _version, self.required_version)
71 self.inputs.on_trait_change(self._omp_update, "omp_core_val")
72 self.inputs.on_trait_change(self._environ_update, "environ")
73 self._omp_update()
74
75 def _omp_update(self):
76 if self.inputs.omp_core_val:
77 self.inputs.environ["OMP_NUM_THREADS"] = str(self.inputs.omp_core_val)
78 self.num_threads = self.inputs.omp_core_val
79 else:
80 if "OMP_NUM_THREADS" in self.inputs.environ:
81 del self.inputs.environ["OMP_NUM_THREADS"]
82 self.num_threads = 1
83
84 def _environ_update(self):
85 if self.inputs.environ:
86 if "OMP_NUM_THREADS" in self.inputs.environ:
87 self.inputs.omp_core_val = int(self.inputs.environ["OMP_NUM_THREADS"])
88 else:
89 self.inputs.omp_core_val = Undefined
90 else:
91 self.inputs.omp_core_val = Undefined
92
93 def check_version(self):
94 _version = self.version_from_command()
95 if not _version:
96 raise Exception("Niftyreg not found")
97 # Decoding to string:
98 _version = _version.decode("utf-8")
99 if StrictVersion(_version) < StrictVersion(self._min_version):
100 err = "A later version of Niftyreg is required (%s < %s)"
101 raise ValueError(err % (_version, self._min_version))
102 if self.required_version:
103 if StrictVersion(_version) != StrictVersion(self.required_version):
104 err = "The version of NiftyReg differs from the required"
105 err += "(%s != %s)"
106 raise ValueError(err % (_version, self.required_version))
107
108 @property
109 def version(self):
110 return self.version_from_command()
111
112 def exists(self):
113 return self.version_from_command() is not None
114
115 def _format_arg(self, name, spec, value):
116 if name == "omp_core_val":
117 self.numthreads = value
118 return super(NiftyRegCommand, self)._format_arg(name, spec, value)
119
120 def _gen_fname(self, basename, out_dir=None, suffix=None, ext=None):
121 if basename == "":
122 msg = "Unable to generate filename for command %s. " % self.cmd
123 msg += "basename is not set!"
124 raise ValueError(msg)
125 _, final_bn, final_ext = split_filename(basename)
126 if out_dir is None:
127 out_dir = os.getcwd()
128 if ext is not None:
129 final_ext = ext
130 if suffix is not None:
131 final_bn = "".join((final_bn, suffix))
132 return os.path.abspath(os.path.join(out_dir, final_bn + final_ext))
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nipype/interfaces/niftyreg/base.py b/nipype/interfaces/niftyreg/base.py
--- a/nipype/interfaces/niftyreg/base.py
+++ b/nipype/interfaces/niftyreg/base.py
@@ -19,7 +19,8 @@
import os
from ... import logging
-from ..base import CommandLine, CommandLineInputSpec, traits, Undefined
+from ..base import (CommandLine, CommandLineInputSpec, traits, Undefined,
+ PackageInfo)
from ...utils.filemanip import split_filename
iflogger = logging.getLogger("nipype.interface")
@@ -29,6 +30,14 @@
return os.path.join(os.getenv(env_dir, ""), command)
+class Info(PackageInfo):
+ version_cmd = get_custom_path('reg_aladin') + ' --version'
+
+ @staticmethod
+ def parse_version(raw_info):
+ return raw_info
+
+
class NiftyRegCommandInputSpec(CommandLineInputSpec):
"""Input Spec for niftyreg interfaces."""
@@ -55,9 +64,8 @@
self.num_threads = 1
super(NiftyRegCommand, self).__init__(**inputs)
self.required_version = required_version
- _version = self.version_from_command()
+ _version = self.version
if _version:
- _version = _version.decode("utf-8")
if self._min_version is not None and StrictVersion(
_version
) < StrictVersion(self._min_version):
@@ -91,11 +99,9 @@
self.inputs.omp_core_val = Undefined
def check_version(self):
- _version = self.version_from_command()
+ _version = self.version
if not _version:
raise Exception("Niftyreg not found")
- # Decoding to string:
- _version = _version.decode("utf-8")
if StrictVersion(_version) < StrictVersion(self._min_version):
err = "A later version of Niftyreg is required (%s < %s)"
raise ValueError(err % (_version, self._min_version))
@@ -107,10 +113,10 @@
@property
def version(self):
- return self.version_from_command()
+ return Info.version()
def exists(self):
- return self.version_from_command() is not None
+ return self.version is not None
def _format_arg(self, name, spec, value):
if name == "omp_core_val":
| {"golden_diff": "diff --git a/nipype/interfaces/niftyreg/base.py b/nipype/interfaces/niftyreg/base.py\n--- a/nipype/interfaces/niftyreg/base.py\n+++ b/nipype/interfaces/niftyreg/base.py\n@@ -19,7 +19,8 @@\n import os\n \n from ... import logging\n-from ..base import CommandLine, CommandLineInputSpec, traits, Undefined\n+from ..base import (CommandLine, CommandLineInputSpec, traits, Undefined,\n+ PackageInfo)\n from ...utils.filemanip import split_filename\n \n iflogger = logging.getLogger(\"nipype.interface\")\n@@ -29,6 +30,14 @@\n return os.path.join(os.getenv(env_dir, \"\"), command)\n \n \n+class Info(PackageInfo):\n+ version_cmd = get_custom_path('reg_aladin') + ' --version'\n+\n+ @staticmethod\n+ def parse_version(raw_info):\n+ return raw_info\n+\n+\n class NiftyRegCommandInputSpec(CommandLineInputSpec):\n \"\"\"Input Spec for niftyreg interfaces.\"\"\"\n \n@@ -55,9 +64,8 @@\n self.num_threads = 1\n super(NiftyRegCommand, self).__init__(**inputs)\n self.required_version = required_version\n- _version = self.version_from_command()\n+ _version = self.version\n if _version:\n- _version = _version.decode(\"utf-8\")\n if self._min_version is not None and StrictVersion(\n _version\n ) < StrictVersion(self._min_version):\n@@ -91,11 +99,9 @@\n self.inputs.omp_core_val = Undefined\n \n def check_version(self):\n- _version = self.version_from_command()\n+ _version = self.version\n if not _version:\n raise Exception(\"Niftyreg not found\")\n- # Decoding to string:\n- _version = _version.decode(\"utf-8\")\n if StrictVersion(_version) < StrictVersion(self._min_version):\n err = \"A later version of Niftyreg is required (%s < %s)\"\n raise ValueError(err % (_version, self._min_version))\n@@ -107,10 +113,10 @@\n \n @property\n def version(self):\n- return self.version_from_command()\n+ return Info.version()\n \n def exists(self):\n- return self.version_from_command() is not None\n+ return self.version is not None\n \n def _format_arg(self, name, spec, value):\n if name == \"omp_core_val\":\n", "issue": "NiftyReg interface raises ValueError: invalid version number\n### Summary\r\nVersion number is \"invalid\" when I try to create a `RegAladin` node.\r\n\r\n### Actual behavior\r\n```python-traceback\r\nC:\\Users\\fernando\\tmp\r\n(vesseg) \u03bb python r.py\r\n180612-15:47:29,796 interface WARNING:\r\n version_from_command member of CommandLine was Deprecated in nipype-1.0.0 and deleted in 1.1.0\r\nTraceback (most recent call last):\r\n File \"r.py\", line 7, in <module>\r\n aladin = niftyreg.RegAladin()\r\n File \"C:\\Users\\fernando\\Miniconda3\\envs\\vesseg\\lib\\site-packages\\nipype\\interfaces\\niftyreg\\base.py\", line 66, in __init__\r\n StrictVersion(_version) < StrictVersion(self._min_version):\r\n File \"C:\\Users\\fernando\\Miniconda3\\envs\\vesseg\\lib\\distutils\\version.py\", line 40, in __init__\r\n self.parse(vstring)\r\n File \"C:\\Users\\fernando\\Miniconda3\\envs\\vesseg\\lib\\distutils\\version.py\", line 137, in parse\r\n raise ValueError(\"invalid version number '%s'\" % vstring)\r\nValueError: invalid version number '1.5.58\r\n'\r\n```\r\n\r\nI have temporarily hacked this by replacing [this line](https://github.com/nipy/nipype/blob/master/nipype/interfaces/niftyreg/base.py#L59):\r\n```python\r\n_version = self.version_from_command()\r\n```\r\nby:\r\n```python\r\n_version = bytes('1.5.58', \"utf-8\")\r\n```\r\n\r\n### How to replicate the behavior\r\nRun\r\n```python\r\nfrom nipype.interfaces import niftyreg\r\naladin = niftyreg.RegAladin()\r\n```\r\n\r\n### Platform 
details:\r\n```python\r\nIn [1]: import nipype\r\n\r\nIn [2]: nipype.get_info()\r\nOut[2]:\r\n{'pkg_path': 'C:\\\\Users\\\\fernando\\\\Miniconda3\\\\envs\\\\vesseg\\\\lib\\\\site-packages\\\\nipype',\r\n 'commit_source': 'archive substitution',\r\n 'commit_hash': '%h',\r\n 'nipype_version': '1.0.4',\r\n 'sys_version': '3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)]',\r\n 'sys_executable': 'C:\\\\Users\\\\fernando\\\\Miniconda3\\\\envs\\\\vesseg\\\\python.exe',\r\n 'sys_platform': 'win32',\r\n 'numpy_version': '1.14.3',\r\n 'scipy_version': '1.1.0',\r\n 'networkx_version': '2.1',\r\n 'nibabel_version': '2.2.1',\r\n 'traits_version': '4.6.0'}\r\n\r\nIn [3]: nipype.__version__\r\nOut[3]: '1.0.4'\r\n```\r\n\r\n### Execution environment\r\nI'm not sure what this means. I'm running Python from a `conda` environment.\r\n\r\n\nNiftyReg interface raises ValueError: invalid version number\n### Summary\r\nVersion number is \"invalid\" when I try to create a `RegAladin` node.\r\n\r\n### Actual behavior\r\n```python-traceback\r\nC:\\Users\\fernando\\tmp\r\n(vesseg) \u03bb python r.py\r\n180612-15:47:29,796 interface WARNING:\r\n version_from_command member of CommandLine was Deprecated in nipype-1.0.0 and deleted in 1.1.0\r\nTraceback (most recent call last):\r\n File \"r.py\", line 7, in <module>\r\n aladin = niftyreg.RegAladin()\r\n File \"C:\\Users\\fernando\\Miniconda3\\envs\\vesseg\\lib\\site-packages\\nipype\\interfaces\\niftyreg\\base.py\", line 66, in __init__\r\n StrictVersion(_version) < StrictVersion(self._min_version):\r\n File \"C:\\Users\\fernando\\Miniconda3\\envs\\vesseg\\lib\\distutils\\version.py\", line 40, in __init__\r\n self.parse(vstring)\r\n File \"C:\\Users\\fernando\\Miniconda3\\envs\\vesseg\\lib\\distutils\\version.py\", line 137, in parse\r\n raise ValueError(\"invalid version number '%s'\" % vstring)\r\nValueError: invalid version number '1.5.58\r\n'\r\n```\r\n\r\nI have temporarily hacked this by replacing [this line](https://github.com/nipy/nipype/blob/master/nipype/interfaces/niftyreg/base.py#L59):\r\n```python\r\n_version = self.version_from_command()\r\n```\r\nby:\r\n```python\r\n_version = bytes('1.5.58', \"utf-8\")\r\n```\r\n\r\n### How to replicate the behavior\r\nRun\r\n```python\r\nfrom nipype.interfaces import niftyreg\r\naladin = niftyreg.RegAladin()\r\n```\r\n\r\n### Platform details:\r\n```python\r\nIn [1]: import nipype\r\n\r\nIn [2]: nipype.get_info()\r\nOut[2]:\r\n{'pkg_path': 'C:\\\\Users\\\\fernando\\\\Miniconda3\\\\envs\\\\vesseg\\\\lib\\\\site-packages\\\\nipype',\r\n 'commit_source': 'archive substitution',\r\n 'commit_hash': '%h',\r\n 'nipype_version': '1.0.4',\r\n 'sys_version': '3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)]',\r\n 'sys_executable': 'C:\\\\Users\\\\fernando\\\\Miniconda3\\\\envs\\\\vesseg\\\\python.exe',\r\n 'sys_platform': 'win32',\r\n 'numpy_version': '1.14.3',\r\n 'scipy_version': '1.1.0',\r\n 'networkx_version': '2.1',\r\n 'nibabel_version': '2.2.1',\r\n 'traits_version': '4.6.0'}\r\n\r\nIn [3]: nipype.__version__\r\nOut[3]: '1.0.4'\r\n```\r\n\r\n### Execution environment\r\nI'm not sure what this means. 
I'm running Python from a `conda` environment.\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n\"\"\"\nThe niftyreg module provides classes for interfacing with `niftyreg\n<http://sourceforge.net/projects/niftyreg/>`_ command line tools.\n\nThese are the base tools for working with niftyreg.\n\nRegistration tools are found in niftyreg/reg.py\nEvery other tool is found in niftyreg/regutils.py\n\nExamples\n--------\nSee the docstrings of the individual classes for examples.\n\n\"\"\"\nfrom distutils.version import StrictVersion\nimport os\n\nfrom ... import logging\nfrom ..base import CommandLine, CommandLineInputSpec, traits, Undefined\nfrom ...utils.filemanip import split_filename\n\niflogger = logging.getLogger(\"nipype.interface\")\n\n\ndef get_custom_path(command, env_dir=\"NIFTYREGDIR\"):\n return os.path.join(os.getenv(env_dir, \"\"), command)\n\n\nclass NiftyRegCommandInputSpec(CommandLineInputSpec):\n \"\"\"Input Spec for niftyreg interfaces.\"\"\"\n\n # Set the number of omp thread to use\n omp_core_val = traits.Int(\n int(os.environ.get(\"OMP_NUM_THREADS\", \"1\")),\n desc=\"Number of openmp thread to use\",\n argstr=\"-omp %i\",\n usedefault=True,\n )\n\n\nclass NiftyRegCommand(CommandLine):\n \"\"\"\n Base support interface for NiftyReg commands.\n \"\"\"\n\n _suffix = \"_nr\"\n _min_version = \"1.5.30\"\n\n input_spec = NiftyRegCommandInputSpec\n\n def __init__(self, required_version=None, **inputs):\n self.num_threads = 1\n super(NiftyRegCommand, self).__init__(**inputs)\n self.required_version = required_version\n _version = self.version_from_command()\n if _version:\n _version = _version.decode(\"utf-8\")\n if self._min_version is not None and StrictVersion(\n _version\n ) < StrictVersion(self._min_version):\n msg = \"A later version of Niftyreg is required (%s < %s)\"\n iflogger.warning(msg, _version, self._min_version)\n if required_version is not None:\n if StrictVersion(_version) != StrictVersion(required_version):\n msg = \"The version of NiftyReg differs from the required\"\n msg += \"(%s != %s)\"\n iflogger.warning(msg, _version, self.required_version)\n self.inputs.on_trait_change(self._omp_update, \"omp_core_val\")\n self.inputs.on_trait_change(self._environ_update, \"environ\")\n self._omp_update()\n\n def _omp_update(self):\n if self.inputs.omp_core_val:\n self.inputs.environ[\"OMP_NUM_THREADS\"] = str(self.inputs.omp_core_val)\n self.num_threads = self.inputs.omp_core_val\n else:\n if \"OMP_NUM_THREADS\" in self.inputs.environ:\n del self.inputs.environ[\"OMP_NUM_THREADS\"]\n self.num_threads = 1\n\n def _environ_update(self):\n if self.inputs.environ:\n if \"OMP_NUM_THREADS\" in self.inputs.environ:\n self.inputs.omp_core_val = int(self.inputs.environ[\"OMP_NUM_THREADS\"])\n else:\n self.inputs.omp_core_val = Undefined\n else:\n self.inputs.omp_core_val = Undefined\n\n def check_version(self):\n _version = self.version_from_command()\n if not _version:\n raise Exception(\"Niftyreg not found\")\n # Decoding to string:\n _version = _version.decode(\"utf-8\")\n if StrictVersion(_version) < StrictVersion(self._min_version):\n err = \"A later version of Niftyreg is required (%s < %s)\"\n raise ValueError(err % (_version, self._min_version))\n if self.required_version:\n if StrictVersion(_version) != StrictVersion(self.required_version):\n err = \"The version of NiftyReg differs from the required\"\n err += \"(%s != %s)\"\n raise 
ValueError(err % (_version, self.required_version))\n\n @property\n def version(self):\n return self.version_from_command()\n\n def exists(self):\n return self.version_from_command() is not None\n\n def _format_arg(self, name, spec, value):\n if name == \"omp_core_val\":\n self.numthreads = value\n return super(NiftyRegCommand, self)._format_arg(name, spec, value)\n\n def _gen_fname(self, basename, out_dir=None, suffix=None, ext=None):\n if basename == \"\":\n msg = \"Unable to generate filename for command %s. \" % self.cmd\n msg += \"basename is not set!\"\n raise ValueError(msg)\n _, final_bn, final_ext = split_filename(basename)\n if out_dir is None:\n out_dir = os.getcwd()\n if ext is not None:\n final_ext = ext\n if suffix is not None:\n final_bn = \"\".join((final_bn, suffix))\n return os.path.abspath(os.path.join(out_dir, final_bn + final_ext))\n", "path": "nipype/interfaces/niftyreg/base.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n\"\"\"\nThe niftyreg module provides classes for interfacing with `niftyreg\n<http://sourceforge.net/projects/niftyreg/>`_ command line tools.\n\nThese are the base tools for working with niftyreg.\n\nRegistration tools are found in niftyreg/reg.py\nEvery other tool is found in niftyreg/regutils.py\n\nExamples\n--------\nSee the docstrings of the individual classes for examples.\n\n\"\"\"\nfrom distutils.version import StrictVersion\nimport os\n\nfrom ... import logging\nfrom ..base import (CommandLine, CommandLineInputSpec, traits, Undefined,\n PackageInfo)\nfrom ...utils.filemanip import split_filename\n\niflogger = logging.getLogger(\"nipype.interface\")\n\n\ndef get_custom_path(command, env_dir=\"NIFTYREGDIR\"):\n return os.path.join(os.getenv(env_dir, \"\"), command)\n\n\nclass Info(PackageInfo):\n version_cmd = get_custom_path('reg_aladin') + ' --version'\n\n @staticmethod\n def parse_version(raw_info):\n return raw_info\n\n\nclass NiftyRegCommandInputSpec(CommandLineInputSpec):\n \"\"\"Input Spec for niftyreg interfaces.\"\"\"\n\n # Set the number of omp thread to use\n omp_core_val = traits.Int(\n int(os.environ.get(\"OMP_NUM_THREADS\", \"1\")),\n desc=\"Number of openmp thread to use\",\n argstr=\"-omp %i\",\n usedefault=True,\n )\n\n\nclass NiftyRegCommand(CommandLine):\n \"\"\"\n Base support interface for NiftyReg commands.\n \"\"\"\n\n _suffix = \"_nr\"\n _min_version = \"1.5.30\"\n\n input_spec = NiftyRegCommandInputSpec\n\n def __init__(self, required_version=None, **inputs):\n self.num_threads = 1\n super(NiftyRegCommand, self).__init__(**inputs)\n self.required_version = required_version\n _version = self.version\n if _version:\n if self._min_version is not None and StrictVersion(\n _version\n ) < StrictVersion(self._min_version):\n msg = \"A later version of Niftyreg is required (%s < %s)\"\n iflogger.warning(msg, _version, self._min_version)\n if required_version is not None:\n if StrictVersion(_version) != StrictVersion(required_version):\n msg = \"The version of NiftyReg differs from the required\"\n msg += \"(%s != %s)\"\n iflogger.warning(msg, _version, self.required_version)\n self.inputs.on_trait_change(self._omp_update, \"omp_core_val\")\n self.inputs.on_trait_change(self._environ_update, \"environ\")\n self._omp_update()\n\n def _omp_update(self):\n if self.inputs.omp_core_val:\n self.inputs.environ[\"OMP_NUM_THREADS\"] = str(self.inputs.omp_core_val)\n self.num_threads = 
self.inputs.omp_core_val\n else:\n if \"OMP_NUM_THREADS\" in self.inputs.environ:\n del self.inputs.environ[\"OMP_NUM_THREADS\"]\n self.num_threads = 1\n\n def _environ_update(self):\n if self.inputs.environ:\n if \"OMP_NUM_THREADS\" in self.inputs.environ:\n self.inputs.omp_core_val = int(self.inputs.environ[\"OMP_NUM_THREADS\"])\n else:\n self.inputs.omp_core_val = Undefined\n else:\n self.inputs.omp_core_val = Undefined\n\n def check_version(self):\n _version = self.version\n if not _version:\n raise Exception(\"Niftyreg not found\")\n if StrictVersion(_version) < StrictVersion(self._min_version):\n err = \"A later version of Niftyreg is required (%s < %s)\"\n raise ValueError(err % (_version, self._min_version))\n if self.required_version:\n if StrictVersion(_version) != StrictVersion(self.required_version):\n err = \"The version of NiftyReg differs from the required\"\n err += \"(%s != %s)\"\n raise ValueError(err % (_version, self.required_version))\n\n @property\n def version(self):\n return Info.version()\n\n def exists(self):\n return self.version is not None\n\n def _format_arg(self, name, spec, value):\n if name == \"omp_core_val\":\n self.numthreads = value\n return super(NiftyRegCommand, self)._format_arg(name, spec, value)\n\n def _gen_fname(self, basename, out_dir=None, suffix=None, ext=None):\n if basename == \"\":\n msg = \"Unable to generate filename for command %s. \" % self.cmd\n msg += \"basename is not set!\"\n raise ValueError(msg)\n _, final_bn, final_ext = split_filename(basename)\n if out_dir is None:\n out_dir = os.getcwd()\n if ext is not None:\n final_ext = ext\n if suffix is not None:\n final_bn = \"\".join((final_bn, suffix))\n return os.path.abspath(os.path.join(out_dir, final_bn + final_ext))\n", "path": "nipype/interfaces/niftyreg/base.py"}]} | 3,068 | 545 |
gh_patches_debug_22401 | rasdani/github-patches | git_diff | talonhub__community-244 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
formatters should apply to selected text if they are spoken alone
for example, saying `yeller` while some text is selected should pass the selected text through the formatter and replace the selection with the formatted text.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `code/formatters.py`
Content:
```
1 from talon import Module, Context, actions, ui, imgui
2 from talon.grammar import Phrase
3 from typing import List, Union
4
5 ctx = Context()
6 key = actions.key
7
8 words_to_keep_lowercase = "a,an,the,at,by,for,in,is,of,on,to,up,and,as,but,or,nor".split(
9 ","
10 )
11
12 # last_phrase has the last phrase spoken, WITHOUT formatting.
13 # This is needed for reformatting.
14 last_phrase = ""
15
16 # formatted_phrase_history keeps the most recent formatted phrases, WITH formatting.
17 formatted_phrase_history = []
18 formatted_phrase_history_length = 20
19
20
21 def surround(by):
22 def func(i, word, last):
23 if i == 0:
24 word = by + word
25 if last:
26 word += by
27 return word
28
29 return func
30
31
32 def format_phrase(m: Union[str, Phrase], fmtrs: str):
33 global last_phrase
34 last_phrase = m
35 words = []
36 if isinstance(m, str):
37 words = m.split(" ")
38 else:
39 if m.words[-1] == "over":
40 m.words = m.words[:-1]
41
42 words = actions.dictate.parse_words(m)
43 words = actions.dictate.replace_words(words)
44
45 result = format_phrase_no_history(words, fmtrs)
46
47 # Add result to history.
48 global formatted_phrase_history
49 formatted_phrase_history.insert(0, result)
50 formatted_phrase_history = formatted_phrase_history[
51 :formatted_phrase_history_length
52 ]
53
54 return result
55
56
57 def format_phrase_no_history(word_list, fmtrs: str):
58 fmtr_list = fmtrs.split(",")
59 words = []
60 spaces = True
61 for i, w in enumerate(word_list):
62 for name in reversed(fmtr_list):
63 smash, func = all_formatters[name]
64 w = func(i, w, i == len(word_list) - 1)
65 spaces = spaces and not smash
66 words.append(w)
67 sep = " " if spaces else ""
68 return sep.join(words)
69
70
71 NOSEP = True
72 SEP = False
73
74
75 def words_with_joiner(joiner):
76 """Pass through words unchanged, but add a separator between them."""
77
78 def formatter_function(i, word, _):
79 return word if i == 0 else joiner + word
80
81 return (NOSEP, formatter_function)
82
83
84 def first_vs_rest(first_func, rest_func=lambda w: w):
85 """Supply one or two transformer functions for the first and rest of
86 words respectively.
87
88 Leave second argument out if you want all but the first word to be passed
89 through unchanged.
90 Set first argument to None if you want the first word to be passed
91 through unchanged."""
92 if first_func is None:
93 first_func = lambda w: w
94
95 def formatter_function(i, word, _):
96 return first_func(word) if i == 0 else rest_func(word)
97
98 return formatter_function
99
100
101 def every_word(word_func):
102 """Apply one function to every word."""
103
104 def formatter_function(i, word, _):
105 return word_func(word)
106
107 return formatter_function
108
109
110 formatters_dict = {
111 "NOOP": (SEP, lambda i, word, _: word),
112 "DOUBLE_UNDERSCORE": (NOSEP, first_vs_rest(lambda w: "__%s__" % w)),
113 "PRIVATE_CAMEL_CASE": (NOSEP, first_vs_rest(lambda w: w, lambda w: w.capitalize())),
114 "PROTECTED_CAMEL_CASE": (
115 NOSEP,
116 first_vs_rest(lambda w: w, lambda w: w.capitalize()),
117 ),
118 "PUBLIC_CAMEL_CASE": (NOSEP, every_word(lambda w: w.capitalize())),
119 "SNAKE_CASE": (
120 NOSEP,
121 first_vs_rest(lambda w: w.lower(), lambda w: "_" + w.lower()),
122 ),
123 "NO_SPACES": (NOSEP, every_word(lambda w: w)),
124 "DASH_SEPARATED": words_with_joiner("-"),
125 "TERMINAL_DASH_SEPARATED": (
126 NOSEP,
127 first_vs_rest(lambda w: " --" + w.lower(), lambda w: "-" + w.lower()),
128 ),
129 "DOUBLE_COLON_SEPARATED": words_with_joiner("::"),
130 "ALL_CAPS": (SEP, every_word(lambda w: w.upper())),
131 "ALL_LOWERCASE": (SEP, every_word(lambda w: w.lower())),
132 "DOUBLE_QUOTED_STRING": (SEP, surround('"')),
133 "SINGLE_QUOTED_STRING": (SEP, surround("'")),
134 "SPACE_SURROUNDED_STRING": (SEP, surround(" ")),
135 "DOT_SEPARATED": words_with_joiner("."),
136 "DOT_SNAKE": (NOSEP, lambda i, word, _: "." + word if i == 0 else "_" + word),
137 "SLASH_SEPARATED": (NOSEP, every_word(lambda w: "/" + w)),
138 "CAPITALIZE_FIRST_WORD": (SEP, first_vs_rest(lambda w: w.capitalize())),
139 "CAPITALIZE_ALL_WORDS": (
140 SEP,
141 lambda i, word, _: word.capitalize()
142 if i == 0 or word not in words_to_keep_lowercase
143 else word,
144 ),
145 "FIRST_THREE": (NOSEP, lambda i, word, _: word[0:3]),
146 "FIRST_FOUR": (NOSEP, lambda i, word, _: word[0:4]),
147 "FIRST_FIVE": (NOSEP, lambda i, word, _: word[0:5]),
148 }
149
150 # This is the mapping from spoken phrases to formatters
151 formatters_words = {
152 "allcaps": formatters_dict["ALL_CAPS"],
153 "alldown": formatters_dict["ALL_LOWERCASE"],
154 "camel": formatters_dict["PRIVATE_CAMEL_CASE"],
155 "dotted": formatters_dict["DOT_SEPARATED"],
156 "dubstring": formatters_dict["DOUBLE_QUOTED_STRING"],
157 "dunder": formatters_dict["DOUBLE_UNDERSCORE"],
158 "hammer": formatters_dict["PUBLIC_CAMEL_CASE"],
159 "kebab": formatters_dict["DASH_SEPARATED"],
160 "packed": formatters_dict["DOUBLE_COLON_SEPARATED"],
161 "padded": formatters_dict["SPACE_SURROUNDED_STRING"],
162 # "say": formatters_dict["NOOP"],
163 "sentence": formatters_dict["CAPITALIZE_FIRST_WORD"],
164 "slasher": formatters_dict["SLASH_SEPARATED"],
165 "smash": formatters_dict["NO_SPACES"],
166 "snake": formatters_dict["SNAKE_CASE"],
167 # "speak": formatters_dict["NOOP"],
168 "string": formatters_dict["SINGLE_QUOTED_STRING"],
169 "title": formatters_dict["CAPITALIZE_ALL_WORDS"],
170 # disable a few formatters for now
171 # "tree": formatters_dict["FIRST_THREE"],
172 # "quad": formatters_dict["FIRST_FOUR"],
173 # "fiver": formatters_dict["FIRST_FIVE"],
174 }
175
176 all_formatters = {}
177 all_formatters.update(formatters_dict)
178 all_formatters.update(formatters_words)
179
180 mod = Module()
181 mod.list("formatters", desc="list of formatters")
182
183
184 @mod.capture
185 def formatters(m) -> str:
186 "Returns a comma-separated string of formatters e.g. 'SNAKE,DUBSTRING'"
187
188
189 @mod.capture
190 def format_text(m) -> str:
191 "Formats the text and returns a string"
192
193
194 @mod.action_class
195 class Actions:
196 def formatted_text(phrase: Union[str, Phrase], formatters: str) -> str:
197 """Formats a phrase according to formatters. formatters is a comma-separated string of formatters (e.g. 'CAPITALIZE_ALL_WORDS,DOUBLE_QUOTED_STRING')"""
198 return format_phrase(phrase, formatters)
199
200 def formatters_help_toggle():
201 """Lists all formatters"""
202 if gui.showing:
203 gui.hide()
204 else:
205 gui.show()
206
207 def formatters_recent_toggle():
208 """Toggles list of recent formatters"""
209 if recent_gui.showing:
210 recent_gui.hide()
211 else:
212 recent_gui.show()
213
214 def formatters_recent_select(number: int):
215 """Inserts a recent formatter"""
216 if len(formatted_phrase_history) >= number:
217 return formatted_phrase_history[number - 1]
218 return ""
219
220 def formatters_clear_last():
221 """Clears the last formatted phrase"""
222 if len(formatted_phrase_history) > 0:
223 for character in formatted_phrase_history[0]:
224 actions.edit.delete()
225
226 def formatters_reformat_last(formatters: str) -> str:
227 """Reformats last formatted phrase"""
228 global last_phrase
229 return format_phrase(last_phrase, formatters)
230
231
232 @ctx.capture(rule="{self.formatters}+")
233 def formatters(m):
234 return ",".join(m.formatters_list)
235
236
237 @ctx.capture(rule="<self.formatters> <user.text>")
238 def format_text(m):
239 return format_phrase(m.text, m.formatters)
240
241
242 ctx.lists["self.formatters"] = formatters_words.keys()
243
244
245 @imgui.open(software=False)
246 def gui(gui: imgui.GUI):
247 gui.text("List formatters")
248 gui.line()
249 for name in sorted(set(formatters_words.keys())):
250 gui.text(f"{name} | {format_phrase_no_history(['one', 'two', 'three'], name)}")
251
252
253 @imgui.open(software=False)
254 def recent_gui(gui: imgui.GUI):
255 gui.text("Recent formatters")
256 gui.line()
257 for index, result in enumerate(formatted_phrase_history, 1):
258 gui.text("{}. {}".format(index, result))
259
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/code/formatters.py b/code/formatters.py
--- a/code/formatters.py
+++ b/code/formatters.py
@@ -1,9 +1,11 @@
from talon import Module, Context, actions, ui, imgui
from talon.grammar import Phrase
from typing import List, Union
+import re
ctx = Context()
key = actions.key
+edit = actions.edit
words_to_keep_lowercase = "a,an,the,at,by,for,in,is,of,on,to,up,and,as,but,or,nor".split(
","
@@ -228,6 +230,19 @@
global last_phrase
return format_phrase(last_phrase, formatters)
+ def formatters_reformat_selection(formatters: str) -> str:
+ """Reformats the current selection."""
+ selected = edit.selected_text()
+ unformatted = re.sub(r"[^a-zA-Z0-9]+", " ", selected).lower()
+ # TODO: Separate out camelcase & studleycase vars
+
+ # Delete separately for compatibility with programs that don't overwrite
+ # selected text (e.g. Emacs)
+ edit.delete()
+ text = actions.self.formatted_text(unformatted, formatters)
+ actions.insert(text)
+ return text
+
@ctx.capture(rule="{self.formatters}+")
def formatters(m):
| {"golden_diff": "diff --git a/code/formatters.py b/code/formatters.py\n--- a/code/formatters.py\n+++ b/code/formatters.py\n@@ -1,9 +1,11 @@\n from talon import Module, Context, actions, ui, imgui\n from talon.grammar import Phrase\n from typing import List, Union\n+import re\n \n ctx = Context()\n key = actions.key\n+edit = actions.edit\n \n words_to_keep_lowercase = \"a,an,the,at,by,for,in,is,of,on,to,up,and,as,but,or,nor\".split(\n \",\"\n@@ -228,6 +230,19 @@\n global last_phrase\n return format_phrase(last_phrase, formatters)\n \n+ def formatters_reformat_selection(formatters: str) -> str:\n+ \"\"\"Reformats the current selection.\"\"\"\n+ selected = edit.selected_text()\n+ unformatted = re.sub(r\"[^a-zA-Z0-9]+\", \" \", selected).lower()\n+ # TODO: Separate out camelcase & studleycase vars\n+\n+ # Delete separately for compatibility with programs that don't overwrite\n+ # selected text (e.g. Emacs)\n+ edit.delete()\n+ text = actions.self.formatted_text(unformatted, formatters)\n+ actions.insert(text)\n+ return text\n+\n \n @ctx.capture(rule=\"{self.formatters}+\")\n def formatters(m):\n", "issue": "formatters should apply to selected text if they are spoken alone\nfor example `yeller` while some text is selected should pass the selected text through the formatter and replace the selected text with the formatted text.\n", "before_files": [{"content": "from talon import Module, Context, actions, ui, imgui\nfrom talon.grammar import Phrase\nfrom typing import List, Union\n\nctx = Context()\nkey = actions.key\n\nwords_to_keep_lowercase = \"a,an,the,at,by,for,in,is,of,on,to,up,and,as,but,or,nor\".split(\n \",\"\n)\n\n# last_phrase has the last phrase spoken, WITHOUT formatting.\n# This is needed for reformatting.\nlast_phrase = \"\"\n\n# formatted_phrase_history keeps the most recent formatted phrases, WITH formatting.\nformatted_phrase_history = []\nformatted_phrase_history_length = 20\n\n\ndef surround(by):\n def func(i, word, last):\n if i == 0:\n word = by + word\n if last:\n word += by\n return word\n\n return func\n\n\ndef format_phrase(m: Union[str, Phrase], fmtrs: str):\n global last_phrase\n last_phrase = m\n words = []\n if isinstance(m, str):\n words = m.split(\" \")\n else:\n if m.words[-1] == \"over\":\n m.words = m.words[:-1]\n\n words = actions.dictate.parse_words(m)\n words = actions.dictate.replace_words(words)\n\n result = format_phrase_no_history(words, fmtrs)\n\n # Add result to history.\n global formatted_phrase_history\n formatted_phrase_history.insert(0, result)\n formatted_phrase_history = formatted_phrase_history[\n :formatted_phrase_history_length\n ]\n\n return result\n\n\ndef format_phrase_no_history(word_list, fmtrs: str):\n fmtr_list = fmtrs.split(\",\")\n words = []\n spaces = True\n for i, w in enumerate(word_list):\n for name in reversed(fmtr_list):\n smash, func = all_formatters[name]\n w = func(i, w, i == len(word_list) - 1)\n spaces = spaces and not smash\n words.append(w)\n sep = \" \" if spaces else \"\"\n return sep.join(words)\n\n\nNOSEP = True\nSEP = False\n\n\ndef words_with_joiner(joiner):\n \"\"\"Pass through words unchanged, but add a separator between them.\"\"\"\n\n def formatter_function(i, word, _):\n return word if i == 0 else joiner + word\n\n return (NOSEP, formatter_function)\n\n\ndef first_vs_rest(first_func, rest_func=lambda w: w):\n \"\"\"Supply one or two transformer functions for the first and rest of\n words respectively.\n\n Leave second argument out if you want all but the first word to be passed\n through unchanged.\n Set first 
argument to None if you want the first word to be passed\n through unchanged.\"\"\"\n if first_func is None:\n first_func = lambda w: w\n\n def formatter_function(i, word, _):\n return first_func(word) if i == 0 else rest_func(word)\n\n return formatter_function\n\n\ndef every_word(word_func):\n \"\"\"Apply one function to every word.\"\"\"\n\n def formatter_function(i, word, _):\n return word_func(word)\n\n return formatter_function\n\n\nformatters_dict = {\n \"NOOP\": (SEP, lambda i, word, _: word),\n \"DOUBLE_UNDERSCORE\": (NOSEP, first_vs_rest(lambda w: \"__%s__\" % w)),\n \"PRIVATE_CAMEL_CASE\": (NOSEP, first_vs_rest(lambda w: w, lambda w: w.capitalize())),\n \"PROTECTED_CAMEL_CASE\": (\n NOSEP,\n first_vs_rest(lambda w: w, lambda w: w.capitalize()),\n ),\n \"PUBLIC_CAMEL_CASE\": (NOSEP, every_word(lambda w: w.capitalize())),\n \"SNAKE_CASE\": (\n NOSEP,\n first_vs_rest(lambda w: w.lower(), lambda w: \"_\" + w.lower()),\n ),\n \"NO_SPACES\": (NOSEP, every_word(lambda w: w)),\n \"DASH_SEPARATED\": words_with_joiner(\"-\"),\n \"TERMINAL_DASH_SEPARATED\": (\n NOSEP,\n first_vs_rest(lambda w: \" --\" + w.lower(), lambda w: \"-\" + w.lower()),\n ),\n \"DOUBLE_COLON_SEPARATED\": words_with_joiner(\"::\"),\n \"ALL_CAPS\": (SEP, every_word(lambda w: w.upper())),\n \"ALL_LOWERCASE\": (SEP, every_word(lambda w: w.lower())),\n \"DOUBLE_QUOTED_STRING\": (SEP, surround('\"')),\n \"SINGLE_QUOTED_STRING\": (SEP, surround(\"'\")),\n \"SPACE_SURROUNDED_STRING\": (SEP, surround(\" \")),\n \"DOT_SEPARATED\": words_with_joiner(\".\"),\n \"DOT_SNAKE\": (NOSEP, lambda i, word, _: \".\" + word if i == 0 else \"_\" + word),\n \"SLASH_SEPARATED\": (NOSEP, every_word(lambda w: \"/\" + w)),\n \"CAPITALIZE_FIRST_WORD\": (SEP, first_vs_rest(lambda w: w.capitalize())),\n \"CAPITALIZE_ALL_WORDS\": (\n SEP,\n lambda i, word, _: word.capitalize()\n if i == 0 or word not in words_to_keep_lowercase\n else word,\n ),\n \"FIRST_THREE\": (NOSEP, lambda i, word, _: word[0:3]),\n \"FIRST_FOUR\": (NOSEP, lambda i, word, _: word[0:4]),\n \"FIRST_FIVE\": (NOSEP, lambda i, word, _: word[0:5]),\n}\n\n# This is the mapping from spoken phrases to formatters\nformatters_words = {\n \"allcaps\": formatters_dict[\"ALL_CAPS\"],\n \"alldown\": formatters_dict[\"ALL_LOWERCASE\"],\n \"camel\": formatters_dict[\"PRIVATE_CAMEL_CASE\"],\n \"dotted\": formatters_dict[\"DOT_SEPARATED\"],\n \"dubstring\": formatters_dict[\"DOUBLE_QUOTED_STRING\"],\n \"dunder\": formatters_dict[\"DOUBLE_UNDERSCORE\"],\n \"hammer\": formatters_dict[\"PUBLIC_CAMEL_CASE\"],\n \"kebab\": formatters_dict[\"DASH_SEPARATED\"],\n \"packed\": formatters_dict[\"DOUBLE_COLON_SEPARATED\"],\n \"padded\": formatters_dict[\"SPACE_SURROUNDED_STRING\"],\n # \"say\": formatters_dict[\"NOOP\"],\n \"sentence\": formatters_dict[\"CAPITALIZE_FIRST_WORD\"],\n \"slasher\": formatters_dict[\"SLASH_SEPARATED\"],\n \"smash\": formatters_dict[\"NO_SPACES\"],\n \"snake\": formatters_dict[\"SNAKE_CASE\"],\n # \"speak\": formatters_dict[\"NOOP\"],\n \"string\": formatters_dict[\"SINGLE_QUOTED_STRING\"],\n \"title\": formatters_dict[\"CAPITALIZE_ALL_WORDS\"],\n # disable a few formatters for now\n # \"tree\": formatters_dict[\"FIRST_THREE\"],\n # \"quad\": formatters_dict[\"FIRST_FOUR\"],\n # \"fiver\": formatters_dict[\"FIRST_FIVE\"],\n}\n\nall_formatters = {}\nall_formatters.update(formatters_dict)\nall_formatters.update(formatters_words)\n\nmod = Module()\nmod.list(\"formatters\", desc=\"list of formatters\")\n\n\[email protected]\ndef formatters(m) -> str:\n \"Returns a comma-separated 
string of formatters e.g. 'SNAKE,DUBSTRING'\"\n\n\[email protected]\ndef format_text(m) -> str:\n \"Formats the text and returns a string\"\n\n\[email protected]_class\nclass Actions:\n def formatted_text(phrase: Union[str, Phrase], formatters: str) -> str:\n \"\"\"Formats a phrase according to formatters. formatters is a comma-separated string of formatters (e.g. 'CAPITALIZE_ALL_WORDS,DOUBLE_QUOTED_STRING')\"\"\"\n return format_phrase(phrase, formatters)\n\n def formatters_help_toggle():\n \"\"\"Lists all formatters\"\"\"\n if gui.showing:\n gui.hide()\n else:\n gui.show()\n\n def formatters_recent_toggle():\n \"\"\"Toggles list of recent formatters\"\"\"\n if recent_gui.showing:\n recent_gui.hide()\n else:\n recent_gui.show()\n\n def formatters_recent_select(number: int):\n \"\"\"Inserts a recent formatter\"\"\"\n if len(formatted_phrase_history) >= number:\n return formatted_phrase_history[number - 1]\n return \"\"\n\n def formatters_clear_last():\n \"\"\"Clears the last formatted phrase\"\"\"\n if len(formatted_phrase_history) > 0:\n for character in formatted_phrase_history[0]:\n actions.edit.delete()\n\n def formatters_reformat_last(formatters: str) -> str:\n \"\"\"Reformats last formatted phrase\"\"\"\n global last_phrase\n return format_phrase(last_phrase, formatters)\n\n\[email protected](rule=\"{self.formatters}+\")\ndef formatters(m):\n return \",\".join(m.formatters_list)\n\n\[email protected](rule=\"<self.formatters> <user.text>\")\ndef format_text(m):\n return format_phrase(m.text, m.formatters)\n\n\nctx.lists[\"self.formatters\"] = formatters_words.keys()\n\n\[email protected](software=False)\ndef gui(gui: imgui.GUI):\n gui.text(\"List formatters\")\n gui.line()\n for name in sorted(set(formatters_words.keys())):\n gui.text(f\"{name} | {format_phrase_no_history(['one', 'two', 'three'], name)}\")\n\n\[email protected](software=False)\ndef recent_gui(gui: imgui.GUI):\n gui.text(\"Recent formatters\")\n gui.line()\n for index, result in enumerate(formatted_phrase_history, 1):\n gui.text(\"{}. 
{}\".format(index, result))\n", "path": "code/formatters.py"}], "after_files": [{"content": "from talon import Module, Context, actions, ui, imgui\nfrom talon.grammar import Phrase\nfrom typing import List, Union\nimport re\n\nctx = Context()\nkey = actions.key\nedit = actions.edit\n\nwords_to_keep_lowercase = \"a,an,the,at,by,for,in,is,of,on,to,up,and,as,but,or,nor\".split(\n \",\"\n)\n\n# last_phrase has the last phrase spoken, WITHOUT formatting.\n# This is needed for reformatting.\nlast_phrase = \"\"\n\n# formatted_phrase_history keeps the most recent formatted phrases, WITH formatting.\nformatted_phrase_history = []\nformatted_phrase_history_length = 20\n\n\ndef surround(by):\n def func(i, word, last):\n if i == 0:\n word = by + word\n if last:\n word += by\n return word\n\n return func\n\n\ndef format_phrase(m: Union[str, Phrase], fmtrs: str):\n global last_phrase\n last_phrase = m\n words = []\n if isinstance(m, str):\n words = m.split(\" \")\n else:\n if m.words[-1] == \"over\":\n m.words = m.words[:-1]\n\n words = actions.dictate.parse_words(m)\n words = actions.dictate.replace_words(words)\n\n result = format_phrase_no_history(words, fmtrs)\n\n # Add result to history.\n global formatted_phrase_history\n formatted_phrase_history.insert(0, result)\n formatted_phrase_history = formatted_phrase_history[\n :formatted_phrase_history_length\n ]\n\n return result\n\n\ndef format_phrase_no_history(word_list, fmtrs: str):\n fmtr_list = fmtrs.split(\",\")\n words = []\n spaces = True\n for i, w in enumerate(word_list):\n for name in reversed(fmtr_list):\n smash, func = all_formatters[name]\n w = func(i, w, i == len(word_list) - 1)\n spaces = spaces and not smash\n words.append(w)\n sep = \" \" if spaces else \"\"\n return sep.join(words)\n\n\nNOSEP = True\nSEP = False\n\n\ndef words_with_joiner(joiner):\n \"\"\"Pass through words unchanged, but add a separator between them.\"\"\"\n\n def formatter_function(i, word, _):\n return word if i == 0 else joiner + word\n\n return (NOSEP, formatter_function)\n\n\ndef first_vs_rest(first_func, rest_func=lambda w: w):\n \"\"\"Supply one or two transformer functions for the first and rest of\n words respectively.\n\n Leave second argument out if you want all but the first word to be passed\n through unchanged.\n Set first argument to None if you want the first word to be passed\n through unchanged.\"\"\"\n if first_func is None:\n first_func = lambda w: w\n\n def formatter_function(i, word, _):\n return first_func(word) if i == 0 else rest_func(word)\n\n return formatter_function\n\n\ndef every_word(word_func):\n \"\"\"Apply one function to every word.\"\"\"\n\n def formatter_function(i, word, _):\n return word_func(word)\n\n return formatter_function\n\n\nformatters_dict = {\n \"NOOP\": (SEP, lambda i, word, _: word),\n \"DOUBLE_UNDERSCORE\": (NOSEP, first_vs_rest(lambda w: \"__%s__\" % w)),\n \"PRIVATE_CAMEL_CASE\": (NOSEP, first_vs_rest(lambda w: w, lambda w: w.capitalize())),\n \"PROTECTED_CAMEL_CASE\": (\n NOSEP,\n first_vs_rest(lambda w: w, lambda w: w.capitalize()),\n ),\n \"PUBLIC_CAMEL_CASE\": (NOSEP, every_word(lambda w: w.capitalize())),\n \"SNAKE_CASE\": (\n NOSEP,\n first_vs_rest(lambda w: w.lower(), lambda w: \"_\" + w.lower()),\n ),\n \"NO_SPACES\": (NOSEP, every_word(lambda w: w)),\n \"DASH_SEPARATED\": words_with_joiner(\"-\"),\n \"TERMINAL_DASH_SEPARATED\": (\n NOSEP,\n first_vs_rest(lambda w: \" --\" + w.lower(), lambda w: \"-\" + w.lower()),\n ),\n \"DOUBLE_COLON_SEPARATED\": words_with_joiner(\"::\"),\n \"ALL_CAPS\": (SEP, 
every_word(lambda w: w.upper())),\n \"ALL_LOWERCASE\": (SEP, every_word(lambda w: w.lower())),\n \"DOUBLE_QUOTED_STRING\": (SEP, surround('\"')),\n \"SINGLE_QUOTED_STRING\": (SEP, surround(\"'\")),\n \"SPACE_SURROUNDED_STRING\": (SEP, surround(\" \")),\n \"DOT_SEPARATED\": words_with_joiner(\".\"),\n \"DOT_SNAKE\": (NOSEP, lambda i, word, _: \".\" + word if i == 0 else \"_\" + word),\n \"SLASH_SEPARATED\": (NOSEP, every_word(lambda w: \"/\" + w)),\n \"CAPITALIZE_FIRST_WORD\": (SEP, first_vs_rest(lambda w: w.capitalize())),\n \"CAPITALIZE_ALL_WORDS\": (\n SEP,\n lambda i, word, _: word.capitalize()\n if i == 0 or word not in words_to_keep_lowercase\n else word,\n ),\n \"FIRST_THREE\": (NOSEP, lambda i, word, _: word[0:3]),\n \"FIRST_FOUR\": (NOSEP, lambda i, word, _: word[0:4]),\n \"FIRST_FIVE\": (NOSEP, lambda i, word, _: word[0:5]),\n}\n\n# This is the mapping from spoken phrases to formatters\nformatters_words = {\n \"allcaps\": formatters_dict[\"ALL_CAPS\"],\n \"alldown\": formatters_dict[\"ALL_LOWERCASE\"],\n \"camel\": formatters_dict[\"PRIVATE_CAMEL_CASE\"],\n \"dotted\": formatters_dict[\"DOT_SEPARATED\"],\n \"dubstring\": formatters_dict[\"DOUBLE_QUOTED_STRING\"],\n \"dunder\": formatters_dict[\"DOUBLE_UNDERSCORE\"],\n \"hammer\": formatters_dict[\"PUBLIC_CAMEL_CASE\"],\n \"kebab\": formatters_dict[\"DASH_SEPARATED\"],\n \"packed\": formatters_dict[\"DOUBLE_COLON_SEPARATED\"],\n \"padded\": formatters_dict[\"SPACE_SURROUNDED_STRING\"],\n # \"say\": formatters_dict[\"NOOP\"],\n \"sentence\": formatters_dict[\"CAPITALIZE_FIRST_WORD\"],\n \"slasher\": formatters_dict[\"SLASH_SEPARATED\"],\n \"smash\": formatters_dict[\"NO_SPACES\"],\n \"snake\": formatters_dict[\"SNAKE_CASE\"],\n # \"speak\": formatters_dict[\"NOOP\"],\n \"string\": formatters_dict[\"SINGLE_QUOTED_STRING\"],\n \"title\": formatters_dict[\"CAPITALIZE_ALL_WORDS\"],\n # disable a few formatters for now\n # \"tree\": formatters_dict[\"FIRST_THREE\"],\n # \"quad\": formatters_dict[\"FIRST_FOUR\"],\n # \"fiver\": formatters_dict[\"FIRST_FIVE\"],\n}\n\nall_formatters = {}\nall_formatters.update(formatters_dict)\nall_formatters.update(formatters_words)\n\nmod = Module()\nmod.list(\"formatters\", desc=\"list of formatters\")\n\n\[email protected]\ndef formatters(m) -> str:\n \"Returns a comma-separated string of formatters e.g. 'SNAKE,DUBSTRING'\"\n\n\[email protected]\ndef format_text(m) -> str:\n \"Formats the text and returns a string\"\n\n\[email protected]_class\nclass Actions:\n def formatted_text(phrase: Union[str, Phrase], formatters: str) -> str:\n \"\"\"Formats a phrase according to formatters. formatters is a comma-separated string of formatters (e.g. 
'CAPITALIZE_ALL_WORDS,DOUBLE_QUOTED_STRING')\"\"\"\n return format_phrase(phrase, formatters)\n\n def formatters_help_toggle():\n \"\"\"Lists all formatters\"\"\"\n if gui.showing:\n gui.hide()\n else:\n gui.show()\n\n def formatters_recent_toggle():\n \"\"\"Toggles list of recent formatters\"\"\"\n if recent_gui.showing:\n recent_gui.hide()\n else:\n recent_gui.show()\n\n def formatters_recent_select(number: int):\n \"\"\"Inserts a recent formatter\"\"\"\n if len(formatted_phrase_history) >= number:\n return formatted_phrase_history[number - 1]\n return \"\"\n\n def formatters_clear_last():\n \"\"\"Clears the last formatted phrase\"\"\"\n if len(formatted_phrase_history) > 0:\n for character in formatted_phrase_history[0]:\n actions.edit.delete()\n\n def formatters_reformat_last(formatters: str) -> str:\n \"\"\"Reformats last formatted phrase\"\"\"\n global last_phrase\n return format_phrase(last_phrase, formatters)\n\n def formatters_reformat_selection(formatters: str) -> str:\n \"\"\"Reformats the current selection.\"\"\"\n selected = edit.selected_text()\n unformatted = re.sub(r\"[^a-zA-Z0-9]+\", \" \", selected).lower()\n # TODO: Separate out camelcase & studleycase vars\n\n # Delete separately for compatibility with programs that don't overwrite\n # selected text (e.g. Emacs)\n edit.delete()\n text = actions.self.formatted_text(unformatted, formatters)\n actions.insert(text)\n return text\n\n\[email protected](rule=\"{self.formatters}+\")\ndef formatters(m):\n return \",\".join(m.formatters_list)\n\n\[email protected](rule=\"<self.formatters> <user.text>\")\ndef format_text(m):\n return format_phrase(m.text, m.formatters)\n\n\nctx.lists[\"self.formatters\"] = formatters_words.keys()\n\n\[email protected](software=False)\ndef gui(gui: imgui.GUI):\n gui.text(\"List formatters\")\n gui.line()\n for name in sorted(set(formatters_words.keys())):\n gui.text(f\"{name} | {format_phrase_no_history(['one', 'two', 'three'], name)}\")\n\n\[email protected](software=False)\ndef recent_gui(gui: imgui.GUI):\n gui.text(\"Recent formatters\")\n gui.line()\n for index, result in enumerate(formatted_phrase_history, 1):\n gui.text(\"{}. {}\".format(index, result))\n", "path": "code/formatters.py"}]} | 3,055 | 303 |
gh_patches_debug_2926 | rasdani/github-patches | git_diff | Mailu__Mailu-2116 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error 404 not found when opening admin after upgrade 1.8 to master
## Before you open your issue
- [X] Check if no issue or pull-request for this already exists.
- [X] Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html). (Tip, use the search function on the documentation page)
- [X] You understand `Mailu` is made by volunteers in their **free time** — be concise, civil and accept that delays can occur.
- [X] The title of the issue should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.
## Environment & Versions
### Environment
- [X] docker-compose
### Versions
Before upgrade: Docker 1.8 images.
After upgrade: Docker master images (pulled 30 December 2021).
## Description
**Mailu 1.8** image redirects `/admin` to `/admin/ui`.
**Mailu master** image no longer redirects `/admin/ui`, as the `ui` part of the URL has been removed according to the [towncrier newsfragment 1929.enhancement](https://github.com/Mailu/Mailu/blob/master/towncrier/newsfragments/1929.enhancement):
> Removed the /admin/ prefix to reduce complexity of routing with Mailu. Admin is accessible directly via /admin instead of /admin/ui
After the upgrade from `1.8` to `master`, the browser still opens the cached URL `/admin/ui` when visiting the admin page, which results in a 404 Not Found.
## Replication Steps
1. Create 1.8 production environment on AMD64 platform using `mailu 1.8 Docker images`.
2. Make sure the Admin page works.
3. Remove docker containers (`docker-compose down`).
4. Recreate **all** containers at the same time using `mailu master Docker images`.
5. Open the root mail domain. The browser uses the cached URL `admin/ui` and shows a 404 Not Found error.
Note: Tested with `TLS_FLAVOR=letsencrypt`, admin, roundcube, and Firefox.
## Expected behaviour
Backwards compatibility after the Mailu 1.8 upgrade, without the need to clear browser caches.
## Front log
```
front_1 | <IP> - - [30/Dec/2021:10:14:35 +0000] "GET /admin/ui/ HTTP/2.0" 404 198 "https://mail.mydomain.nl/sso/login" "Mozilla/5.0 (X11; Linux x86_64; rv:95.0) Gecko/20100101 Firefox/95.0"
```
## Bugfix
The proposal is to always redirect `/admin/ui` to `/admin` to prevent browser caching problems after the upgrade.
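For illustration, a minimal Flask sketch of such a permanent redirect (the app setup, route, and view names here are assumptions for the sketch, not Mailu's actual routing):

```python
# Illustrative sketch only: permanently redirect the old admin path to the new one.
import flask

app = flask.Flask(__name__)

@app.route("/admin/ui/")
@app.route("/admin/ui/<path:rest>")
def redirect_old_admin_ui(rest=""):
    # 301 so browsers replace the cached /admin/ui location.
    return flask.redirect("/admin/" + rest, code=301)
```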
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/admin/mailu/ui/views/base.py`
Content:
```
1 from mailu import models, utils
2 from mailu.ui import ui, forms, access
3
4 from flask import current_app as app
5 import flask
6 import flask_login
7
8
9 @ui.route('/', methods=["GET"])
10 @access.authenticated
11 def index():
12 return flask.redirect(flask.url_for('.user_settings'))
13
14 @ui.route('/announcement', methods=['GET', 'POST'])
15 @access.global_admin
16 def announcement():
17 form = forms.AnnouncementForm()
18 if form.validate_on_submit():
19 for user in models.User.query.all():
20 user.sendmail(form.announcement_subject.data,
21 form.announcement_body.data)
22 # Force-empty the form
23 form.announcement_subject.data = ''
24 form.announcement_body.data = ''
25 flask.flash('Your announcement was sent', 'success')
26 return flask.render_template('announcement.html', form=form)
27
28 @ui.route('/webmail', methods=['GET'])
29 def webmail():
30 return flask.redirect(app.config['WEB_WEBMAIL'])
31
32 @ui.route('/client', methods=['GET'])
33 def client():
34 return flask.render_template('client.html')
35
36 @ui.route('/webui_antispam', methods=['GET'])
37 def antispam():
38 return flask.render_template('antispam.html')
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/admin/mailu/ui/views/base.py b/core/admin/mailu/ui/views/base.py
--- a/core/admin/mailu/ui/views/base.py
+++ b/core/admin/mailu/ui/views/base.py
@@ -11,6 +11,10 @@
def index():
return flask.redirect(flask.url_for('.user_settings'))
[email protected]('/ui/')
+def redirect_old_path():
+ return flask.redirect(flask.url_for('.index'), code=301)
+
@ui.route('/announcement', methods=['GET', 'POST'])
@access.global_admin
def announcement():
| {"golden_diff": "diff --git a/core/admin/mailu/ui/views/base.py b/core/admin/mailu/ui/views/base.py\n--- a/core/admin/mailu/ui/views/base.py\n+++ b/core/admin/mailu/ui/views/base.py\n@@ -11,6 +11,10 @@\n def index():\n return flask.redirect(flask.url_for('.user_settings'))\n \[email protected]('/ui/')\n+def redirect_old_path():\n+ return flask.redirect(flask.url_for('.index'), code=301)\n+\n @ui.route('/announcement', methods=['GET', 'POST'])\n @access.global_admin\n def announcement():\n", "issue": "Error 404 not found when opening admin after upgrade 1.8 to master\n## Before you open your issue\r\n- [X] Check if no issue or pull-request for this already exists.\r\n- [X] Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html). (Tip, use the search function on the documentation page)\r\n- [X] You understand `Mailu` is made by volunteers in their **free time** \u2014 be conscise, civil and accept that delays can occur.\r\n- [X] The title of the issue should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.\r\n\r\n## Environment & Versions\r\n### Environment\r\n - [X] docker-compose\r\n\r\n### Versions\r\n\r\nBefore upgrade: Docker 1.8 images.\r\nAfter upgrade: Docker master images (pulled 30 December 2021).\r\n\r\n## Description\r\n\r\n**Mailu 1.8** image redirects `/admin` to `/admin/ui`.\r\n\r\n**Mailu master** image no longer redirects `/admin/ui` as the `ui` part in the URL has been removed according to [Tomcat 1929.enhacement](https://github.com/Mailu/Mailu/blob/master/towncrier/newsfragments/1929.enhancement):\r\n\r\n> Removed the /admin/ prefix to reduce complexity of routing with Mailu. Admin is accessible directly via /admin instead of /admin/ui\r\n\r\nAfter the upgrade from `1.8` to `master` and visiting the admin page, the browser still uses the cached URL `/admin/ui` and results in 404 not found.\r\n\r\n\r\n## Replication Steps\r\n\r\n1. Create 1.8 production environment on AMD64 platform using `mailu 1.8 Docker images`.\r\n2. Make sure the Admin page works.\r\n3. Remove docker containers (`docker-compose down`).\r\n4. Recreate **all** containers at the same time using `mailu master Docker images`.\r\n5. Open root mail domain. 
The browser uses the cached URL `admin/ui` and shows Error 404 not found.\r\n\r\nNote: Tested with `TLS_FLAVOR=letsencrypt`, admin and roundcube and Firefox.\r\n\r\n\r\n## Expected behaviour\r\n\r\nBackwards compatibility after Mailu 1.8 upgrade without the need of removing browser caches.\r\n\r\n## Front log\r\n\r\n```\r\nfront_1 | <IP> - - [30/Dec/2021:10:14:35 +0000] \"GET /admin/ui/ HTTP/2.0\" 404 198 \"https://mail.mydomain.nl/sso/login\" \"Mozilla/5.0 (X11; Linux x86_64; rv:95.0) Gecko/20100101 Firefox/95.0\"\r\n```\r\n\r\n## Bugfix\r\n\r\nProposal is to redirect `/admin/ui` always to `/admin` to prevent browser caching problems after the upgrade.\n", "before_files": [{"content": "from mailu import models, utils\nfrom mailu.ui import ui, forms, access\n\nfrom flask import current_app as app\nimport flask\nimport flask_login\n\n\[email protected]('/', methods=[\"GET\"])\[email protected]\ndef index():\n return flask.redirect(flask.url_for('.user_settings'))\n\[email protected]('/announcement', methods=['GET', 'POST'])\[email protected]_admin\ndef announcement():\n form = forms.AnnouncementForm()\n if form.validate_on_submit():\n for user in models.User.query.all():\n user.sendmail(form.announcement_subject.data,\n form.announcement_body.data)\n # Force-empty the form\n form.announcement_subject.data = ''\n form.announcement_body.data = ''\n flask.flash('Your announcement was sent', 'success')\n return flask.render_template('announcement.html', form=form)\n\[email protected]('/webmail', methods=['GET'])\ndef webmail():\n return flask.redirect(app.config['WEB_WEBMAIL'])\n\[email protected]('/client', methods=['GET'])\ndef client():\n return flask.render_template('client.html')\n\[email protected]('/webui_antispam', methods=['GET'])\ndef antispam():\n return flask.render_template('antispam.html')\n", "path": "core/admin/mailu/ui/views/base.py"}], "after_files": [{"content": "from mailu import models, utils\nfrom mailu.ui import ui, forms, access\n\nfrom flask import current_app as app\nimport flask\nimport flask_login\n\n\[email protected]('/', methods=[\"GET\"])\[email protected]\ndef index():\n return flask.redirect(flask.url_for('.user_settings'))\n\[email protected]('/ui/')\ndef redirect_old_path():\n return flask.redirect(flask.url_for('.index'), code=301)\n\[email protected]('/announcement', methods=['GET', 'POST'])\[email protected]_admin\ndef announcement():\n form = forms.AnnouncementForm()\n if form.validate_on_submit():\n for user in models.User.query.all():\n user.sendmail(form.announcement_subject.data,\n form.announcement_body.data)\n # Force-empty the form\n form.announcement_subject.data = ''\n form.announcement_body.data = ''\n flask.flash('Your announcement was sent', 'success')\n return flask.render_template('announcement.html', form=form)\n\[email protected]('/webmail', methods=['GET'])\ndef webmail():\n return flask.redirect(app.config['WEB_WEBMAIL'])\n\[email protected]('/client', methods=['GET'])\ndef client():\n return flask.render_template('client.html')\n\[email protected]('/webui_antispam', methods=['GET'])\ndef antispam():\n return flask.render_template('antispam.html')\n", "path": "core/admin/mailu/ui/views/base.py"}]} | 1,230 | 126 |
gh_patches_debug_12516 | rasdani/github-patches | git_diff | getnikola__nikola-1908 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
nikola check --clean-files should delete empty directories
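For illustration, a minimal sketch of one way to prune empty directories bottom-up after orphan files are removed (illustrative only, not Nikola's actual implementation):

```python
import os

def remove_empty_dirs(output_folder):
    # Walk bottom-up so child directories are removed before their parents.
    for root, _dirs, _files in os.walk(output_folder, topdown=False):
        if root == output_folder:
            continue  # keep the output folder itself
        try:
            os.rmdir(root)  # only succeeds when the directory is already empty
        except OSError:
            pass  # directory still has content; leave it alone
```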
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nikola/plugins/command/check.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2015 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 from __future__ import print_function
28 from collections import defaultdict
29 import os
30 import re
31 import sys
32 try:
33 from urllib import unquote
34 from urlparse import urlparse, urljoin, urldefrag
35 except ImportError:
36 from urllib.parse import unquote, urlparse, urljoin, urldefrag # NOQA
37
38 from doit.loader import generate_tasks
39 import lxml.html
40 import requests
41
42 from nikola.plugin_categories import Command
43 from nikola.utils import get_logger, STDERR_HANDLER
44
45
46 def _call_nikola_list(site):
47 files = []
48 deps = defaultdict(list)
49 for task in generate_tasks('render_site', site.gen_tasks('render_site', "Task", '')):
50 files.extend(task.targets)
51 for target in task.targets:
52 deps[target].extend(task.file_dep)
53 for task in generate_tasks('post_render', site.gen_tasks('render_site', "LateTask", '')):
54 files.extend(task.targets)
55 for target in task.targets:
56 deps[target].extend(task.file_dep)
57 return files, deps
58
59
60 def real_scan_files(site):
61 task_fnames = set([])
62 real_fnames = set([])
63 output_folder = site.config['OUTPUT_FOLDER']
64 # First check that all targets are generated in the right places
65 for fname in _call_nikola_list(site)[0]:
66 fname = fname.strip()
67 if fname.startswith(output_folder):
68 task_fnames.add(fname)
69 # And now check that there are no non-target files
70 for root, dirs, files in os.walk(output_folder, followlinks=True):
71 for src_name in files:
72 fname = os.path.join(root, src_name)
73 real_fnames.add(fname)
74
75 only_on_output = list(real_fnames - task_fnames)
76
77 only_on_input = list(task_fnames - real_fnames)
78
79 return (only_on_output, only_on_input)
80
81
82 def fs_relpath_from_url_path(url_path):
83 """Expects as input an urlparse(s).path"""
84 url_path = unquote(url_path)
85 # in windows relative paths don't begin with os.sep
86 if sys.platform == 'win32' and len(url_path):
87 url_path = url_path.replace('/', '\\')
88 return url_path
89
90
91 class CommandCheck(Command):
92 """Check the generated site."""
93
94 name = "check"
95 logger = None
96
97 doc_usage = "[-v] (-l [--find-sources] [-r] | -f [--clean-files])"
98 doc_purpose = "check links and files in the generated site"
99 cmd_options = [
100 {
101 'name': 'links',
102 'short': 'l',
103 'long': 'check-links',
104 'type': bool,
105 'default': False,
106 'help': 'Check for dangling links',
107 },
108 {
109 'name': 'files',
110 'short': 'f',
111 'long': 'check-files',
112 'type': bool,
113 'default': False,
114 'help': 'Check for unknown (orphaned and not generated) files',
115 },
116 {
117 'name': 'clean',
118 'long': 'clean-files',
119 'type': bool,
120 'default': False,
121 'help': 'Remove all unknown files, use with caution',
122 },
123 {
124 'name': 'find_sources',
125 'long': 'find-sources',
126 'type': bool,
127 'default': False,
128 'help': 'List possible source files for files with broken links.',
129 },
130 {
131 'name': 'verbose',
132 'long': 'verbose',
133 'short': 'v',
134 'type': bool,
135 'default': False,
136 'help': 'Be more verbose.',
137 },
138 {
139 'name': 'remote',
140 'long': 'remote',
141 'short': 'r',
142 'type': bool,
143 'default': False,
144 'help': 'Check that remote links work.',
145 },
146 ]
147
148 def _execute(self, options, args):
149 """Check the generated site."""
150 self.logger = get_logger('check', STDERR_HANDLER)
151
152 if not options['links'] and not options['files'] and not options['clean']:
153 print(self.help())
154 return False
155 if options['verbose']:
156 self.logger.level = 1
157 else:
158 self.logger.level = 4
159 if options['links']:
160 failure = self.scan_links(options['find_sources'], options['remote'])
161 if options['files']:
162 failure = self.scan_files()
163 if options['clean']:
164 failure = self.clean_files()
165 if failure:
166 return 1
167
168 existing_targets = set([])
169 checked_remote_targets = {}
170
171 def analyze(self, fname, find_sources=False, check_remote=False):
172 rv = False
173 self.whitelist = [re.compile(x) for x in self.site.config['LINK_CHECK_WHITELIST']]
174 base_url = urlparse(self.site.config['BASE_URL'])
175 self.existing_targets.add(self.site.config['SITE_URL'])
176 self.existing_targets.add(self.site.config['BASE_URL'])
177 url_type = self.site.config['URL_TYPE']
178
179 deps = {}
180 if find_sources:
181 deps = _call_nikola_list(self.site)[1]
182
183 if url_type in ('absolute', 'full_path'):
184 url_netloc_to_root = urlparse(self.site.config['BASE_URL']).path
185 try:
186 filename = fname
187
188 if filename.startswith(self.site.config['CACHE_FOLDER']):
189 # Do not look at links in the cache, which are not parsed by
190 # anyone and may result in false positives. Problems arise
191 # with galleries, for example. Full rationale: (Issue #1447)
192 self.logger.notice("Ignoring {0} (in cache, links may be incorrect)".format(filename))
193 return False
194
195 if not os.path.exists(fname):
196 # Quietly ignore files that don’t exist; use `nikola check -f` instead (Issue #1831)
197 return False
198
199 d = lxml.html.fromstring(open(filename, 'rb').read())
200 for l in d.iterlinks():
201 target = l[2]
202 if target == "#":
203 continue
204 target, _ = urldefrag(target)
205 parsed = urlparse(target)
206
207 # Warn about links from https to http (mixed-security)
208 if base_url.netloc == parsed.netloc and base_url.scheme == "https" and parsed.scheme == "http":
209 self.logger.warn("Mixed-content security for link in {0}: {1}".format(filename, target))
210
211 # Absolute links to other domains, skip
212 # Absolute links when using only paths, skip.
213 if ((parsed.scheme or target.startswith('//')) and parsed.netloc != base_url.netloc) or \
214 ((parsed.scheme or target.startswith('//')) and url_type in ('rel_path', 'full_path')):
215 if not check_remote or parsed.scheme not in ["http", "https"]:
216 continue
217 if parsed.netloc == base_url.netloc: # absolute URL to self.site
218 continue
219 if target in self.checked_remote_targets: # already checked this exact target
220 if self.checked_remote_targets[target] > 399:
221 self.logger.warn("Broken link in {0}: {1} [Error {2}]".format(filename, target, self.checked_remote_targets[target]))
222 continue
223 # Check the remote link works
224 req_headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0 (Nikola)'} # I’m a real boy!
225 resp = requests.head(target, headers=req_headers)
226 self.checked_remote_targets[target] = resp.status_code
227 if resp.status_code > 399: # Error
228 self.logger.warn("Broken link in {0}: {1} [Error {2}]".format(filename, target, resp.status_code))
229 continue
230 elif resp.status_code <= 399: # The address leads *somewhere* that is not an error
231 self.logger.debug("Successfully checked remote link in {0}: {1} [HTTP: {2}]".format(filename, target, resp.status_code))
232 continue
233 self.logger.warn("Could not check remote link in {0}: {1} [Unknown problem]".format(filename, target))
234 continue
235
236 if url_type == 'rel_path':
237 if target.startswith('/'):
238 target_filename = os.path.abspath(
239 os.path.join(self.site.config['OUTPUT_FOLDER'], unquote(target.lstrip('/'))))
240 else: # Relative path
241 target_filename = os.path.abspath(
242 os.path.join(os.path.dirname(filename), unquote(target)))
243
244 elif url_type in ('full_path', 'absolute'):
245 if url_type == 'absolute':
246 # convert to 'full_path' case, ie url relative to root
247 url_rel_path = parsed.path[len(url_netloc_to_root):]
248 else:
249 # convert to relative to base path
250 url_rel_path = target[len(url_netloc_to_root):]
251 if url_rel_path == '' or url_rel_path.endswith('/'):
252 url_rel_path = urljoin(url_rel_path, self.site.config['INDEX_FILE'])
253 fs_rel_path = fs_relpath_from_url_path(url_rel_path)
254 target_filename = os.path.join(self.site.config['OUTPUT_FOLDER'], fs_rel_path)
255
256 if any(re.search(x, target_filename) for x in self.whitelist):
257 continue
258 elif target_filename not in self.existing_targets:
259 if os.path.exists(target_filename):
260 self.logger.notice("Good link {0} => {1}".format(target, target_filename))
261 self.existing_targets.add(target_filename)
262 else:
263 rv = True
264 self.logger.warn("Broken link in {0}: {1}".format(filename, target))
265 if find_sources:
266 self.logger.warn("Possible sources:")
267 self.logger.warn("\n".join(deps[filename]))
268 self.logger.warn("===============================\n")
269 except Exception as exc:
270 self.logger.error("Error with: {0} {1}".format(filename, exc))
271 return rv
272
273 def scan_links(self, find_sources=False, check_remote=False):
274 self.logger.info("Checking Links:")
275 self.logger.info("===============\n")
276 self.logger.notice("{0} mode".format(self.site.config['URL_TYPE']))
277 failure = False
278 # Maybe we should just examine all HTML files
279 output_folder = self.site.config['OUTPUT_FOLDER']
280 for fname in _call_nikola_list(self.site)[0]:
281 if fname.startswith(output_folder) and '.html' == fname[-5:]:
282 if self.analyze(fname, find_sources, check_remote):
283 failure = True
284 if not failure:
285 self.logger.info("All links checked.")
286 return failure
287
288 def scan_files(self):
289 failure = False
290 self.logger.info("Checking Files:")
291 self.logger.info("===============\n")
292 only_on_output, only_on_input = real_scan_files(self.site)
293
294 # Ignore folders
295 only_on_output = [p for p in only_on_output if not os.path.isdir(p)]
296 only_on_input = [p for p in only_on_input if not os.path.isdir(p)]
297
298 if only_on_output:
299 only_on_output.sort()
300 self.logger.warn("Files from unknown origins (orphans):")
301 for f in only_on_output:
302 self.logger.warn(f)
303 failure = True
304 if only_on_input:
305 only_on_input.sort()
306 self.logger.warn("Files not generated:")
307 for f in only_on_input:
308 self.logger.warn(f)
309 if not failure:
310 self.logger.info("All files checked.")
311 return failure
312
313 def clean_files(self):
314 only_on_output, _ = real_scan_files(self.site)
315 for f in only_on_output:
316 os.unlink(f)
317 return True
318
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nikola/plugins/command/check.py b/nikola/plugins/command/check.py
--- a/nikola/plugins/command/check.py
+++ b/nikola/plugins/command/check.py
@@ -313,5 +313,19 @@
def clean_files(self):
only_on_output, _ = real_scan_files(self.site)
for f in only_on_output:
+ self.logger.info('removed: {0}'.format(f))
os.unlink(f)
+
+ # Find empty directories and remove them
+ output_folder = self.site.config['OUTPUT_FOLDER']
+ all_dirs = []
+ for root, dirs, files in os.walk(output_folder, followlinks=True):
+ all_dirs.append(root)
+ all_dirs.sort(key=len, reverse=True)
+ for d in all_dirs:
+ try:
+ os.rmdir(d)
+ self.logger.info('removed: {0}/'.format(d))
+ except OSError:
+ pass
return True
| {"golden_diff": "diff --git a/nikola/plugins/command/check.py b/nikola/plugins/command/check.py\n--- a/nikola/plugins/command/check.py\n+++ b/nikola/plugins/command/check.py\n@@ -313,5 +313,19 @@\n def clean_files(self):\n only_on_output, _ = real_scan_files(self.site)\n for f in only_on_output:\n+ self.logger.info('removed: {0}'.format(f))\n os.unlink(f)\n+\n+ # Find empty directories and remove them\n+ output_folder = self.site.config['OUTPUT_FOLDER']\n+ all_dirs = []\n+ for root, dirs, files in os.walk(output_folder, followlinks=True):\n+ all_dirs.append(root)\n+ all_dirs.sort(key=len, reverse=True)\n+ for d in all_dirs:\n+ try:\n+ os.rmdir(d)\n+ self.logger.info('removed: {0}/'.format(d))\n+ except OSError:\n+ pass\n return True\n", "issue": "nikola check --clean-files should delete empty directories\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2015 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\nfrom __future__ import print_function\nfrom collections import defaultdict\nimport os\nimport re\nimport sys\ntry:\n from urllib import unquote\n from urlparse import urlparse, urljoin, urldefrag\nexcept ImportError:\n from urllib.parse import unquote, urlparse, urljoin, urldefrag # NOQA\n\nfrom doit.loader import generate_tasks\nimport lxml.html\nimport requests\n\nfrom nikola.plugin_categories import Command\nfrom nikola.utils import get_logger, STDERR_HANDLER\n\n\ndef _call_nikola_list(site):\n files = []\n deps = defaultdict(list)\n for task in generate_tasks('render_site', site.gen_tasks('render_site', \"Task\", '')):\n files.extend(task.targets)\n for target in task.targets:\n deps[target].extend(task.file_dep)\n for task in generate_tasks('post_render', site.gen_tasks('render_site', \"LateTask\", '')):\n files.extend(task.targets)\n for target in task.targets:\n deps[target].extend(task.file_dep)\n return files, deps\n\n\ndef real_scan_files(site):\n task_fnames = set([])\n real_fnames = set([])\n output_folder = site.config['OUTPUT_FOLDER']\n # First check that all targets are generated in the right places\n for fname in _call_nikola_list(site)[0]:\n fname = fname.strip()\n if fname.startswith(output_folder):\n task_fnames.add(fname)\n # And now check that there are no non-target files\n for root, dirs, files in os.walk(output_folder, followlinks=True):\n for src_name in files:\n fname = os.path.join(root, src_name)\n real_fnames.add(fname)\n\n only_on_output = list(real_fnames - 
task_fnames)\n\n only_on_input = list(task_fnames - real_fnames)\n\n return (only_on_output, only_on_input)\n\n\ndef fs_relpath_from_url_path(url_path):\n \"\"\"Expects as input an urlparse(s).path\"\"\"\n url_path = unquote(url_path)\n # in windows relative paths don't begin with os.sep\n if sys.platform == 'win32' and len(url_path):\n url_path = url_path.replace('/', '\\\\')\n return url_path\n\n\nclass CommandCheck(Command):\n \"\"\"Check the generated site.\"\"\"\n\n name = \"check\"\n logger = None\n\n doc_usage = \"[-v] (-l [--find-sources] [-r] | -f [--clean-files])\"\n doc_purpose = \"check links and files in the generated site\"\n cmd_options = [\n {\n 'name': 'links',\n 'short': 'l',\n 'long': 'check-links',\n 'type': bool,\n 'default': False,\n 'help': 'Check for dangling links',\n },\n {\n 'name': 'files',\n 'short': 'f',\n 'long': 'check-files',\n 'type': bool,\n 'default': False,\n 'help': 'Check for unknown (orphaned and not generated) files',\n },\n {\n 'name': 'clean',\n 'long': 'clean-files',\n 'type': bool,\n 'default': False,\n 'help': 'Remove all unknown files, use with caution',\n },\n {\n 'name': 'find_sources',\n 'long': 'find-sources',\n 'type': bool,\n 'default': False,\n 'help': 'List possible source files for files with broken links.',\n },\n {\n 'name': 'verbose',\n 'long': 'verbose',\n 'short': 'v',\n 'type': bool,\n 'default': False,\n 'help': 'Be more verbose.',\n },\n {\n 'name': 'remote',\n 'long': 'remote',\n 'short': 'r',\n 'type': bool,\n 'default': False,\n 'help': 'Check that remote links work.',\n },\n ]\n\n def _execute(self, options, args):\n \"\"\"Check the generated site.\"\"\"\n self.logger = get_logger('check', STDERR_HANDLER)\n\n if not options['links'] and not options['files'] and not options['clean']:\n print(self.help())\n return False\n if options['verbose']:\n self.logger.level = 1\n else:\n self.logger.level = 4\n if options['links']:\n failure = self.scan_links(options['find_sources'], options['remote'])\n if options['files']:\n failure = self.scan_files()\n if options['clean']:\n failure = self.clean_files()\n if failure:\n return 1\n\n existing_targets = set([])\n checked_remote_targets = {}\n\n def analyze(self, fname, find_sources=False, check_remote=False):\n rv = False\n self.whitelist = [re.compile(x) for x in self.site.config['LINK_CHECK_WHITELIST']]\n base_url = urlparse(self.site.config['BASE_URL'])\n self.existing_targets.add(self.site.config['SITE_URL'])\n self.existing_targets.add(self.site.config['BASE_URL'])\n url_type = self.site.config['URL_TYPE']\n\n deps = {}\n if find_sources:\n deps = _call_nikola_list(self.site)[1]\n\n if url_type in ('absolute', 'full_path'):\n url_netloc_to_root = urlparse(self.site.config['BASE_URL']).path\n try:\n filename = fname\n\n if filename.startswith(self.site.config['CACHE_FOLDER']):\n # Do not look at links in the cache, which are not parsed by\n # anyone and may result in false positives. Problems arise\n # with galleries, for example. 
Full rationale: (Issue #1447)\n self.logger.notice(\"Ignoring {0} (in cache, links may be incorrect)\".format(filename))\n return False\n\n if not os.path.exists(fname):\n # Quietly ignore files that don\u2019t exist; use `nikola check -f` instead (Issue #1831)\n return False\n\n d = lxml.html.fromstring(open(filename, 'rb').read())\n for l in d.iterlinks():\n target = l[2]\n if target == \"#\":\n continue\n target, _ = urldefrag(target)\n parsed = urlparse(target)\n\n # Warn about links from https to http (mixed-security)\n if base_url.netloc == parsed.netloc and base_url.scheme == \"https\" and parsed.scheme == \"http\":\n self.logger.warn(\"Mixed-content security for link in {0}: {1}\".format(filename, target))\n\n # Absolute links to other domains, skip\n # Absolute links when using only paths, skip.\n if ((parsed.scheme or target.startswith('//')) and parsed.netloc != base_url.netloc) or \\\n ((parsed.scheme or target.startswith('//')) and url_type in ('rel_path', 'full_path')):\n if not check_remote or parsed.scheme not in [\"http\", \"https\"]:\n continue\n if parsed.netloc == base_url.netloc: # absolute URL to self.site\n continue\n if target in self.checked_remote_targets: # already checked this exact target\n if self.checked_remote_targets[target] > 399:\n self.logger.warn(\"Broken link in {0}: {1} [Error {2}]\".format(filename, target, self.checked_remote_targets[target]))\n continue\n # Check the remote link works\n req_headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0 (Nikola)'} # I\u2019m a real boy!\n resp = requests.head(target, headers=req_headers)\n self.checked_remote_targets[target] = resp.status_code\n if resp.status_code > 399: # Error\n self.logger.warn(\"Broken link in {0}: {1} [Error {2}]\".format(filename, target, resp.status_code))\n continue\n elif resp.status_code <= 399: # The address leads *somewhere* that is not an error\n self.logger.debug(\"Successfully checked remote link in {0}: {1} [HTTP: {2}]\".format(filename, target, resp.status_code))\n continue\n self.logger.warn(\"Could not check remote link in {0}: {1} [Unknown problem]\".format(filename, target))\n continue\n\n if url_type == 'rel_path':\n if target.startswith('/'):\n target_filename = os.path.abspath(\n os.path.join(self.site.config['OUTPUT_FOLDER'], unquote(target.lstrip('/'))))\n else: # Relative path\n target_filename = os.path.abspath(\n os.path.join(os.path.dirname(filename), unquote(target)))\n\n elif url_type in ('full_path', 'absolute'):\n if url_type == 'absolute':\n # convert to 'full_path' case, ie url relative to root\n url_rel_path = parsed.path[len(url_netloc_to_root):]\n else:\n # convert to relative to base path\n url_rel_path = target[len(url_netloc_to_root):]\n if url_rel_path == '' or url_rel_path.endswith('/'):\n url_rel_path = urljoin(url_rel_path, self.site.config['INDEX_FILE'])\n fs_rel_path = fs_relpath_from_url_path(url_rel_path)\n target_filename = os.path.join(self.site.config['OUTPUT_FOLDER'], fs_rel_path)\n\n if any(re.search(x, target_filename) for x in self.whitelist):\n continue\n elif target_filename not in self.existing_targets:\n if os.path.exists(target_filename):\n self.logger.notice(\"Good link {0} => {1}\".format(target, target_filename))\n self.existing_targets.add(target_filename)\n else:\n rv = True\n self.logger.warn(\"Broken link in {0}: {1}\".format(filename, target))\n if find_sources:\n self.logger.warn(\"Possible sources:\")\n self.logger.warn(\"\\n\".join(deps[filename]))\n 
self.logger.warn(\"===============================\\n\")\n except Exception as exc:\n self.logger.error(\"Error with: {0} {1}\".format(filename, exc))\n return rv\n\n def scan_links(self, find_sources=False, check_remote=False):\n self.logger.info(\"Checking Links:\")\n self.logger.info(\"===============\\n\")\n self.logger.notice(\"{0} mode\".format(self.site.config['URL_TYPE']))\n failure = False\n # Maybe we should just examine all HTML files\n output_folder = self.site.config['OUTPUT_FOLDER']\n for fname in _call_nikola_list(self.site)[0]:\n if fname.startswith(output_folder) and '.html' == fname[-5:]:\n if self.analyze(fname, find_sources, check_remote):\n failure = True\n if not failure:\n self.logger.info(\"All links checked.\")\n return failure\n\n def scan_files(self):\n failure = False\n self.logger.info(\"Checking Files:\")\n self.logger.info(\"===============\\n\")\n only_on_output, only_on_input = real_scan_files(self.site)\n\n # Ignore folders\n only_on_output = [p for p in only_on_output if not os.path.isdir(p)]\n only_on_input = [p for p in only_on_input if not os.path.isdir(p)]\n\n if only_on_output:\n only_on_output.sort()\n self.logger.warn(\"Files from unknown origins (orphans):\")\n for f in only_on_output:\n self.logger.warn(f)\n failure = True\n if only_on_input:\n only_on_input.sort()\n self.logger.warn(\"Files not generated:\")\n for f in only_on_input:\n self.logger.warn(f)\n if not failure:\n self.logger.info(\"All files checked.\")\n return failure\n\n def clean_files(self):\n only_on_output, _ = real_scan_files(self.site)\n for f in only_on_output:\n os.unlink(f)\n return True\n", "path": "nikola/plugins/command/check.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2015 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\nfrom __future__ import print_function\nfrom collections import defaultdict\nimport os\nimport re\nimport sys\ntry:\n from urllib import unquote\n from urlparse import urlparse, urljoin, urldefrag\nexcept ImportError:\n from urllib.parse import unquote, urlparse, urljoin, urldefrag # NOQA\n\nfrom doit.loader import generate_tasks\nimport lxml.html\nimport requests\n\nfrom nikola.plugin_categories import Command\nfrom nikola.utils import get_logger, STDERR_HANDLER\n\n\ndef _call_nikola_list(site):\n files = []\n deps = defaultdict(list)\n for task in generate_tasks('render_site', site.gen_tasks('render_site', \"Task\", '')):\n files.extend(task.targets)\n for target in task.targets:\n deps[target].extend(task.file_dep)\n for task in generate_tasks('post_render', site.gen_tasks('render_site', \"LateTask\", '')):\n files.extend(task.targets)\n for target in task.targets:\n deps[target].extend(task.file_dep)\n return files, deps\n\n\ndef real_scan_files(site):\n task_fnames = set([])\n real_fnames = set([])\n output_folder = site.config['OUTPUT_FOLDER']\n # First check that all targets are generated in the right places\n for fname in _call_nikola_list(site)[0]:\n fname = fname.strip()\n if fname.startswith(output_folder):\n task_fnames.add(fname)\n # And now check that there are no non-target files\n for root, dirs, files in os.walk(output_folder, followlinks=True):\n for src_name in files:\n fname = os.path.join(root, src_name)\n real_fnames.add(fname)\n\n only_on_output = list(real_fnames - task_fnames)\n\n only_on_input = list(task_fnames - real_fnames)\n\n return (only_on_output, only_on_input)\n\n\ndef fs_relpath_from_url_path(url_path):\n \"\"\"Expects as input an urlparse(s).path\"\"\"\n url_path = unquote(url_path)\n # in windows relative paths don't begin with os.sep\n if sys.platform == 'win32' and len(url_path):\n url_path = url_path.replace('/', '\\\\')\n return url_path\n\n\nclass CommandCheck(Command):\n \"\"\"Check the generated site.\"\"\"\n\n name = \"check\"\n logger = None\n\n doc_usage = \"[-v] (-l [--find-sources] [-r] | -f [--clean-files])\"\n doc_purpose = \"check links and files in the generated site\"\n cmd_options = [\n {\n 'name': 'links',\n 'short': 'l',\n 'long': 'check-links',\n 'type': bool,\n 'default': False,\n 'help': 'Check for dangling links',\n },\n {\n 'name': 'files',\n 'short': 'f',\n 'long': 'check-files',\n 'type': bool,\n 'default': False,\n 'help': 'Check for unknown (orphaned and not generated) files',\n },\n {\n 'name': 'clean',\n 'long': 'clean-files',\n 'type': bool,\n 'default': False,\n 'help': 'Remove all unknown files, use with caution',\n },\n {\n 'name': 'find_sources',\n 'long': 'find-sources',\n 'type': bool,\n 'default': False,\n 'help': 'List possible source files for files with broken links.',\n },\n {\n 'name': 'verbose',\n 'long': 'verbose',\n 'short': 'v',\n 'type': bool,\n 'default': False,\n 'help': 'Be more verbose.',\n },\n {\n 'name': 'remote',\n 'long': 'remote',\n 'short': 'r',\n 'type': bool,\n 'default': False,\n 'help': 'Check that remote links work.',\n },\n ]\n\n def _execute(self, options, args):\n \"\"\"Check the generated site.\"\"\"\n self.logger = get_logger('check', STDERR_HANDLER)\n\n if not options['links'] and not options['files'] and not 
options['clean']:\n print(self.help())\n return False\n if options['verbose']:\n self.logger.level = 1\n else:\n self.logger.level = 4\n if options['links']:\n failure = self.scan_links(options['find_sources'], options['remote'])\n if options['files']:\n failure = self.scan_files()\n if options['clean']:\n failure = self.clean_files()\n if failure:\n return 1\n\n existing_targets = set([])\n checked_remote_targets = {}\n\n def analyze(self, fname, find_sources=False, check_remote=False):\n rv = False\n self.whitelist = [re.compile(x) for x in self.site.config['LINK_CHECK_WHITELIST']]\n base_url = urlparse(self.site.config['BASE_URL'])\n self.existing_targets.add(self.site.config['SITE_URL'])\n self.existing_targets.add(self.site.config['BASE_URL'])\n url_type = self.site.config['URL_TYPE']\n\n deps = {}\n if find_sources:\n deps = _call_nikola_list(self.site)[1]\n\n if url_type in ('absolute', 'full_path'):\n url_netloc_to_root = urlparse(self.site.config['BASE_URL']).path\n try:\n filename = fname\n\n if filename.startswith(self.site.config['CACHE_FOLDER']):\n # Do not look at links in the cache, which are not parsed by\n # anyone and may result in false positives. Problems arise\n # with galleries, for example. Full rationale: (Issue #1447)\n self.logger.notice(\"Ignoring {0} (in cache, links may be incorrect)\".format(filename))\n return False\n\n if not os.path.exists(fname):\n # Quietly ignore files that don\u2019t exist; use `nikola check -f` instead (Issue #1831)\n return False\n\n d = lxml.html.fromstring(open(filename, 'rb').read())\n for l in d.iterlinks():\n target = l[2]\n if target == \"#\":\n continue\n target, _ = urldefrag(target)\n parsed = urlparse(target)\n\n # Warn about links from https to http (mixed-security)\n if base_url.netloc == parsed.netloc and base_url.scheme == \"https\" and parsed.scheme == \"http\":\n self.logger.warn(\"Mixed-content security for link in {0}: {1}\".format(filename, target))\n\n # Absolute links to other domains, skip\n # Absolute links when using only paths, skip.\n if ((parsed.scheme or target.startswith('//')) and parsed.netloc != base_url.netloc) or \\\n ((parsed.scheme or target.startswith('//')) and url_type in ('rel_path', 'full_path')):\n if not check_remote or parsed.scheme not in [\"http\", \"https\"]:\n continue\n if parsed.netloc == base_url.netloc: # absolute URL to self.site\n continue\n if target in self.checked_remote_targets: # already checked this exact target\n if self.checked_remote_targets[target] > 399:\n self.logger.warn(\"Broken link in {0}: {1} [Error {2}]\".format(filename, target, self.checked_remote_targets[target]))\n continue\n # Check the remote link works\n req_headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0 (Nikola)'} # I\u2019m a real boy!\n resp = requests.head(target, headers=req_headers)\n self.checked_remote_targets[target] = resp.status_code\n if resp.status_code > 399: # Error\n self.logger.warn(\"Broken link in {0}: {1} [Error {2}]\".format(filename, target, resp.status_code))\n continue\n elif resp.status_code <= 399: # The address leads *somewhere* that is not an error\n self.logger.debug(\"Successfully checked remote link in {0}: {1} [HTTP: {2}]\".format(filename, target, resp.status_code))\n continue\n self.logger.warn(\"Could not check remote link in {0}: {1} [Unknown problem]\".format(filename, target))\n continue\n\n if url_type == 'rel_path':\n if target.startswith('/'):\n target_filename = os.path.abspath(\n 
os.path.join(self.site.config['OUTPUT_FOLDER'], unquote(target.lstrip('/'))))\n else: # Relative path\n target_filename = os.path.abspath(\n os.path.join(os.path.dirname(filename), unquote(target)))\n\n elif url_type in ('full_path', 'absolute'):\n if url_type == 'absolute':\n # convert to 'full_path' case, ie url relative to root\n url_rel_path = parsed.path[len(url_netloc_to_root):]\n else:\n # convert to relative to base path\n url_rel_path = target[len(url_netloc_to_root):]\n if url_rel_path == '' or url_rel_path.endswith('/'):\n url_rel_path = urljoin(url_rel_path, self.site.config['INDEX_FILE'])\n fs_rel_path = fs_relpath_from_url_path(url_rel_path)\n target_filename = os.path.join(self.site.config['OUTPUT_FOLDER'], fs_rel_path)\n\n if any(re.search(x, target_filename) for x in self.whitelist):\n continue\n elif target_filename not in self.existing_targets:\n if os.path.exists(target_filename):\n self.logger.notice(\"Good link {0} => {1}\".format(target, target_filename))\n self.existing_targets.add(target_filename)\n else:\n rv = True\n self.logger.warn(\"Broken link in {0}: {1}\".format(filename, target))\n if find_sources:\n self.logger.warn(\"Possible sources:\")\n self.logger.warn(\"\\n\".join(deps[filename]))\n self.logger.warn(\"===============================\\n\")\n except Exception as exc:\n self.logger.error(\"Error with: {0} {1}\".format(filename, exc))\n return rv\n\n def scan_links(self, find_sources=False, check_remote=False):\n self.logger.info(\"Checking Links:\")\n self.logger.info(\"===============\\n\")\n self.logger.notice(\"{0} mode\".format(self.site.config['URL_TYPE']))\n failure = False\n # Maybe we should just examine all HTML files\n output_folder = self.site.config['OUTPUT_FOLDER']\n for fname in _call_nikola_list(self.site)[0]:\n if fname.startswith(output_folder) and '.html' == fname[-5:]:\n if self.analyze(fname, find_sources, check_remote):\n failure = True\n if not failure:\n self.logger.info(\"All links checked.\")\n return failure\n\n def scan_files(self):\n failure = False\n self.logger.info(\"Checking Files:\")\n self.logger.info(\"===============\\n\")\n only_on_output, only_on_input = real_scan_files(self.site)\n\n # Ignore folders\n only_on_output = [p for p in only_on_output if not os.path.isdir(p)]\n only_on_input = [p for p in only_on_input if not os.path.isdir(p)]\n\n if only_on_output:\n only_on_output.sort()\n self.logger.warn(\"Files from unknown origins (orphans):\")\n for f in only_on_output:\n self.logger.warn(f)\n failure = True\n if only_on_input:\n only_on_input.sort()\n self.logger.warn(\"Files not generated:\")\n for f in only_on_input:\n self.logger.warn(f)\n if not failure:\n self.logger.info(\"All files checked.\")\n return failure\n\n def clean_files(self):\n only_on_output, _ = real_scan_files(self.site)\n for f in only_on_output:\n self.logger.info('removed: {0}'.format(f))\n os.unlink(f)\n\n # Find empty directories and remove them\n output_folder = self.site.config['OUTPUT_FOLDER']\n all_dirs = []\n for root, dirs, files in os.walk(output_folder, followlinks=True):\n all_dirs.append(root)\n all_dirs.sort(key=len, reverse=True)\n for d in all_dirs:\n try:\n os.rmdir(d)\n self.logger.info('removed: {0}/'.format(d))\n except OSError:\n pass\n return True\n", "path": "nikola/plugins/command/check.py"}]} | 3,916 | 213 |
gh_patches_debug_15945 | rasdani/github-patches | git_diff | vyperlang__vyper-2059 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Runtime error when making an external call to the same contract
This code makes it an error to make an external call to the same contract:
https://github.com/ethereum/vyper/blob/c296b2d7532d913103aad494b749f8179a3acddc/vyper/parser/external_call.py#L75
This is a surprising limitation. It doesn't seem to have a clear benefit, and it could be problematic. As an example, multisig wallets often use self-calls to perform administrative functions. That way, the wallet owners have to agree before a change such as lowering the required signature threshold can be made. In Vyper, this pattern would produce a runtime error.
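For illustration, a minimal sketch of the pattern the issue describes (a hypothetical contract, assuming Vyper 0.2-style syntax and the `vyper.compiler.compile_code` helper; neither is taken from this issue). Based on the code shown below, the literal `Interface(self)` form is rejected at compile time by the `StructureException`, while an address argument that happens to equal the contract's own address only fails at runtime because of the generated `assert ne(address, ...)`:

```python
# Hypothetical reproduction sketch (not from the issue); assumes Vyper 0.2-style
# syntax and the vyper.compiler.compile_code API.
from vyper import compiler

MULTISIG_LIKE = """
interface Admin:
    def set_threshold(new_threshold: uint256): nonpayable

threshold: public(uint256)

@external
def set_threshold(new_threshold: uint256):
    self.threshold = new_threshold

@external
def execute(target: address):
    # A wallet pointing an approved administrative action back at itself
    # (signature checks omitted for brevity). If target equals the wallet's
    # own address, the generated `assert ne(address, target)` reverts at runtime.
    Admin(target).set_threshold(1)
"""

compiler.compile_code(MULTISIG_LIKE)  # compiles; the restriction only bites at runtime
```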
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vyper/parser/external_call.py`
Content:
```
1 from vyper import ast as vy_ast
2 from vyper.exceptions import (
3 StateAccessViolation,
4 StructureException,
5 TypeCheckFailure,
6 )
7 from vyper.parser.lll_node import LLLnode
8 from vyper.parser.parser_utils import getpos, pack_arguments, unwrap_location
9 from vyper.types import (
10 BaseType,
11 ByteArrayLike,
12 ListType,
13 TupleLike,
14 get_size_of_type,
15 )
16
17
18 def external_call(node, context, interface_name, contract_address, pos, value=None, gas=None):
19 from vyper.parser.expr import Expr
20
21 if value is None:
22 value = 0
23 if gas is None:
24 gas = "gas"
25 if contract_address.value == "address":
26 raise StructureException("External calls to self are not permitted.", node)
27 method_name = node.func.attr
28 sig = context.sigs[interface_name][method_name]
29 inargs, inargsize, _ = pack_arguments(
30 sig, [Expr(arg, context).lll_node for arg in node.args], context, node.func,
31 )
32 output_placeholder, output_size, returner = get_external_call_output(sig, context)
33 sub = [
34 "seq",
35 ["assert", ["extcodesize", contract_address]],
36 ["assert", ["ne", "address", contract_address]],
37 ]
38 if context.is_constant() and not sig.const:
39 # TODO this can probably go
40 raise StateAccessViolation(
41 f"May not call state modifying function '{method_name}' "
42 f"within {context.pp_constancy()}.",
43 node,
44 )
45
46 if context.is_constant() or sig.const:
47 sub.append(
48 [
49 "assert",
50 [
51 "staticcall",
52 gas,
53 contract_address,
54 inargs,
55 inargsize,
56 output_placeholder,
57 output_size,
58 ],
59 ]
60 )
61 else:
62 sub.append(
63 [
64 "assert",
65 [
66 "call",
67 gas,
68 contract_address,
69 value,
70 inargs,
71 inargsize,
72 output_placeholder,
73 output_size,
74 ],
75 ]
76 )
77 sub.extend(returner)
78 o = LLLnode.from_list(sub, typ=sig.output_type, location="memory", pos=getpos(node))
79 return o
80
81
82 def get_external_call_output(sig, context):
83 if not sig.output_type:
84 return 0, 0, []
85 output_placeholder = context.new_placeholder(typ=sig.output_type)
86 output_size = get_size_of_type(sig.output_type) * 32
87 if isinstance(sig.output_type, BaseType):
88 returner = [0, output_placeholder]
89 elif isinstance(sig.output_type, ByteArrayLike):
90 returner = [0, output_placeholder + 32]
91 elif isinstance(sig.output_type, TupleLike):
92 returner = [0, output_placeholder]
93 elif isinstance(sig.output_type, ListType):
94 returner = [0, output_placeholder]
95 else:
96 raise TypeCheckFailure(f"Invalid output type: {sig.output_type}")
97 return output_placeholder, output_size, returner
98
99
100 def get_external_interface_keywords(stmt_expr, context):
101 from vyper.parser.expr import Expr
102
103 value, gas = None, None
104 for kw in stmt_expr.keywords:
105 if kw.arg == "gas":
106 gas = Expr.parse_value_expr(kw.value, context)
107 elif kw.arg == "value":
108 value = Expr.parse_value_expr(kw.value, context)
109 else:
110 raise TypeCheckFailure("Unexpected keyword argument")
111 return value, gas
112
113
114 def make_external_call(stmt_expr, context):
115 from vyper.parser.expr import Expr
116
117 value, gas = get_external_interface_keywords(stmt_expr, context)
118
119 if isinstance(stmt_expr.func, vy_ast.Attribute) and isinstance(
120 stmt_expr.func.value, vy_ast.Call
121 ):
122 contract_name = stmt_expr.func.value.func.id
123 contract_address = Expr.parse_value_expr(stmt_expr.func.value.args[0], context)
124
125 return external_call(
126 stmt_expr,
127 context,
128 contract_name,
129 contract_address,
130 pos=getpos(stmt_expr),
131 value=value,
132 gas=gas,
133 )
134
135 elif (
136 isinstance(stmt_expr.func.value, vy_ast.Attribute)
137 and stmt_expr.func.value.attr in context.sigs
138 ): # noqa: E501
139 contract_name = stmt_expr.func.value.attr
140 var = context.globals[stmt_expr.func.value.attr]
141 contract_address = unwrap_location(
142 LLLnode.from_list(
143 var.pos,
144 typ=var.typ,
145 location="storage",
146 pos=getpos(stmt_expr),
147 annotation="self." + stmt_expr.func.value.attr,
148 )
149 )
150
151 return external_call(
152 stmt_expr,
153 context,
154 contract_name,
155 contract_address,
156 pos=getpos(stmt_expr),
157 value=value,
158 gas=gas,
159 )
160
161 elif (
162 isinstance(stmt_expr.func.value, vy_ast.Attribute)
163 and stmt_expr.func.value.attr in context.globals
164 and hasattr(context.globals[stmt_expr.func.value.attr].typ, "name")
165 ):
166
167 contract_name = context.globals[stmt_expr.func.value.attr].typ.name
168 var = context.globals[stmt_expr.func.value.attr]
169 contract_address = unwrap_location(
170 LLLnode.from_list(
171 var.pos,
172 typ=var.typ,
173 location="storage",
174 pos=getpos(stmt_expr),
175 annotation="self." + stmt_expr.func.value.attr,
176 )
177 )
178
179 return external_call(
180 stmt_expr,
181 context,
182 contract_name,
183 contract_address,
184 pos=getpos(stmt_expr),
185 value=value,
186 gas=gas,
187 )
188
189 else:
190 raise StructureException("Unsupported operator.", stmt_expr)
191
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vyper/parser/external_call.py b/vyper/parser/external_call.py
--- a/vyper/parser/external_call.py
+++ b/vyper/parser/external_call.py
@@ -22,8 +22,7 @@
value = 0
if gas is None:
gas = "gas"
- if contract_address.value == "address":
- raise StructureException("External calls to self are not permitted.", node)
+
method_name = node.func.attr
sig = context.sigs[interface_name][method_name]
inargs, inargsize, _ = pack_arguments(
@@ -33,7 +32,6 @@
sub = [
"seq",
["assert", ["extcodesize", contract_address]],
- ["assert", ["ne", "address", contract_address]],
]
if context.is_constant() and not sig.const:
# TODO this can probably go
| {"golden_diff": "diff --git a/vyper/parser/external_call.py b/vyper/parser/external_call.py\n--- a/vyper/parser/external_call.py\n+++ b/vyper/parser/external_call.py\n@@ -22,8 +22,7 @@\n value = 0\n if gas is None:\n gas = \"gas\"\n- if contract_address.value == \"address\":\n- raise StructureException(\"External calls to self are not permitted.\", node)\n+\n method_name = node.func.attr\n sig = context.sigs[interface_name][method_name]\n inargs, inargsize, _ = pack_arguments(\n@@ -33,7 +32,6 @@\n sub = [\n \"seq\",\n [\"assert\", [\"extcodesize\", contract_address]],\n- [\"assert\", [\"ne\", \"address\", contract_address]],\n ]\n if context.is_constant() and not sig.const:\n # TODO this can probably go\n", "issue": "Runtime error when making an external call to the same contract\nThis code makes it an error to make an external call to the same contract:\r\n\r\nhttps://github.com/ethereum/vyper/blob/c296b2d7532d913103aad494b749f8179a3acddc/vyper/parser/external_call.py#L75\r\n\r\nThis is a surprising limitation. It doesn't seem to have a clear benefit, and it could be problematic. As an example, multisig wallets often use self-calls to perform administrative functions. This way the wallet owners have to agree to make a change like lowering the required threshold of signatures. In Vyper, this would produce a runtime error.\n", "before_files": [{"content": "from vyper import ast as vy_ast\nfrom vyper.exceptions import (\n StateAccessViolation,\n StructureException,\n TypeCheckFailure,\n)\nfrom vyper.parser.lll_node import LLLnode\nfrom vyper.parser.parser_utils import getpos, pack_arguments, unwrap_location\nfrom vyper.types import (\n BaseType,\n ByteArrayLike,\n ListType,\n TupleLike,\n get_size_of_type,\n)\n\n\ndef external_call(node, context, interface_name, contract_address, pos, value=None, gas=None):\n from vyper.parser.expr import Expr\n\n if value is None:\n value = 0\n if gas is None:\n gas = \"gas\"\n if contract_address.value == \"address\":\n raise StructureException(\"External calls to self are not permitted.\", node)\n method_name = node.func.attr\n sig = context.sigs[interface_name][method_name]\n inargs, inargsize, _ = pack_arguments(\n sig, [Expr(arg, context).lll_node for arg in node.args], context, node.func,\n )\n output_placeholder, output_size, returner = get_external_call_output(sig, context)\n sub = [\n \"seq\",\n [\"assert\", [\"extcodesize\", contract_address]],\n [\"assert\", [\"ne\", \"address\", contract_address]],\n ]\n if context.is_constant() and not sig.const:\n # TODO this can probably go\n raise StateAccessViolation(\n f\"May not call state modifying function '{method_name}' \"\n f\"within {context.pp_constancy()}.\",\n node,\n )\n\n if context.is_constant() or sig.const:\n sub.append(\n [\n \"assert\",\n [\n \"staticcall\",\n gas,\n contract_address,\n inargs,\n inargsize,\n output_placeholder,\n output_size,\n ],\n ]\n )\n else:\n sub.append(\n [\n \"assert\",\n [\n \"call\",\n gas,\n contract_address,\n value,\n inargs,\n inargsize,\n output_placeholder,\n output_size,\n ],\n ]\n )\n sub.extend(returner)\n o = LLLnode.from_list(sub, typ=sig.output_type, location=\"memory\", pos=getpos(node))\n return o\n\n\ndef get_external_call_output(sig, context):\n if not sig.output_type:\n return 0, 0, []\n output_placeholder = context.new_placeholder(typ=sig.output_type)\n output_size = get_size_of_type(sig.output_type) * 32\n if isinstance(sig.output_type, BaseType):\n returner = [0, output_placeholder]\n elif isinstance(sig.output_type, ByteArrayLike):\n 
returner = [0, output_placeholder + 32]\n elif isinstance(sig.output_type, TupleLike):\n returner = [0, output_placeholder]\n elif isinstance(sig.output_type, ListType):\n returner = [0, output_placeholder]\n else:\n raise TypeCheckFailure(f\"Invalid output type: {sig.output_type}\")\n return output_placeholder, output_size, returner\n\n\ndef get_external_interface_keywords(stmt_expr, context):\n from vyper.parser.expr import Expr\n\n value, gas = None, None\n for kw in stmt_expr.keywords:\n if kw.arg == \"gas\":\n gas = Expr.parse_value_expr(kw.value, context)\n elif kw.arg == \"value\":\n value = Expr.parse_value_expr(kw.value, context)\n else:\n raise TypeCheckFailure(\"Unexpected keyword argument\")\n return value, gas\n\n\ndef make_external_call(stmt_expr, context):\n from vyper.parser.expr import Expr\n\n value, gas = get_external_interface_keywords(stmt_expr, context)\n\n if isinstance(stmt_expr.func, vy_ast.Attribute) and isinstance(\n stmt_expr.func.value, vy_ast.Call\n ):\n contract_name = stmt_expr.func.value.func.id\n contract_address = Expr.parse_value_expr(stmt_expr.func.value.args[0], context)\n\n return external_call(\n stmt_expr,\n context,\n contract_name,\n contract_address,\n pos=getpos(stmt_expr),\n value=value,\n gas=gas,\n )\n\n elif (\n isinstance(stmt_expr.func.value, vy_ast.Attribute)\n and stmt_expr.func.value.attr in context.sigs\n ): # noqa: E501\n contract_name = stmt_expr.func.value.attr\n var = context.globals[stmt_expr.func.value.attr]\n contract_address = unwrap_location(\n LLLnode.from_list(\n var.pos,\n typ=var.typ,\n location=\"storage\",\n pos=getpos(stmt_expr),\n annotation=\"self.\" + stmt_expr.func.value.attr,\n )\n )\n\n return external_call(\n stmt_expr,\n context,\n contract_name,\n contract_address,\n pos=getpos(stmt_expr),\n value=value,\n gas=gas,\n )\n\n elif (\n isinstance(stmt_expr.func.value, vy_ast.Attribute)\n and stmt_expr.func.value.attr in context.globals\n and hasattr(context.globals[stmt_expr.func.value.attr].typ, \"name\")\n ):\n\n contract_name = context.globals[stmt_expr.func.value.attr].typ.name\n var = context.globals[stmt_expr.func.value.attr]\n contract_address = unwrap_location(\n LLLnode.from_list(\n var.pos,\n typ=var.typ,\n location=\"storage\",\n pos=getpos(stmt_expr),\n annotation=\"self.\" + stmt_expr.func.value.attr,\n )\n )\n\n return external_call(\n stmt_expr,\n context,\n contract_name,\n contract_address,\n pos=getpos(stmt_expr),\n value=value,\n gas=gas,\n )\n\n else:\n raise StructureException(\"Unsupported operator.\", stmt_expr)\n", "path": "vyper/parser/external_call.py"}], "after_files": [{"content": "from vyper import ast as vy_ast\nfrom vyper.exceptions import (\n StateAccessViolation,\n StructureException,\n TypeCheckFailure,\n)\nfrom vyper.parser.lll_node import LLLnode\nfrom vyper.parser.parser_utils import getpos, pack_arguments, unwrap_location\nfrom vyper.types import (\n BaseType,\n ByteArrayLike,\n ListType,\n TupleLike,\n get_size_of_type,\n)\n\n\ndef external_call(node, context, interface_name, contract_address, pos, value=None, gas=None):\n from vyper.parser.expr import Expr\n\n if value is None:\n value = 0\n if gas is None:\n gas = \"gas\"\n\n method_name = node.func.attr\n sig = context.sigs[interface_name][method_name]\n inargs, inargsize, _ = pack_arguments(\n sig, [Expr(arg, context).lll_node for arg in node.args], context, node.func,\n )\n output_placeholder, output_size, returner = get_external_call_output(sig, context)\n sub = [\n \"seq\",\n [\"assert\", [\"extcodesize\", 
contract_address]],\n ]\n if context.is_constant() and not sig.const:\n # TODO this can probably go\n raise StateAccessViolation(\n f\"May not call state modifying function '{method_name}' \"\n f\"within {context.pp_constancy()}.\",\n node,\n )\n\n if context.is_constant() or sig.const:\n sub.append(\n [\n \"assert\",\n [\n \"staticcall\",\n gas,\n contract_address,\n inargs,\n inargsize,\n output_placeholder,\n output_size,\n ],\n ]\n )\n else:\n sub.append(\n [\n \"assert\",\n [\n \"call\",\n gas,\n contract_address,\n value,\n inargs,\n inargsize,\n output_placeholder,\n output_size,\n ],\n ]\n )\n sub.extend(returner)\n o = LLLnode.from_list(sub, typ=sig.output_type, location=\"memory\", pos=getpos(node))\n return o\n\n\ndef get_external_call_output(sig, context):\n if not sig.output_type:\n return 0, 0, []\n output_placeholder = context.new_placeholder(typ=sig.output_type)\n output_size = get_size_of_type(sig.output_type) * 32\n if isinstance(sig.output_type, BaseType):\n returner = [0, output_placeholder]\n elif isinstance(sig.output_type, ByteArrayLike):\n returner = [0, output_placeholder + 32]\n elif isinstance(sig.output_type, TupleLike):\n returner = [0, output_placeholder]\n elif isinstance(sig.output_type, ListType):\n returner = [0, output_placeholder]\n else:\n raise TypeCheckFailure(f\"Invalid output type: {sig.output_type}\")\n return output_placeholder, output_size, returner\n\n\ndef get_external_interface_keywords(stmt_expr, context):\n from vyper.parser.expr import Expr\n\n value, gas = None, None\n for kw in stmt_expr.keywords:\n if kw.arg == \"gas\":\n gas = Expr.parse_value_expr(kw.value, context)\n elif kw.arg == \"value\":\n value = Expr.parse_value_expr(kw.value, context)\n else:\n raise TypeCheckFailure(\"Unexpected keyword argument\")\n return value, gas\n\n\ndef make_external_call(stmt_expr, context):\n from vyper.parser.expr import Expr\n\n value, gas = get_external_interface_keywords(stmt_expr, context)\n\n if isinstance(stmt_expr.func, vy_ast.Attribute) and isinstance(\n stmt_expr.func.value, vy_ast.Call\n ):\n contract_name = stmt_expr.func.value.func.id\n contract_address = Expr.parse_value_expr(stmt_expr.func.value.args[0], context)\n\n return external_call(\n stmt_expr,\n context,\n contract_name,\n contract_address,\n pos=getpos(stmt_expr),\n value=value,\n gas=gas,\n )\n\n elif (\n isinstance(stmt_expr.func.value, vy_ast.Attribute)\n and stmt_expr.func.value.attr in context.sigs\n ): # noqa: E501\n contract_name = stmt_expr.func.value.attr\n var = context.globals[stmt_expr.func.value.attr]\n contract_address = unwrap_location(\n LLLnode.from_list(\n var.pos,\n typ=var.typ,\n location=\"storage\",\n pos=getpos(stmt_expr),\n annotation=\"self.\" + stmt_expr.func.value.attr,\n )\n )\n\n return external_call(\n stmt_expr,\n context,\n contract_name,\n contract_address,\n pos=getpos(stmt_expr),\n value=value,\n gas=gas,\n )\n\n elif (\n isinstance(stmt_expr.func.value, vy_ast.Attribute)\n and stmt_expr.func.value.attr in context.globals\n and hasattr(context.globals[stmt_expr.func.value.attr].typ, \"name\")\n ):\n\n contract_name = context.globals[stmt_expr.func.value.attr].typ.name\n var = context.globals[stmt_expr.func.value.attr]\n contract_address = unwrap_location(\n LLLnode.from_list(\n var.pos,\n typ=var.typ,\n location=\"storage\",\n pos=getpos(stmt_expr),\n annotation=\"self.\" + stmt_expr.func.value.attr,\n )\n )\n\n return external_call(\n stmt_expr,\n context,\n contract_name,\n contract_address,\n pos=getpos(stmt_expr),\n value=value,\n 
gas=gas,\n )\n\n else:\n raise StructureException(\"Unsupported operator.\", stmt_expr)\n", "path": "vyper/parser/external_call.py"}]} | 2,097 | 198 |
gh_patches_debug_36714 | rasdani/github-patches | git_diff | pytorch__vision-2142 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Resume in the Segmentation example
## 🐛 Bug
The segmentation training [script](https://github.com/pytorch/vision/blob/7b60f4db9707d7afdbb87fd4e8ef6906ca014720/references/segmentation/train.py) doesn't seem to allow correct resuming. I think [lines 131-133](https://github.com/pytorch/vision/blob/7b60f4db9707d7afdbb87fd4e8ef6906ca014720/references/segmentation/train.py#L131) should be changed to:
```
start_epoch = 0
if args.resume:
    checkpoint = torch.load(args.resume, map_location='cpu')
    model.load_state_dict(checkpoint['model'])
    optimizer.load_state_dict(checkpoint['optimizer'])
    lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])
    start_epoch = checkpoint['epoch'] + 1
```
Then, [line 161](https://github.com/pytorch/vision/blob/7b60f4db9707d7afdbb87fd4e8ef6906ca014720/references/segmentation/train.py#L161) and [lines 167-173](https://github.com/pytorch/vision/blob/7b60f4db9707d7afdbb87fd4e8ef6906ca014720/references/segmentation/train.py#L167) should be changed respectively to:
```
for epoch in range(start_epoch, args.epochs):
```
and
```
utils.save_on_master(
    {
        'model': model_without_ddp.state_dict(),
        'optimizer': optimizer.state_dict(),
        'lr_scheduler': lr_scheduler.state_dict(),
        'epoch': epoch
    },
```
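
Putting the two snippets together, the resume flow would look roughly like the sketch below (the helper name is ours, not from the script); note it has to run after the optimizer and LR scheduler are constructed so their states can be restored:

```python
# Minimal sketch of the proposed resume logic; assumes model_without_ddp,
# optimizer and lr_scheduler have already been built as in train.py.
import torch

def maybe_resume(args, model_without_ddp, optimizer, lr_scheduler):
    start_epoch = 0
    if args.resume:
        checkpoint = torch.load(args.resume, map_location='cpu')
        model_without_ddp.load_state_dict(checkpoint['model'])
        optimizer.load_state_dict(checkpoint['optimizer'])
        lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])
        start_epoch = checkpoint['epoch'] + 1
    return start_epoch
```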
Besides, is there a good reason to set [`batch_size=1`](https://github.com/pytorch/vision/blob/7b60f4db9707d7afdbb87fd4e8ef6906ca014720/references/segmentation/train.py#L120) (instead of `args.batch_size` or even `2*args.batch_size`) for `data_loader_test`?
Thanks.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `references/segmentation/train.py`
Content:
```
1 import datetime
2 import os
3 import time
4
5 import torch
6 import torch.utils.data
7 from torch import nn
8 import torchvision
9
10 from coco_utils import get_coco
11 import transforms as T
12 import utils
13
14
15 def get_dataset(name, image_set, transform):
16 def sbd(*args, **kwargs):
17 return torchvision.datasets.SBDataset(*args, mode='segmentation', **kwargs)
18 paths = {
19 "voc": ('/datasets01/VOC/060817/', torchvision.datasets.VOCSegmentation, 21),
20 "voc_aug": ('/datasets01/SBDD/072318/', sbd, 21),
21 "coco": ('/datasets01/COCO/022719/', get_coco, 21)
22 }
23 p, ds_fn, num_classes = paths[name]
24
25 ds = ds_fn(p, image_set=image_set, transforms=transform)
26 return ds, num_classes
27
28
29 def get_transform(train):
30 base_size = 520
31 crop_size = 480
32
33 min_size = int((0.5 if train else 1.0) * base_size)
34 max_size = int((2.0 if train else 1.0) * base_size)
35 transforms = []
36 transforms.append(T.RandomResize(min_size, max_size))
37 if train:
38 transforms.append(T.RandomHorizontalFlip(0.5))
39 transforms.append(T.RandomCrop(crop_size))
40 transforms.append(T.ToTensor())
41 transforms.append(T.Normalize(mean=[0.485, 0.456, 0.406],
42 std=[0.229, 0.224, 0.225]))
43
44 return T.Compose(transforms)
45
46
47 def criterion(inputs, target):
48 losses = {}
49 for name, x in inputs.items():
50 losses[name] = nn.functional.cross_entropy(x, target, ignore_index=255)
51
52 if len(losses) == 1:
53 return losses['out']
54
55 return losses['out'] + 0.5 * losses['aux']
56
57
58 def evaluate(model, data_loader, device, num_classes):
59 model.eval()
60 confmat = utils.ConfusionMatrix(num_classes)
61 metric_logger = utils.MetricLogger(delimiter=" ")
62 header = 'Test:'
63 with torch.no_grad():
64 for image, target in metric_logger.log_every(data_loader, 100, header):
65 image, target = image.to(device), target.to(device)
66 output = model(image)
67 output = output['out']
68
69 confmat.update(target.flatten(), output.argmax(1).flatten())
70
71 confmat.reduce_from_all_processes()
72
73 return confmat
74
75
76 def train_one_epoch(model, criterion, optimizer, data_loader, lr_scheduler, device, epoch, print_freq):
77 model.train()
78 metric_logger = utils.MetricLogger(delimiter=" ")
79 metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value}'))
80 header = 'Epoch: [{}]'.format(epoch)
81 for image, target in metric_logger.log_every(data_loader, print_freq, header):
82 image, target = image.to(device), target.to(device)
83 output = model(image)
84 loss = criterion(output, target)
85
86 optimizer.zero_grad()
87 loss.backward()
88 optimizer.step()
89
90 lr_scheduler.step()
91
92 metric_logger.update(loss=loss.item(), lr=optimizer.param_groups[0]["lr"])
93
94
95 def main(args):
96 if args.output_dir:
97 utils.mkdir(args.output_dir)
98
99 utils.init_distributed_mode(args)
100 print(args)
101
102 device = torch.device(args.device)
103
104 dataset, num_classes = get_dataset(args.dataset, "train", get_transform(train=True))
105 dataset_test, _ = get_dataset(args.dataset, "val", get_transform(train=False))
106
107 if args.distributed:
108 train_sampler = torch.utils.data.distributed.DistributedSampler(dataset)
109 test_sampler = torch.utils.data.distributed.DistributedSampler(dataset_test)
110 else:
111 train_sampler = torch.utils.data.RandomSampler(dataset)
112 test_sampler = torch.utils.data.SequentialSampler(dataset_test)
113
114 data_loader = torch.utils.data.DataLoader(
115 dataset, batch_size=args.batch_size,
116 sampler=train_sampler, num_workers=args.workers,
117 collate_fn=utils.collate_fn, drop_last=True)
118
119 data_loader_test = torch.utils.data.DataLoader(
120 dataset_test, batch_size=1,
121 sampler=test_sampler, num_workers=args.workers,
122 collate_fn=utils.collate_fn)
123
124 model = torchvision.models.segmentation.__dict__[args.model](num_classes=num_classes,
125 aux_loss=args.aux_loss,
126 pretrained=args.pretrained)
127 model.to(device)
128 if args.distributed:
129 model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
130
131 if args.resume:
132 checkpoint = torch.load(args.resume, map_location='cpu')
133 model.load_state_dict(checkpoint['model'])
134
135 model_without_ddp = model
136 if args.distributed:
137 model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
138 model_without_ddp = model.module
139
140 if args.test_only:
141 confmat = evaluate(model, data_loader_test, device=device, num_classes=num_classes)
142 print(confmat)
143 return
144
145 params_to_optimize = [
146 {"params": [p for p in model_without_ddp.backbone.parameters() if p.requires_grad]},
147 {"params": [p for p in model_without_ddp.classifier.parameters() if p.requires_grad]},
148 ]
149 if args.aux_loss:
150 params = [p for p in model_without_ddp.aux_classifier.parameters() if p.requires_grad]
151 params_to_optimize.append({"params": params, "lr": args.lr * 10})
152 optimizer = torch.optim.SGD(
153 params_to_optimize,
154 lr=args.lr, momentum=args.momentum, weight_decay=args.weight_decay)
155
156 lr_scheduler = torch.optim.lr_scheduler.LambdaLR(
157 optimizer,
158 lambda x: (1 - x / (len(data_loader) * args.epochs)) ** 0.9)
159
160 start_time = time.time()
161 for epoch in range(args.epochs):
162 if args.distributed:
163 train_sampler.set_epoch(epoch)
164 train_one_epoch(model, criterion, optimizer, data_loader, lr_scheduler, device, epoch, args.print_freq)
165 confmat = evaluate(model, data_loader_test, device=device, num_classes=num_classes)
166 print(confmat)
167 utils.save_on_master(
168 {
169 'model': model_without_ddp.state_dict(),
170 'optimizer': optimizer.state_dict(),
171 'epoch': epoch,
172 'args': args
173 },
174 os.path.join(args.output_dir, 'model_{}.pth'.format(epoch)))
175
176 total_time = time.time() - start_time
177 total_time_str = str(datetime.timedelta(seconds=int(total_time)))
178 print('Training time {}'.format(total_time_str))
179
180
181 def parse_args():
182 import argparse
183 parser = argparse.ArgumentParser(description='PyTorch Segmentation Training')
184
185 parser.add_argument('--dataset', default='voc', help='dataset')
186 parser.add_argument('--model', default='fcn_resnet101', help='model')
187 parser.add_argument('--aux-loss', action='store_true', help='auxiliar loss')
188 parser.add_argument('--device', default='cuda', help='device')
189 parser.add_argument('-b', '--batch-size', default=8, type=int)
190 parser.add_argument('--epochs', default=30, type=int, metavar='N',
191 help='number of total epochs to run')
192
193 parser.add_argument('-j', '--workers', default=16, type=int, metavar='N',
194 help='number of data loading workers (default: 16)')
195 parser.add_argument('--lr', default=0.01, type=float, help='initial learning rate')
196 parser.add_argument('--momentum', default=0.9, type=float, metavar='M',
197 help='momentum')
198 parser.add_argument('--wd', '--weight-decay', default=1e-4, type=float,
199 metavar='W', help='weight decay (default: 1e-4)',
200 dest='weight_decay')
201 parser.add_argument('--print-freq', default=10, type=int, help='print frequency')
202 parser.add_argument('--output-dir', default='.', help='path where to save')
203 parser.add_argument('--resume', default='', help='resume from checkpoint')
204 parser.add_argument(
205 "--test-only",
206 dest="test_only",
207 help="Only test the model",
208 action="store_true",
209 )
210 parser.add_argument(
211 "--pretrained",
212 dest="pretrained",
213 help="Use pre-trained models from the modelzoo",
214 action="store_true",
215 )
216 # distributed training parameters
217 parser.add_argument('--world-size', default=1, type=int,
218 help='number of distributed processes')
219 parser.add_argument('--dist-url', default='env://', help='url used to set up distributed training')
220
221 args = parser.parse_args()
222 return args
223
224
225 if __name__ == "__main__":
226 args = parse_args()
227 main(args)
228
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/references/segmentation/train.py b/references/segmentation/train.py
--- a/references/segmentation/train.py
+++ b/references/segmentation/train.py
@@ -128,10 +128,6 @@
if args.distributed:
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
- if args.resume:
- checkpoint = torch.load(args.resume, map_location='cpu')
- model.load_state_dict(checkpoint['model'])
-
model_without_ddp = model
if args.distributed:
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
@@ -157,8 +153,15 @@
optimizer,
lambda x: (1 - x / (len(data_loader) * args.epochs)) ** 0.9)
+ if args.resume:
+ checkpoint = torch.load(args.resume, map_location='cpu')
+ model_without_ddp.load_state_dict(checkpoint['model'])
+ optimizer.load_state_dict(checkpoint['optimizer'])
+ lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])
+ args.start_epoch = checkpoint['epoch'] + 1
+
start_time = time.time()
- for epoch in range(args.epochs):
+ for epoch in range(args.start_epoch, args.epochs):
if args.distributed:
train_sampler.set_epoch(epoch)
train_one_epoch(model, criterion, optimizer, data_loader, lr_scheduler, device, epoch, args.print_freq)
@@ -168,6 +171,7 @@
{
'model': model_without_ddp.state_dict(),
'optimizer': optimizer.state_dict(),
+ 'lr_scheduler': lr_scheduler.state_dict(),
'epoch': epoch,
'args': args
},
@@ -201,6 +205,8 @@
parser.add_argument('--print-freq', default=10, type=int, help='print frequency')
parser.add_argument('--output-dir', default='.', help='path where to save')
parser.add_argument('--resume', default='', help='resume from checkpoint')
+ parser.add_argument('--start-epoch', default=0, type=int, metavar='N',
+ help='start epoch')
parser.add_argument(
"--test-only",
dest="test_only",
| {"golden_diff": "diff --git a/references/segmentation/train.py b/references/segmentation/train.py\n--- a/references/segmentation/train.py\n+++ b/references/segmentation/train.py\n@@ -128,10 +128,6 @@\n if args.distributed:\n model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)\n \n- if args.resume:\n- checkpoint = torch.load(args.resume, map_location='cpu')\n- model.load_state_dict(checkpoint['model'])\n-\n model_without_ddp = model\n if args.distributed:\n model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])\n@@ -157,8 +153,15 @@\n optimizer,\n lambda x: (1 - x / (len(data_loader) * args.epochs)) ** 0.9)\n \n+ if args.resume:\n+ checkpoint = torch.load(args.resume, map_location='cpu')\n+ model_without_ddp.load_state_dict(checkpoint['model'])\n+ optimizer.load_state_dict(checkpoint['optimizer'])\n+ lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])\n+ args.start_epoch = checkpoint['epoch'] + 1\n+\n start_time = time.time()\n- for epoch in range(args.epochs):\n+ for epoch in range(args.start_epoch, args.epochs):\n if args.distributed:\n train_sampler.set_epoch(epoch)\n train_one_epoch(model, criterion, optimizer, data_loader, lr_scheduler, device, epoch, args.print_freq)\n@@ -168,6 +171,7 @@\n {\n 'model': model_without_ddp.state_dict(),\n 'optimizer': optimizer.state_dict(),\n+ 'lr_scheduler': lr_scheduler.state_dict(),\n 'epoch': epoch,\n 'args': args\n },\n@@ -201,6 +205,8 @@\n parser.add_argument('--print-freq', default=10, type=int, help='print frequency')\n parser.add_argument('--output-dir', default='.', help='path where to save')\n parser.add_argument('--resume', default='', help='resume from checkpoint')\n+ parser.add_argument('--start-epoch', default=0, type=int, metavar='N',\n+ help='start epoch')\n parser.add_argument(\n \"--test-only\",\n dest=\"test_only\",\n", "issue": "Resume in the Segmentation example\n## \ud83d\udc1b Bug\r\n\r\nThe segmentation training [script](https://github.com/pytorch/vision/blob/7b60f4db9707d7afdbb87fd4e8ef6906ca014720/references/segmentation/train.py) doesn't seem to allow correct resuming. 
I think [lines 131-133](https://github.com/pytorch/vision/blob/7b60f4db9707d7afdbb87fd4e8ef6906ca014720/references/segmentation/train.py#L131) should be changed to:\r\n\r\n```\r\n start_epoch = 0\r\n if args.resume:\r\n checkpoint = torch.load(args.resume, map_location='cpu')\r\n model.load_state_dict(checkpoint['model'])\r\n optimizer.load_state_dict(checkpoint['optimizer'])\r\n lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])\r\n start_epoch = checkpoint['epoch'] + 1\r\n```\r\n\r\nThen, [line 161](https://github.com/pytorch/vision/blob/7b60f4db9707d7afdbb87fd4e8ef6906ca014720/references/segmentation/train.py#L161) and [lines 167-173](https://github.com/pytorch/vision/blob/7b60f4db9707d7afdbb87fd4e8ef6906ca014720/references/segmentation/train.py#L167) should be changed respectively to:\r\n\r\n```\r\n for epoch in range(start_epoch, args.epochs):\r\n```\r\n\r\nand\r\n\r\n```\r\nutils.save_on_master(\r\n {\r\n 'model': model_without_ddp.state_dict(),\r\n 'optimizer': optimizer.state_dict(),\r\n 'lr_scheduler': lr_scheduler.state_dict(),\r\n 'epoch': epoch\r\n },\r\n```\r\n\r\nBesides, is there a good reason to set [`batch_size=1`](https://github.com/pytorch/vision/blob/7b60f4db9707d7afdbb87fd4e8ef6906ca014720/references/segmentation/train.py#L120) (instead of `args.batch_size` or even `2*args.batch_size`) for `data_loader_test`?\r\n\r\nThanks.\r\n\n", "before_files": [{"content": "import datetime\nimport os\nimport time\n\nimport torch\nimport torch.utils.data\nfrom torch import nn\nimport torchvision\n\nfrom coco_utils import get_coco\nimport transforms as T\nimport utils\n\n\ndef get_dataset(name, image_set, transform):\n def sbd(*args, **kwargs):\n return torchvision.datasets.SBDataset(*args, mode='segmentation', **kwargs)\n paths = {\n \"voc\": ('/datasets01/VOC/060817/', torchvision.datasets.VOCSegmentation, 21),\n \"voc_aug\": ('/datasets01/SBDD/072318/', sbd, 21),\n \"coco\": ('/datasets01/COCO/022719/', get_coco, 21)\n }\n p, ds_fn, num_classes = paths[name]\n\n ds = ds_fn(p, image_set=image_set, transforms=transform)\n return ds, num_classes\n\n\ndef get_transform(train):\n base_size = 520\n crop_size = 480\n\n min_size = int((0.5 if train else 1.0) * base_size)\n max_size = int((2.0 if train else 1.0) * base_size)\n transforms = []\n transforms.append(T.RandomResize(min_size, max_size))\n if train:\n transforms.append(T.RandomHorizontalFlip(0.5))\n transforms.append(T.RandomCrop(crop_size))\n transforms.append(T.ToTensor())\n transforms.append(T.Normalize(mean=[0.485, 0.456, 0.406],\n std=[0.229, 0.224, 0.225]))\n\n return T.Compose(transforms)\n\n\ndef criterion(inputs, target):\n losses = {}\n for name, x in inputs.items():\n losses[name] = nn.functional.cross_entropy(x, target, ignore_index=255)\n\n if len(losses) == 1:\n return losses['out']\n\n return losses['out'] + 0.5 * losses['aux']\n\n\ndef evaluate(model, data_loader, device, num_classes):\n model.eval()\n confmat = utils.ConfusionMatrix(num_classes)\n metric_logger = utils.MetricLogger(delimiter=\" \")\n header = 'Test:'\n with torch.no_grad():\n for image, target in metric_logger.log_every(data_loader, 100, header):\n image, target = image.to(device), target.to(device)\n output = model(image)\n output = output['out']\n\n confmat.update(target.flatten(), output.argmax(1).flatten())\n\n confmat.reduce_from_all_processes()\n\n return confmat\n\n\ndef train_one_epoch(model, criterion, optimizer, data_loader, lr_scheduler, device, epoch, print_freq):\n model.train()\n metric_logger = utils.MetricLogger(delimiter=\" 
\")\n metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value}'))\n header = 'Epoch: [{}]'.format(epoch)\n for image, target in metric_logger.log_every(data_loader, print_freq, header):\n image, target = image.to(device), target.to(device)\n output = model(image)\n loss = criterion(output, target)\n\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\n lr_scheduler.step()\n\n metric_logger.update(loss=loss.item(), lr=optimizer.param_groups[0][\"lr\"])\n\n\ndef main(args):\n if args.output_dir:\n utils.mkdir(args.output_dir)\n\n utils.init_distributed_mode(args)\n print(args)\n\n device = torch.device(args.device)\n\n dataset, num_classes = get_dataset(args.dataset, \"train\", get_transform(train=True))\n dataset_test, _ = get_dataset(args.dataset, \"val\", get_transform(train=False))\n\n if args.distributed:\n train_sampler = torch.utils.data.distributed.DistributedSampler(dataset)\n test_sampler = torch.utils.data.distributed.DistributedSampler(dataset_test)\n else:\n train_sampler = torch.utils.data.RandomSampler(dataset)\n test_sampler = torch.utils.data.SequentialSampler(dataset_test)\n\n data_loader = torch.utils.data.DataLoader(\n dataset, batch_size=args.batch_size,\n sampler=train_sampler, num_workers=args.workers,\n collate_fn=utils.collate_fn, drop_last=True)\n\n data_loader_test = torch.utils.data.DataLoader(\n dataset_test, batch_size=1,\n sampler=test_sampler, num_workers=args.workers,\n collate_fn=utils.collate_fn)\n\n model = torchvision.models.segmentation.__dict__[args.model](num_classes=num_classes,\n aux_loss=args.aux_loss,\n pretrained=args.pretrained)\n model.to(device)\n if args.distributed:\n model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)\n\n if args.resume:\n checkpoint = torch.load(args.resume, map_location='cpu')\n model.load_state_dict(checkpoint['model'])\n\n model_without_ddp = model\n if args.distributed:\n model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])\n model_without_ddp = model.module\n\n if args.test_only:\n confmat = evaluate(model, data_loader_test, device=device, num_classes=num_classes)\n print(confmat)\n return\n\n params_to_optimize = [\n {\"params\": [p for p in model_without_ddp.backbone.parameters() if p.requires_grad]},\n {\"params\": [p for p in model_without_ddp.classifier.parameters() if p.requires_grad]},\n ]\n if args.aux_loss:\n params = [p for p in model_without_ddp.aux_classifier.parameters() if p.requires_grad]\n params_to_optimize.append({\"params\": params, \"lr\": args.lr * 10})\n optimizer = torch.optim.SGD(\n params_to_optimize,\n lr=args.lr, momentum=args.momentum, weight_decay=args.weight_decay)\n\n lr_scheduler = torch.optim.lr_scheduler.LambdaLR(\n optimizer,\n lambda x: (1 - x / (len(data_loader) * args.epochs)) ** 0.9)\n\n start_time = time.time()\n for epoch in range(args.epochs):\n if args.distributed:\n train_sampler.set_epoch(epoch)\n train_one_epoch(model, criterion, optimizer, data_loader, lr_scheduler, device, epoch, args.print_freq)\n confmat = evaluate(model, data_loader_test, device=device, num_classes=num_classes)\n print(confmat)\n utils.save_on_master(\n {\n 'model': model_without_ddp.state_dict(),\n 'optimizer': optimizer.state_dict(),\n 'epoch': epoch,\n 'args': args\n },\n os.path.join(args.output_dir, 'model_{}.pth'.format(epoch)))\n\n total_time = time.time() - start_time\n total_time_str = str(datetime.timedelta(seconds=int(total_time)))\n print('Training time {}'.format(total_time_str))\n\n\ndef parse_args():\n import argparse\n 
parser = argparse.ArgumentParser(description='PyTorch Segmentation Training')\n\n parser.add_argument('--dataset', default='voc', help='dataset')\n parser.add_argument('--model', default='fcn_resnet101', help='model')\n parser.add_argument('--aux-loss', action='store_true', help='auxiliar loss')\n parser.add_argument('--device', default='cuda', help='device')\n parser.add_argument('-b', '--batch-size', default=8, type=int)\n parser.add_argument('--epochs', default=30, type=int, metavar='N',\n help='number of total epochs to run')\n\n parser.add_argument('-j', '--workers', default=16, type=int, metavar='N',\n help='number of data loading workers (default: 16)')\n parser.add_argument('--lr', default=0.01, type=float, help='initial learning rate')\n parser.add_argument('--momentum', default=0.9, type=float, metavar='M',\n help='momentum')\n parser.add_argument('--wd', '--weight-decay', default=1e-4, type=float,\n metavar='W', help='weight decay (default: 1e-4)',\n dest='weight_decay')\n parser.add_argument('--print-freq', default=10, type=int, help='print frequency')\n parser.add_argument('--output-dir', default='.', help='path where to save')\n parser.add_argument('--resume', default='', help='resume from checkpoint')\n parser.add_argument(\n \"--test-only\",\n dest=\"test_only\",\n help=\"Only test the model\",\n action=\"store_true\",\n )\n parser.add_argument(\n \"--pretrained\",\n dest=\"pretrained\",\n help=\"Use pre-trained models from the modelzoo\",\n action=\"store_true\",\n )\n # distributed training parameters\n parser.add_argument('--world-size', default=1, type=int,\n help='number of distributed processes')\n parser.add_argument('--dist-url', default='env://', help='url used to set up distributed training')\n\n args = parser.parse_args()\n return args\n\n\nif __name__ == \"__main__\":\n args = parse_args()\n main(args)\n", "path": "references/segmentation/train.py"}], "after_files": [{"content": "import datetime\nimport os\nimport time\n\nimport torch\nimport torch.utils.data\nfrom torch import nn\nimport torchvision\n\nfrom coco_utils import get_coco\nimport transforms as T\nimport utils\n\n\ndef get_dataset(name, image_set, transform):\n def sbd(*args, **kwargs):\n return torchvision.datasets.SBDataset(*args, mode='segmentation', **kwargs)\n paths = {\n \"voc\": ('/datasets01/VOC/060817/', torchvision.datasets.VOCSegmentation, 21),\n \"voc_aug\": ('/datasets01/SBDD/072318/', sbd, 21),\n \"coco\": ('/datasets01/COCO/022719/', get_coco, 21)\n }\n p, ds_fn, num_classes = paths[name]\n\n ds = ds_fn(p, image_set=image_set, transforms=transform)\n return ds, num_classes\n\n\ndef get_transform(train):\n base_size = 520\n crop_size = 480\n\n min_size = int((0.5 if train else 1.0) * base_size)\n max_size = int((2.0 if train else 1.0) * base_size)\n transforms = []\n transforms.append(T.RandomResize(min_size, max_size))\n if train:\n transforms.append(T.RandomHorizontalFlip(0.5))\n transforms.append(T.RandomCrop(crop_size))\n transforms.append(T.ToTensor())\n transforms.append(T.Normalize(mean=[0.485, 0.456, 0.406],\n std=[0.229, 0.224, 0.225]))\n\n return T.Compose(transforms)\n\n\ndef criterion(inputs, target):\n losses = {}\n for name, x in inputs.items():\n losses[name] = nn.functional.cross_entropy(x, target, ignore_index=255)\n\n if len(losses) == 1:\n return losses['out']\n\n return losses['out'] + 0.5 * losses['aux']\n\n\ndef evaluate(model, data_loader, device, num_classes):\n model.eval()\n confmat = utils.ConfusionMatrix(num_classes)\n metric_logger = 
utils.MetricLogger(delimiter=\" \")\n header = 'Test:'\n with torch.no_grad():\n for image, target in metric_logger.log_every(data_loader, 100, header):\n image, target = image.to(device), target.to(device)\n output = model(image)\n output = output['out']\n\n confmat.update(target.flatten(), output.argmax(1).flatten())\n\n confmat.reduce_from_all_processes()\n\n return confmat\n\n\ndef train_one_epoch(model, criterion, optimizer, data_loader, lr_scheduler, device, epoch, print_freq):\n model.train()\n metric_logger = utils.MetricLogger(delimiter=\" \")\n metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value}'))\n header = 'Epoch: [{}]'.format(epoch)\n for image, target in metric_logger.log_every(data_loader, print_freq, header):\n image, target = image.to(device), target.to(device)\n output = model(image)\n loss = criterion(output, target)\n\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\n lr_scheduler.step()\n\n metric_logger.update(loss=loss.item(), lr=optimizer.param_groups[0][\"lr\"])\n\n\ndef main(args):\n if args.output_dir:\n utils.mkdir(args.output_dir)\n\n utils.init_distributed_mode(args)\n print(args)\n\n device = torch.device(args.device)\n\n dataset, num_classes = get_dataset(args.dataset, \"train\", get_transform(train=True))\n dataset_test, _ = get_dataset(args.dataset, \"val\", get_transform(train=False))\n\n if args.distributed:\n train_sampler = torch.utils.data.distributed.DistributedSampler(dataset)\n test_sampler = torch.utils.data.distributed.DistributedSampler(dataset_test)\n else:\n train_sampler = torch.utils.data.RandomSampler(dataset)\n test_sampler = torch.utils.data.SequentialSampler(dataset_test)\n\n data_loader = torch.utils.data.DataLoader(\n dataset, batch_size=args.batch_size,\n sampler=train_sampler, num_workers=args.workers,\n collate_fn=utils.collate_fn, drop_last=True)\n\n data_loader_test = torch.utils.data.DataLoader(\n dataset_test, batch_size=1,\n sampler=test_sampler, num_workers=args.workers,\n collate_fn=utils.collate_fn)\n\n model = torchvision.models.segmentation.__dict__[args.model](num_classes=num_classes,\n aux_loss=args.aux_loss,\n pretrained=args.pretrained)\n model.to(device)\n if args.distributed:\n model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)\n\n model_without_ddp = model\n if args.distributed:\n model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])\n model_without_ddp = model.module\n\n if args.test_only:\n confmat = evaluate(model, data_loader_test, device=device, num_classes=num_classes)\n print(confmat)\n return\n\n params_to_optimize = [\n {\"params\": [p for p in model_without_ddp.backbone.parameters() if p.requires_grad]},\n {\"params\": [p for p in model_without_ddp.classifier.parameters() if p.requires_grad]},\n ]\n if args.aux_loss:\n params = [p for p in model_without_ddp.aux_classifier.parameters() if p.requires_grad]\n params_to_optimize.append({\"params\": params, \"lr\": args.lr * 10})\n optimizer = torch.optim.SGD(\n params_to_optimize,\n lr=args.lr, momentum=args.momentum, weight_decay=args.weight_decay)\n\n lr_scheduler = torch.optim.lr_scheduler.LambdaLR(\n optimizer,\n lambda x: (1 - x / (len(data_loader) * args.epochs)) ** 0.9)\n\n if args.resume:\n checkpoint = torch.load(args.resume, map_location='cpu')\n model_without_ddp.load_state_dict(checkpoint['model'])\n optimizer.load_state_dict(checkpoint['optimizer'])\n lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])\n args.start_epoch = checkpoint['epoch'] + 1\n\n start_time = 
time.time()\n for epoch in range(args.start_epoch, args.epochs):\n if args.distributed:\n train_sampler.set_epoch(epoch)\n train_one_epoch(model, criterion, optimizer, data_loader, lr_scheduler, device, epoch, args.print_freq)\n confmat = evaluate(model, data_loader_test, device=device, num_classes=num_classes)\n print(confmat)\n utils.save_on_master(\n {\n 'model': model_without_ddp.state_dict(),\n 'optimizer': optimizer.state_dict(),\n 'lr_scheduler': lr_scheduler.state_dict(),\n 'epoch': epoch,\n 'args': args\n },\n os.path.join(args.output_dir, 'model_{}.pth'.format(epoch)))\n\n total_time = time.time() - start_time\n total_time_str = str(datetime.timedelta(seconds=int(total_time)))\n print('Training time {}'.format(total_time_str))\n\n\ndef parse_args():\n import argparse\n parser = argparse.ArgumentParser(description='PyTorch Segmentation Training')\n\n parser.add_argument('--dataset', default='voc', help='dataset')\n parser.add_argument('--model', default='fcn_resnet101', help='model')\n parser.add_argument('--aux-loss', action='store_true', help='auxiliar loss')\n parser.add_argument('--device', default='cuda', help='device')\n parser.add_argument('-b', '--batch-size', default=8, type=int)\n parser.add_argument('--epochs', default=30, type=int, metavar='N',\n help='number of total epochs to run')\n\n parser.add_argument('-j', '--workers', default=16, type=int, metavar='N',\n help='number of data loading workers (default: 16)')\n parser.add_argument('--lr', default=0.01, type=float, help='initial learning rate')\n parser.add_argument('--momentum', default=0.9, type=float, metavar='M',\n help='momentum')\n parser.add_argument('--wd', '--weight-decay', default=1e-4, type=float,\n metavar='W', help='weight decay (default: 1e-4)',\n dest='weight_decay')\n parser.add_argument('--print-freq', default=10, type=int, help='print frequency')\n parser.add_argument('--output-dir', default='.', help='path where to save')\n parser.add_argument('--resume', default='', help='resume from checkpoint')\n parser.add_argument('--start-epoch', default=0, type=int, metavar='N',\n help='start epoch')\n parser.add_argument(\n \"--test-only\",\n dest=\"test_only\",\n help=\"Only test the model\",\n action=\"store_true\",\n )\n parser.add_argument(\n \"--pretrained\",\n dest=\"pretrained\",\n help=\"Use pre-trained models from the modelzoo\",\n action=\"store_true\",\n )\n # distributed training parameters\n parser.add_argument('--world-size', default=1, type=int,\n help='number of distributed processes')\n parser.add_argument('--dist-url', default='env://', help='url used to set up distributed training')\n\n args = parser.parse_args()\n return args\n\n\nif __name__ == \"__main__\":\n args = parse_args()\n main(args)\n", "path": "references/segmentation/train.py"}]} | 3,353 | 498 |
gh_patches_debug_2851 | rasdani/github-patches | git_diff | google__pytype-144 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Future-proof preconditions.py:_TOKEN_RE
Python 3.7 adds a warning about possible future changes to `re`: https://bugs.python.org/issue30349
A future version of Python will add nested sets, which allow nesting sets using `[...]`. Escape the inner `[` in the regex so it doesn't trigger a nested set.
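A quick sketch of the escaped pattern (the regex is the one from `preconditions.py` with only the inner `[` escaped); tokenization behaviour is unchanged:

```python
# Future-proofed token regex: escaping the inner '[' avoids the nested-set
# FutureWarning while matching exactly the same tokens as before.
import re

_TOKEN_RE = re.compile(r"\s*(?:(?P<literal>[\[\]{}])|(?P<word>[a-zA-Z_]\w*))")

assert _TOKEN_RE.match("  tuple").group("word") == "tuple"
assert _TOKEN_RE.match("[").group("literal") == "["
```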
Closes #140.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pytype/pytd/parse/preconditions.py`
Content:
```
1 # Copyright 2016 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Preconditions for automatic argument checking."""
16
17 import re
18
19 from pytype import utils
20
21
22 class PreconditionError(ValueError):
23 pass
24
25
26 class _Precondition(object):
27 """Base class for preconditions."""
28
29 def check(self, value):
30 """Raise PreconditionError if value does not match condition."""
31 raise NotImplementedError
32
33 def allowed_types(self):
34 """Returns a set of types or typenames that are allowed."""
35 raise NotImplementedError
36
37
38 class _ClassNamePrecondition(_Precondition):
39 """Precondition that expects an instance of a specific class."""
40
41 def __init__(self, class_name):
42 super(_ClassNamePrecondition, self).__init__()
43 self._class_name = class_name
44
45 def check(self, value):
46 actual = type(value).__name__
47 if actual != self._class_name:
48 raise PreconditionError(
49 "actual=%s, expected=%s" % (actual, self._class_name))
50
51 def allowed_types(self):
52 return {self._class_name}
53
54
55 class _IsInstancePrecondition(_Precondition):
56 """Precondition that expects an instance of a class or subclass."""
57
58 def __init__(self, cls):
59 super(_IsInstancePrecondition, self).__init__()
60 self._cls = cls
61
62 def check(self, value):
63 if not isinstance(value, self._cls):
64 raise PreconditionError(
65 "actual=%s, expected_superclass=%s" % (
66 type(value).__name__, self._cls.__name__))
67
68 def allowed_types(self):
69 return {self._cls}
70
71
72 _REGISTERED_CLASSES = {}
73
74
75 def register(cls):
76 """Register a class object for use in {X} syntax."""
77 name = cls.__name__
78 assert name not in _REGISTERED_CLASSES
79 _REGISTERED_CLASSES[name] = _IsInstancePrecondition(cls)
80
81
82 class _TuplePrecondition(_Precondition):
83 """Precondition that expects a tuple."""
84
85 def __init__(self, element_condition):
86 super(_TuplePrecondition, self).__init__()
87 self._element_condition = element_condition
88
89 def check(self, value):
90 if not isinstance(value, tuple):
91 raise PreconditionError(
92 "actual=%s, expected=tuple" % type(value).__name__)
93 for v in value:
94 self._element_condition.check(v)
95
96 def allowed_types(self):
97 return self._element_condition.allowed_types()
98
99
100 class _OrPrecondition(_Precondition):
101 """Precondition that expects one of various choices to match."""
102
103 def __init__(self, choices):
104 super(_OrPrecondition, self).__init__()
105 self._choices = choices
106
107 def check(self, value):
108 errors = []
109 for c in self._choices:
110 try:
111 c.check(value)
112 return
113 except PreconditionError as e:
114 errors.append(e)
115 raise PreconditionError(
116 " or ".join("(%s)" % utils.message(e) for e in errors))
117
118 def allowed_types(self):
119 allowed = set()
120 for c in self._choices:
121 allowed |= c.allowed_types()
122 return allowed
123
124
125 class CallChecker(object):
126 """Class that performs argument checks against a collection of conditions."""
127
128 def __init__(self, condition_pairs):
129 """Create a checker given a sequence of (name, precondition) pairs."""
130 self._arg_sequence = tuple(condition_pairs)
131 self._arg_map = dict(self._arg_sequence)
132
133 def check(self, *args, **kwargs):
134 """Raise PreconditionError if the actual call is invalid."""
135 # This check is intended to be in addition to an actual call, so an
136 # incorrect number of args or undefined kwargs should be caught elsewhere.
137 for value, pair in zip(args, self._arg_sequence):
138 name, condition = pair
139 self._check_arg(condition, name, value)
140 for name, value in kwargs.items():
141 condition = self._arg_map.get(name)
142 self._check_arg(condition, name, value)
143
144 def _check_arg(self, condition, name, value):
145 if condition:
146 try:
147 condition.check(value)
148 except PreconditionError as e:
149 raise PreconditionError("argument=%s: %s." % (name, utils.message(e)))
150
151 def allowed_types(self):
152 """Determines the types and typenames allowed by calls to the checker.
153
154 Returns:
155 A set of types and/or typenames (strings). A typename matches
156 only that one class while a type matches any subclass of the type.
157 """
158 allowed = set()
159 for _, c in self._arg_sequence:
160 allowed |= c.allowed_types()
161 return allowed
162
163
164 # RE to match a single token. Leading whitepace is ignored.
165 _TOKEN_RE = re.compile(
166 r"\s*(?:(?P<literal>[[\]{}])|(?P<word>[a-zA-Z_]\w*))")
167
168 # Token codes (aside from literal characters)
169 _TOKEN_NAME = 1
170 _TOKEN_TUPLE = 2
171 _TOKEN_OR = 3
172
173 _RESERVED = {
174 "tuple": _TOKEN_TUPLE,
175 "or": _TOKEN_OR,
176 }
177
178
179 class _Parser(object):
180 """A parser for precondition specifications."""
181
182 def __init__(self, spec):
183 self._spec = spec.strip() # Must strip trailing whitespace.
184 self._pos = 0
185 self._pending_token = None
186
187 def parse(self):
188 """Parse the spec and return a precondition."""
189 cond = self._parse_or()
190 self._expect(None)
191 return cond
192
193 def _peek_token(self):
194 """Return the token code of the next token (do not consume token)."""
195 if self._pending_token is None:
196 self._pending_token = self._pop_token()
197 return self._pending_token[0]
198
199 def _pop_token(self):
200 """Consume the next token and return (token_code, token_val)."""
201 if self._pending_token is not None:
202 result = self._pending_token
203 self._pending_token = None
204 return result
205
206 if self._pos >= len(self._spec):
207 return None, None
208 m = _TOKEN_RE.match(self._spec, self._pos)
209 if not m:
210 raise ValueError("Syntax Error")
211 self._pos = m.end()
212 literal = m.group("literal")
213 if literal:
214 return literal, None
215 word = m.group("word")
216 t = _RESERVED.get(word)
217 if t:
218 return t, None
219 else:
220 return _TOKEN_NAME, word
221
222 def _expect(self, expected_code):
223 """Pop the next token, raise a ValueError if the code does not match."""
224 t, val = self._pop_token() # pylint: disable=unpacking-non-sequence
225 if t != expected_code:
226 raise ValueError("Syntax Error")
227 return val
228
229 def _parse_or(self):
230 """Parse one or more conditions separated by "or"."""
231 choices = [self._parse_one()]
232 while self._peek_token() == _TOKEN_OR:
233 self._pop_token()
234 choices.append(self._parse_one())
235 if len(choices) == 1:
236 return choices[0]
237 else:
238 return _OrPrecondition(choices)
239
240 def _parse_one(self):
241 """Parse a single condition (not including "or")."""
242 t, val = self._pop_token() # pylint: disable=unpacking-non-sequence
243 if t == _TOKEN_NAME:
244 return _ClassNamePrecondition(val if val != "None" else "NoneType")
245 elif t == "{":
246 name = self._expect(_TOKEN_NAME)
247 self._expect("}")
248 cond = _REGISTERED_CLASSES.get(name)
249 if cond is None:
250 raise ValueError(
251 "Class '%s' is not registered for preconditions." % name)
252 return cond
253 elif t == _TOKEN_TUPLE:
254 self._expect("[")
255 element = self._parse_or()
256 self._expect("]")
257 return _TuplePrecondition(element)
258 raise ValueError("Syntax Error")
259
260
261 def parse(spec):
262 """Return a _Precondition for the given string."""
263 return _Parser(spec).parse()
264
265
266 def parse_arg(arg_spec):
267 """Return (name, precondition) or (name, None) for given argument spec."""
268 name, _, spec = arg_spec.partition(":")
269 return name, parse(spec) if spec else None
270
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pytype/pytd/parse/preconditions.py b/pytype/pytd/parse/preconditions.py
--- a/pytype/pytd/parse/preconditions.py
+++ b/pytype/pytd/parse/preconditions.py
@@ -163,7 +163,7 @@
# RE to match a single token. Leading whitepace is ignored.
_TOKEN_RE = re.compile(
- r"\s*(?:(?P<literal>[[\]{}])|(?P<word>[a-zA-Z_]\w*))")
+ r"\s*(?:(?P<literal>[\[\]{}])|(?P<word>[a-zA-Z_]\w*))")
# Token codes (aside from literal characters)
_TOKEN_NAME = 1
| {"golden_diff": "diff --git a/pytype/pytd/parse/preconditions.py b/pytype/pytd/parse/preconditions.py\n--- a/pytype/pytd/parse/preconditions.py\n+++ b/pytype/pytd/parse/preconditions.py\n@@ -163,7 +163,7 @@\n \n # RE to match a single token. Leading whitepace is ignored.\n _TOKEN_RE = re.compile(\n- r\"\\s*(?:(?P<literal>[[\\]{}])|(?P<word>[a-zA-Z_]\\w*))\")\n+ r\"\\s*(?:(?P<literal>[\\[\\]{}])|(?P<word>[a-zA-Z_]\\w*))\")\n \n # Token codes (aside from literal characters)\n _TOKEN_NAME = 1\n", "issue": "Future-proof preconditions.py:_TOKEN_RE\n3.7 adds a warning about possible future changes to re: https://bugs.python.org/issue30349\r\nA future version of python will add nested sets, which allows nesting of sets using `[...]`. Escape the inner `[` in the re so it doesn't trigger a nested set.\r\nCloses #140.\n", "before_files": [{"content": "# Copyright 2016 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Preconditions for automatic argument checking.\"\"\"\n\nimport re\n\nfrom pytype import utils\n\n\nclass PreconditionError(ValueError):\n pass\n\n\nclass _Precondition(object):\n \"\"\"Base class for preconditions.\"\"\"\n\n def check(self, value):\n \"\"\"Raise PreconditionError if value does not match condition.\"\"\"\n raise NotImplementedError\n\n def allowed_types(self):\n \"\"\"Returns a set of types or typenames that are allowed.\"\"\"\n raise NotImplementedError\n\n\nclass _ClassNamePrecondition(_Precondition):\n \"\"\"Precondition that expects an instance of a specific class.\"\"\"\n\n def __init__(self, class_name):\n super(_ClassNamePrecondition, self).__init__()\n self._class_name = class_name\n\n def check(self, value):\n actual = type(value).__name__\n if actual != self._class_name:\n raise PreconditionError(\n \"actual=%s, expected=%s\" % (actual, self._class_name))\n\n def allowed_types(self):\n return {self._class_name}\n\n\nclass _IsInstancePrecondition(_Precondition):\n \"\"\"Precondition that expects an instance of a class or subclass.\"\"\"\n\n def __init__(self, cls):\n super(_IsInstancePrecondition, self).__init__()\n self._cls = cls\n\n def check(self, value):\n if not isinstance(value, self._cls):\n raise PreconditionError(\n \"actual=%s, expected_superclass=%s\" % (\n type(value).__name__, self._cls.__name__))\n\n def allowed_types(self):\n return {self._cls}\n\n\n_REGISTERED_CLASSES = {}\n\n\ndef register(cls):\n \"\"\"Register a class object for use in {X} syntax.\"\"\"\n name = cls.__name__\n assert name not in _REGISTERED_CLASSES\n _REGISTERED_CLASSES[name] = _IsInstancePrecondition(cls)\n\n\nclass _TuplePrecondition(_Precondition):\n \"\"\"Precondition that expects a tuple.\"\"\"\n\n def __init__(self, element_condition):\n super(_TuplePrecondition, self).__init__()\n self._element_condition = element_condition\n\n def check(self, value):\n if not isinstance(value, tuple):\n raise PreconditionError(\n \"actual=%s, expected=tuple\" % type(value).__name__)\n for v in value:\n self._element_condition.check(v)\n\n def 
allowed_types(self):\n return self._element_condition.allowed_types()\n\n\nclass _OrPrecondition(_Precondition):\n \"\"\"Precondition that expects one of various choices to match.\"\"\"\n\n def __init__(self, choices):\n super(_OrPrecondition, self).__init__()\n self._choices = choices\n\n def check(self, value):\n errors = []\n for c in self._choices:\n try:\n c.check(value)\n return\n except PreconditionError as e:\n errors.append(e)\n raise PreconditionError(\n \" or \".join(\"(%s)\" % utils.message(e) for e in errors))\n\n def allowed_types(self):\n allowed = set()\n for c in self._choices:\n allowed |= c.allowed_types()\n return allowed\n\n\nclass CallChecker(object):\n \"\"\"Class that performs argument checks against a collection of conditions.\"\"\"\n\n def __init__(self, condition_pairs):\n \"\"\"Create a checker given a sequence of (name, precondition) pairs.\"\"\"\n self._arg_sequence = tuple(condition_pairs)\n self._arg_map = dict(self._arg_sequence)\n\n def check(self, *args, **kwargs):\n \"\"\"Raise PreconditionError if the actual call is invalid.\"\"\"\n # This check is intended to be in addition to an actual call, so an\n # incorrect number of args or undefined kwargs should be caught elsewhere.\n for value, pair in zip(args, self._arg_sequence):\n name, condition = pair\n self._check_arg(condition, name, value)\n for name, value in kwargs.items():\n condition = self._arg_map.get(name)\n self._check_arg(condition, name, value)\n\n def _check_arg(self, condition, name, value):\n if condition:\n try:\n condition.check(value)\n except PreconditionError as e:\n raise PreconditionError(\"argument=%s: %s.\" % (name, utils.message(e)))\n\n def allowed_types(self):\n \"\"\"Determines the types and typenames allowed by calls to the checker.\n\n Returns:\n A set of types and/or typenames (strings). A typename matches\n only that one class while a type matches any subclass of the type.\n \"\"\"\n allowed = set()\n for _, c in self._arg_sequence:\n allowed |= c.allowed_types()\n return allowed\n\n\n# RE to match a single token. 
Leading whitepace is ignored.\n_TOKEN_RE = re.compile(\n r\"\\s*(?:(?P<literal>[[\\]{}])|(?P<word>[a-zA-Z_]\\w*))\")\n\n# Token codes (aside from literal characters)\n_TOKEN_NAME = 1\n_TOKEN_TUPLE = 2\n_TOKEN_OR = 3\n\n_RESERVED = {\n \"tuple\": _TOKEN_TUPLE,\n \"or\": _TOKEN_OR,\n}\n\n\nclass _Parser(object):\n \"\"\"A parser for precondition specifications.\"\"\"\n\n def __init__(self, spec):\n self._spec = spec.strip() # Must strip trailing whitespace.\n self._pos = 0\n self._pending_token = None\n\n def parse(self):\n \"\"\"Parse the spec and return a precondition.\"\"\"\n cond = self._parse_or()\n self._expect(None)\n return cond\n\n def _peek_token(self):\n \"\"\"Return the token code of the next token (do not consume token).\"\"\"\n if self._pending_token is None:\n self._pending_token = self._pop_token()\n return self._pending_token[0]\n\n def _pop_token(self):\n \"\"\"Consume the next token and return (token_code, token_val).\"\"\"\n if self._pending_token is not None:\n result = self._pending_token\n self._pending_token = None\n return result\n\n if self._pos >= len(self._spec):\n return None, None\n m = _TOKEN_RE.match(self._spec, self._pos)\n if not m:\n raise ValueError(\"Syntax Error\")\n self._pos = m.end()\n literal = m.group(\"literal\")\n if literal:\n return literal, None\n word = m.group(\"word\")\n t = _RESERVED.get(word)\n if t:\n return t, None\n else:\n return _TOKEN_NAME, word\n\n def _expect(self, expected_code):\n \"\"\"Pop the next token, raise a ValueError if the code does not match.\"\"\"\n t, val = self._pop_token() # pylint: disable=unpacking-non-sequence\n if t != expected_code:\n raise ValueError(\"Syntax Error\")\n return val\n\n def _parse_or(self):\n \"\"\"Parse one or more conditions separated by \"or\".\"\"\"\n choices = [self._parse_one()]\n while self._peek_token() == _TOKEN_OR:\n self._pop_token()\n choices.append(self._parse_one())\n if len(choices) == 1:\n return choices[0]\n else:\n return _OrPrecondition(choices)\n\n def _parse_one(self):\n \"\"\"Parse a single condition (not including \"or\").\"\"\"\n t, val = self._pop_token() # pylint: disable=unpacking-non-sequence\n if t == _TOKEN_NAME:\n return _ClassNamePrecondition(val if val != \"None\" else \"NoneType\")\n elif t == \"{\":\n name = self._expect(_TOKEN_NAME)\n self._expect(\"}\")\n cond = _REGISTERED_CLASSES.get(name)\n if cond is None:\n raise ValueError(\n \"Class '%s' is not registered for preconditions.\" % name)\n return cond\n elif t == _TOKEN_TUPLE:\n self._expect(\"[\")\n element = self._parse_or()\n self._expect(\"]\")\n return _TuplePrecondition(element)\n raise ValueError(\"Syntax Error\")\n\n\ndef parse(spec):\n \"\"\"Return a _Precondition for the given string.\"\"\"\n return _Parser(spec).parse()\n\n\ndef parse_arg(arg_spec):\n \"\"\"Return (name, precondition) or (name, None) for given argument spec.\"\"\"\n name, _, spec = arg_spec.partition(\":\")\n return name, parse(spec) if spec else None\n", "path": "pytype/pytd/parse/preconditions.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Preconditions for automatic argument checking.\"\"\"\n\nimport re\n\nfrom pytype import utils\n\n\nclass PreconditionError(ValueError):\n pass\n\n\nclass _Precondition(object):\n \"\"\"Base class for preconditions.\"\"\"\n\n def check(self, value):\n \"\"\"Raise PreconditionError if value does not match condition.\"\"\"\n raise NotImplementedError\n\n def allowed_types(self):\n \"\"\"Returns a set of types or typenames that are allowed.\"\"\"\n raise NotImplementedError\n\n\nclass _ClassNamePrecondition(_Precondition):\n \"\"\"Precondition that expects an instance of a specific class.\"\"\"\n\n def __init__(self, class_name):\n super(_ClassNamePrecondition, self).__init__()\n self._class_name = class_name\n\n def check(self, value):\n actual = type(value).__name__\n if actual != self._class_name:\n raise PreconditionError(\n \"actual=%s, expected=%s\" % (actual, self._class_name))\n\n def allowed_types(self):\n return {self._class_name}\n\n\nclass _IsInstancePrecondition(_Precondition):\n \"\"\"Precondition that expects an instance of a class or subclass.\"\"\"\n\n def __init__(self, cls):\n super(_IsInstancePrecondition, self).__init__()\n self._cls = cls\n\n def check(self, value):\n if not isinstance(value, self._cls):\n raise PreconditionError(\n \"actual=%s, expected_superclass=%s\" % (\n type(value).__name__, self._cls.__name__))\n\n def allowed_types(self):\n return {self._cls}\n\n\n_REGISTERED_CLASSES = {}\n\n\ndef register(cls):\n \"\"\"Register a class object for use in {X} syntax.\"\"\"\n name = cls.__name__\n assert name not in _REGISTERED_CLASSES\n _REGISTERED_CLASSES[name] = _IsInstancePrecondition(cls)\n\n\nclass _TuplePrecondition(_Precondition):\n \"\"\"Precondition that expects a tuple.\"\"\"\n\n def __init__(self, element_condition):\n super(_TuplePrecondition, self).__init__()\n self._element_condition = element_condition\n\n def check(self, value):\n if not isinstance(value, tuple):\n raise PreconditionError(\n \"actual=%s, expected=tuple\" % type(value).__name__)\n for v in value:\n self._element_condition.check(v)\n\n def allowed_types(self):\n return self._element_condition.allowed_types()\n\n\nclass _OrPrecondition(_Precondition):\n \"\"\"Precondition that expects one of various choices to match.\"\"\"\n\n def __init__(self, choices):\n super(_OrPrecondition, self).__init__()\n self._choices = choices\n\n def check(self, value):\n errors = []\n for c in self._choices:\n try:\n c.check(value)\n return\n except PreconditionError as e:\n errors.append(e)\n raise PreconditionError(\n \" or \".join(\"(%s)\" % utils.message(e) for e in errors))\n\n def allowed_types(self):\n allowed = set()\n for c in self._choices:\n allowed |= c.allowed_types()\n return allowed\n\n\nclass CallChecker(object):\n \"\"\"Class that performs argument checks against a collection of conditions.\"\"\"\n\n def __init__(self, condition_pairs):\n \"\"\"Create a checker given a sequence of (name, precondition) 
pairs.\"\"\"\n self._arg_sequence = tuple(condition_pairs)\n self._arg_map = dict(self._arg_sequence)\n\n def check(self, *args, **kwargs):\n \"\"\"Raise PreconditionError if the actual call is invalid.\"\"\"\n # This check is intended to be in addition to an actual call, so an\n # incorrect number of args or undefined kwargs should be caught elsewhere.\n for value, pair in zip(args, self._arg_sequence):\n name, condition = pair\n self._check_arg(condition, name, value)\n for name, value in kwargs.items():\n condition = self._arg_map.get(name)\n self._check_arg(condition, name, value)\n\n def _check_arg(self, condition, name, value):\n if condition:\n try:\n condition.check(value)\n except PreconditionError as e:\n raise PreconditionError(\"argument=%s: %s.\" % (name, utils.message(e)))\n\n def allowed_types(self):\n \"\"\"Determines the types and typenames allowed by calls to the checker.\n\n Returns:\n A set of types and/or typenames (strings). A typename matches\n only that one class while a type matches any subclass of the type.\n \"\"\"\n allowed = set()\n for _, c in self._arg_sequence:\n allowed |= c.allowed_types()\n return allowed\n\n\n# RE to match a single token. Leading whitepace is ignored.\n_TOKEN_RE = re.compile(\n r\"\\s*(?:(?P<literal>[\\[\\]{}])|(?P<word>[a-zA-Z_]\\w*))\")\n\n# Token codes (aside from literal characters)\n_TOKEN_NAME = 1\n_TOKEN_TUPLE = 2\n_TOKEN_OR = 3\n\n_RESERVED = {\n \"tuple\": _TOKEN_TUPLE,\n \"or\": _TOKEN_OR,\n}\n\n\nclass _Parser(object):\n \"\"\"A parser for precondition specifications.\"\"\"\n\n def __init__(self, spec):\n self._spec = spec.strip() # Must strip trailing whitespace.\n self._pos = 0\n self._pending_token = None\n\n def parse(self):\n \"\"\"Parse the spec and return a precondition.\"\"\"\n cond = self._parse_or()\n self._expect(None)\n return cond\n\n def _peek_token(self):\n \"\"\"Return the token code of the next token (do not consume token).\"\"\"\n if self._pending_token is None:\n self._pending_token = self._pop_token()\n return self._pending_token[0]\n\n def _pop_token(self):\n \"\"\"Consume the next token and return (token_code, token_val).\"\"\"\n if self._pending_token is not None:\n result = self._pending_token\n self._pending_token = None\n return result\n\n if self._pos >= len(self._spec):\n return None, None\n m = _TOKEN_RE.match(self._spec, self._pos)\n if not m:\n raise ValueError(\"Syntax Error\")\n self._pos = m.end()\n literal = m.group(\"literal\")\n if literal:\n return literal, None\n word = m.group(\"word\")\n t = _RESERVED.get(word)\n if t:\n return t, None\n else:\n return _TOKEN_NAME, word\n\n def _expect(self, expected_code):\n \"\"\"Pop the next token, raise a ValueError if the code does not match.\"\"\"\n t, val = self._pop_token() # pylint: disable=unpacking-non-sequence\n if t != expected_code:\n raise ValueError(\"Syntax Error\")\n return val\n\n def _parse_or(self):\n \"\"\"Parse one or more conditions separated by \"or\".\"\"\"\n choices = [self._parse_one()]\n while self._peek_token() == _TOKEN_OR:\n self._pop_token()\n choices.append(self._parse_one())\n if len(choices) == 1:\n return choices[0]\n else:\n return _OrPrecondition(choices)\n\n def _parse_one(self):\n \"\"\"Parse a single condition (not including \"or\").\"\"\"\n t, val = self._pop_token() # pylint: disable=unpacking-non-sequence\n if t == _TOKEN_NAME:\n return _ClassNamePrecondition(val if val != \"None\" else \"NoneType\")\n elif t == \"{\":\n name = self._expect(_TOKEN_NAME)\n self._expect(\"}\")\n cond = 
_REGISTERED_CLASSES.get(name)\n if cond is None:\n raise ValueError(\n \"Class '%s' is not registered for preconditions.\" % name)\n return cond\n elif t == _TOKEN_TUPLE:\n self._expect(\"[\")\n element = self._parse_or()\n self._expect(\"]\")\n return _TuplePrecondition(element)\n raise ValueError(\"Syntax Error\")\n\n\ndef parse(spec):\n \"\"\"Return a _Precondition for the given string.\"\"\"\n return _Parser(spec).parse()\n\n\ndef parse_arg(arg_spec):\n \"\"\"Return (name, precondition) or (name, None) for given argument spec.\"\"\"\n name, _, spec = arg_spec.partition(\":\")\n return name, parse(spec) if spec else None\n", "path": "pytype/pytd/parse/preconditions.py"}]} | 2,992 | 166 |
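The one-character change above is easy to miss: inside a character class a bare `[` is what Python 3.7+ flags as a possible future nested set (bpo-30349), while the escaped `\[` is unambiguous and matches the same literal. A small sketch to check that the patched pattern still tokenizes a spec the same way (the spec string here is made up for illustration):

```python
import re

# Patched pattern from the diff: the '[' inside the character class is escaped.
_TOKEN_RE = re.compile(r"\s*(?:(?P<literal>[\[\]{}])|(?P<word>[a-zA-Z_]\w*))")

for m in _TOKEN_RE.finditer("tuple[{Foo} or None]"):
    print(m.group("literal") or m.group("word"))
# one token per line: tuple [ { Foo } or None ]
```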
gh_patches_debug_8696 | rasdani/github-patches | git_diff | easybuilders__easybuild-framework-757 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
sanity_check_commands doesn't work for ipython
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `easybuild/framework/extensioneasyblock.py`
Content:
```
1 ##
2 # Copyright 2013 Ghent University
3 #
4 # This file is part of EasyBuild,
5 # originally created by the HPC team of the University of Ghent (http://ugent.be/hpc).
6 #
7 # http://github.com/hpcugent/easybuild
8 #
9 # EasyBuild is free software: you can redistribute it and/or modify
10 # it under the terms of the GNU General Public License as published by
11 # the Free Software Foundation v2.
12 #
13 # EasyBuild is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU General Public License for more details.
17 #
18 # You should have received a copy of the GNU General Public License
19 # along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.
20 ##
21 """
22 EasyBuild support for building and installing extensions as actual extensions or as stand-alone modules,
23 implemented as an easyblock
24
25 @author: Kenneth Hoste (Ghent University)
26 """
27 import copy
28 import os
29
30 from easybuild.framework.easyblock import EasyBlock
31 from easybuild.framework.easyconfig import CUSTOM
32 from easybuild.framework.extension import Extension
33 from easybuild.tools.filetools import apply_patch, extract_file
34 from easybuild.tools.utilities import remove_unwanted_chars
35
36
37 class ExtensionEasyBlock(EasyBlock, Extension):
38 """
39 Install an extension as a separate module, or as an extension.
40
41 Deriving classes should implement the following functions:
42 * required EasyBlock functions:
43 - configure_step
44 - build_step
45 - install_step
46 * required Extension functions
47 - run
48 """
49
50 @staticmethod
51 def extra_options(extra_vars=None):
52 """Extra easyconfig parameters specific to ExtensionEasyBlock."""
53
54 # using [] as default value is a bad idea, so we handle it this way
55 if extra_vars is None:
56 extra_vars = []
57
58 extra_vars.extend([
59 ('options', [{}, "Dictionary with extension options.", CUSTOM]),
60 ])
61 return EasyBlock.extra_options(extra_vars)
62
63 def __init__(self, *args, **kwargs):
64 """Initialize either as EasyBlock or as Extension."""
65
66 self.is_extension = False
67
68 if isinstance(args[0], EasyBlock):
69 Extension.__init__(self, *args, **kwargs)
70 # name and version properties of EasyBlock are used, so make sure name and version are correct
71 self.cfg['name'] = self.ext.get('name', None)
72 self.cfg['version'] = self.ext.get('version', None)
73 self.builddir = self.master.builddir
74 self.installdir = self.master.installdir
75 self.is_extension = True
76 self.unpack_options = None
77 else:
78 EasyBlock.__init__(self, *args, **kwargs)
79 self.options = copy.deepcopy(self.cfg.get('options', {})) # we need this for Extension.sanity_check_step
80
81 self.ext_dir = None # dir where extension source was unpacked
82
83 def run(self, unpack_src=False):
84 """Common operations for extensions: unpacking sources, patching, ..."""
85
86 # unpack file if desired
87 if unpack_src:
88 targetdir = os.path.join(self.master.builddir, remove_unwanted_chars(self.name))
89 self.ext_dir = extract_file("%s" % self.src, targetdir, extra_options=self.unpack_options)
90
91 # patch if needed
92 if self.patches:
93 for patchfile in self.patches:
94 if not apply_patch(patchfile, self.ext_dir):
95 self.log.error("Applying patch %s failed" % patchfile)
96
97 def sanity_check_step(self, exts_filter=None, custom_paths=None, custom_commands=None):
98 """
99 Custom sanity check for extensions, whether installed as stand-alone module or not
100 """
101 if not self.cfg['exts_filter']:
102 self.cfg['exts_filter'] = exts_filter
103 self.log.debug("starting sanity check for extension with filter %s", self.cfg['exts_filter'])
104
105 if not self.is_extension:
106 # load fake module
107 fake_mod_data = self.load_fake_module(purge=True)
108
109 # perform sanity check
110 sanity_check_ok = Extension.sanity_check_step(self)
111
112 if not self.is_extension:
113 # unload fake module and clean up
114 self.clean_up_fake_module(fake_mod_data)
115
116 if custom_paths or custom_commands:
117 EasyBlock.sanity_check_step(self, custom_paths=custom_paths, custom_commands=custom_commands,
118 extension=self.is_extension)
119
120 # pass or fail sanity check
121 if not sanity_check_ok:
122 msg = "Sanity check for %s failed: %s" % (self.name, '; '.join(self.sanity_check_fail_msgs))
123 if self.is_extension:
124 self.log.warning(msg)
125 else:
126 self.log.error(msg)
127 return False
128 else:
129 self.log.info("Sanity check for %s successful!" % self.name)
130 return True
131
132 def make_module_extra(self, extra=None):
133 """Add custom entries to module."""
134
135 txt = EasyBlock.make_module_extra(self)
136 if not extra is None:
137 txt += extra
138 return txt
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/easybuild/framework/extensioneasyblock.py b/easybuild/framework/extensioneasyblock.py
--- a/easybuild/framework/extensioneasyblock.py
+++ b/easybuild/framework/extensioneasyblock.py
@@ -113,7 +113,7 @@
# unload fake module and clean up
self.clean_up_fake_module(fake_mod_data)
- if custom_paths or custom_commands:
+ if custom_paths or self.cfg['sanity_check_paths'] or custom_commands or self.cfg['sanity_check_commands']:
EasyBlock.sanity_check_step(self, custom_paths=custom_paths, custom_commands=custom_commands,
extension=self.is_extension)
| {"golden_diff": "diff --git a/easybuild/framework/extensioneasyblock.py b/easybuild/framework/extensioneasyblock.py\n--- a/easybuild/framework/extensioneasyblock.py\n+++ b/easybuild/framework/extensioneasyblock.py\n@@ -113,7 +113,7 @@\n # unload fake module and clean up\n self.clean_up_fake_module(fake_mod_data)\n \n- if custom_paths or custom_commands:\n+ if custom_paths or self.cfg['sanity_check_paths'] or custom_commands or self.cfg['sanity_check_commands']:\n EasyBlock.sanity_check_step(self, custom_paths=custom_paths, custom_commands=custom_commands,\n extension=self.is_extension)\n", "issue": "santiy_check_commands doesn't work for ipython\n\n", "before_files": [{"content": "##\n# Copyright 2013 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of the University of Ghent (http://ugent.be/hpc).\n#\n# http://github.com/hpcugent/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nEasyBuild support for building and installing extensions as actual extensions or as stand-alone modules,\nimplemented as an easyblock\n\n@author: Kenneth Hoste (Ghent University)\n\"\"\"\nimport copy\nimport os\n\nfrom easybuild.framework.easyblock import EasyBlock\nfrom easybuild.framework.easyconfig import CUSTOM\nfrom easybuild.framework.extension import Extension\nfrom easybuild.tools.filetools import apply_patch, extract_file\nfrom easybuild.tools.utilities import remove_unwanted_chars\n\n\nclass ExtensionEasyBlock(EasyBlock, Extension):\n \"\"\"\n Install an extension as a separate module, or as an extension.\n\n Deriving classes should implement the following functions:\n * required EasyBlock functions:\n - configure_step\n - build_step\n - install_step\n * required Extension functions\n - run\n \"\"\"\n\n @staticmethod\n def extra_options(extra_vars=None):\n \"\"\"Extra easyconfig parameters specific to ExtensionEasyBlock.\"\"\"\n\n # using [] as default value is a bad idea, so we handle it this way\n if extra_vars is None:\n extra_vars = []\n\n extra_vars.extend([\n ('options', [{}, \"Dictionary with extension options.\", CUSTOM]),\n ])\n return EasyBlock.extra_options(extra_vars)\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initialize either as EasyBlock or as Extension.\"\"\"\n\n self.is_extension = False\n\n if isinstance(args[0], EasyBlock):\n Extension.__init__(self, *args, **kwargs)\n # name and version properties of EasyBlock are used, so make sure name and version are correct\n self.cfg['name'] = self.ext.get('name', None)\n self.cfg['version'] = self.ext.get('version', None)\n self.builddir = self.master.builddir\n self.installdir = self.master.installdir\n self.is_extension = True\n self.unpack_options = None\n else:\n EasyBlock.__init__(self, *args, **kwargs)\n self.options = copy.deepcopy(self.cfg.get('options', {})) # we need this for Extension.sanity_check_step\n\n self.ext_dir = None # dir where extension source was unpacked\n\n def run(self, unpack_src=False):\n \"\"\"Common operations for extensions: unpacking 
sources, patching, ...\"\"\"\n\n # unpack file if desired\n if unpack_src:\n targetdir = os.path.join(self.master.builddir, remove_unwanted_chars(self.name))\n self.ext_dir = extract_file(\"%s\" % self.src, targetdir, extra_options=self.unpack_options)\n\n # patch if needed\n if self.patches:\n for patchfile in self.patches:\n if not apply_patch(patchfile, self.ext_dir):\n self.log.error(\"Applying patch %s failed\" % patchfile)\n\n def sanity_check_step(self, exts_filter=None, custom_paths=None, custom_commands=None):\n \"\"\"\n Custom sanity check for extensions, whether installed as stand-alone module or not\n \"\"\"\n if not self.cfg['exts_filter']:\n self.cfg['exts_filter'] = exts_filter\n self.log.debug(\"starting sanity check for extension with filter %s\", self.cfg['exts_filter'])\n\n if not self.is_extension:\n # load fake module\n fake_mod_data = self.load_fake_module(purge=True)\n\n # perform sanity check\n sanity_check_ok = Extension.sanity_check_step(self)\n\n if not self.is_extension:\n # unload fake module and clean up\n self.clean_up_fake_module(fake_mod_data)\n\n if custom_paths or custom_commands:\n EasyBlock.sanity_check_step(self, custom_paths=custom_paths, custom_commands=custom_commands,\n extension=self.is_extension)\n\n # pass or fail sanity check\n if not sanity_check_ok:\n msg = \"Sanity check for %s failed: %s\" % (self.name, '; '.join(self.sanity_check_fail_msgs))\n if self.is_extension:\n self.log.warning(msg)\n else:\n self.log.error(msg)\n return False\n else:\n self.log.info(\"Sanity check for %s successful!\" % self.name)\n return True\n\n def make_module_extra(self, extra=None):\n \"\"\"Add custom entries to module.\"\"\"\n\n txt = EasyBlock.make_module_extra(self)\n if not extra is None:\n txt += extra\n return txt\n", "path": "easybuild/framework/extensioneasyblock.py"}], "after_files": [{"content": "##\n# Copyright 2013 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of the University of Ghent (http://ugent.be/hpc).\n#\n# http://github.com/hpcugent/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. 
If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nEasyBuild support for building and installing extensions as actual extensions or as stand-alone modules,\nimplemented as an easyblock\n\n@author: Kenneth Hoste (Ghent University)\n\"\"\"\nimport copy\nimport os\n\nfrom easybuild.framework.easyblock import EasyBlock\nfrom easybuild.framework.easyconfig import CUSTOM\nfrom easybuild.framework.extension import Extension\nfrom easybuild.tools.filetools import apply_patch, extract_file\nfrom easybuild.tools.utilities import remove_unwanted_chars\n\n\nclass ExtensionEasyBlock(EasyBlock, Extension):\n \"\"\"\n Install an extension as a separate module, or as an extension.\n\n Deriving classes should implement the following functions:\n * required EasyBlock functions:\n - configure_step\n - build_step\n - install_step\n * required Extension functions\n - run\n \"\"\"\n\n @staticmethod\n def extra_options(extra_vars=None):\n \"\"\"Extra easyconfig parameters specific to ExtensionEasyBlock.\"\"\"\n\n # using [] as default value is a bad idea, so we handle it this way\n if extra_vars is None:\n extra_vars = []\n\n extra_vars.extend([\n ('options', [{}, \"Dictionary with extension options.\", CUSTOM]),\n ])\n return EasyBlock.extra_options(extra_vars)\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initialize either as EasyBlock or as Extension.\"\"\"\n\n self.is_extension = False\n\n if isinstance(args[0], EasyBlock):\n Extension.__init__(self, *args, **kwargs)\n # name and version properties of EasyBlock are used, so make sure name and version are correct\n self.cfg['name'] = self.ext.get('name', None)\n self.cfg['version'] = self.ext.get('version', None)\n self.builddir = self.master.builddir\n self.installdir = self.master.installdir\n self.is_extension = True\n self.unpack_options = None\n else:\n EasyBlock.__init__(self, *args, **kwargs)\n self.options = copy.deepcopy(self.cfg.get('options', {})) # we need this for Extension.sanity_check_step\n\n self.ext_dir = None # dir where extension source was unpacked\n\n def run(self, unpack_src=False):\n \"\"\"Common operations for extensions: unpacking sources, patching, ...\"\"\"\n\n # unpack file if desired\n if unpack_src:\n targetdir = os.path.join(self.master.builddir, remove_unwanted_chars(self.name))\n self.ext_dir = extract_file(\"%s\" % self.src, targetdir, extra_options=self.unpack_options)\n\n # patch if needed\n if self.patches:\n for patchfile in self.patches:\n if not apply_patch(patchfile, self.ext_dir):\n self.log.error(\"Applying patch %s failed\" % patchfile)\n\n def sanity_check_step(self, exts_filter=None, custom_paths=None, custom_commands=None):\n \"\"\"\n Custom sanity check for extensions, whether installed as stand-alone module or not\n \"\"\"\n if not self.cfg['exts_filter']:\n self.cfg['exts_filter'] = exts_filter\n self.log.debug(\"starting sanity check for extension with filter %s\", self.cfg['exts_filter'])\n\n if not self.is_extension:\n # load fake module\n fake_mod_data = self.load_fake_module(purge=True)\n\n # perform sanity check\n sanity_check_ok = Extension.sanity_check_step(self)\n\n if not self.is_extension:\n # unload fake module and clean up\n self.clean_up_fake_module(fake_mod_data)\n\n if custom_paths or self.cfg['sanity_check_paths'] or custom_commands or self.cfg['sanity_check_commands']:\n EasyBlock.sanity_check_step(self, custom_paths=custom_paths, custom_commands=custom_commands,\n extension=self.is_extension)\n\n # pass or fail sanity check\n if not sanity_check_ok:\n msg = \"Sanity check for %s 
failed: %s\" % (self.name, '; '.join(self.sanity_check_fail_msgs))\n if self.is_extension:\n self.log.warning(msg)\n else:\n self.log.error(msg)\n return False\n else:\n self.log.info(\"Sanity check for %s successful!\" % self.name)\n return True\n\n def make_module_extra(self, extra=None):\n \"\"\"Add custom entries to module.\"\"\"\n\n txt = EasyBlock.make_module_extra(self)\n if not extra is None:\n txt += extra\n return txt\n", "path": "easybuild/framework/extensioneasyblock.py"}]} | 1,702 | 144 |
gh_patches_debug_3275 | rasdani/github-patches | git_diff | apache__tvm-6502 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[TOPI] Typo in operator key
https://github.com/apache/incubator-tvm/blob/bdfefbb03f5aab96ee677ee28a166dd6ab5dbf3f/python/tvm/topi/bifrost/dense.py#L26
"biforst" should be "bifrost". This bug makes the op totally unavailable in Relay.
I can fix this bug if expected, but I don't know how to add a proper test.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/tvm/topi/bifrost/dense.py`
Content:
```
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17 # pylint: disable=invalid-name,unused-variable
18 """dense schedule on ARM Mali Biforst GPU"""
19 from tvm import te
20 from tvm import autotvm
21
22 from .. import nn
23 from ..util import traverse_inline
24
25
26 @autotvm.register_topi_compute("dense.biforst")
27 def dense(_, data, weight, bias=None, out_dtype=None):
28 """Dense operator on Biforst"""
29 return nn.dense(data, weight, bias, out_dtype)
30
31
32 @autotvm.register_topi_schedule("dense.bifrost")
33 def schedule_dense(cfg, outs):
34 """Schedule for dense operator.
35
36 Parameters
37 ----------
38 cfg: ConfigEntity
39 The config entity for this template
40 outs: Array of Tensor
41 The computation graph description of dense
42 in the format of an array of tensors.
43
44 Returns
45 -------
46 s: Schedule
47 The computation schedule for dense.
48 """
49 outs = [outs] if isinstance(outs, te.tensor.Tensor) else outs
50 s = te.create_schedule([x.op for x in outs])
51
52 def _callback(op):
53 if op.tag == "dense":
54 vec_size = [1, 2, 4, 8, 16]
55 max_unroll = 32
56
57 dense_out = op.output(0)
58 output = outs[0]
59
60 y, x = s[output].op.axis
61 c = s[dense_out].op.reduce_axis[0]
62
63 ##### space definition begin #####
64 cfg.define_split("tile_y", y, num_outputs=3)
65 cfg.define_split("tile_x", x, num_outputs=3)
66 cfg.define_split("c_unroll", c, num_outputs=2, max_factor=64)
67
68 # fallback support
69 if cfg.is_fallback:
70 ref_log = autotvm.tophub.load_reference_log("mali", "rk3399", "dense.bifrost")
71 cfg.fallback_with_reference_log(ref_log)
72 ##### space definition end #####
73
74 if dense_out.op in s.outputs:
75 dense_out = s.cache_write(output, "local")
76
77 by, ty, yi = cfg["tile_y"].apply(s, output, y)
78 bx, tx, xi = cfg["tile_x"].apply(s, output, x)
79
80 s[output].bind(by, te.thread_axis("blockIdx.y"))
81 s[output].bind(bx, te.thread_axis("blockIdx.x"))
82 s[output].bind(ty, te.thread_axis("threadIdx.y"))
83 s[output].bind(tx, te.thread_axis("threadIdx.x"))
84
85 if cfg["tile_y"].size[-1] < max_unroll:
86 s[output].unroll(yi)
87 if cfg["tile_x"].size[-1] in vec_size:
88 s[output].vectorize(xi)
89 s[dense_out].compute_at(s[output], tx)
90
91 k = s[dense_out].op.reduce_axis[0]
92 y, x = s[dense_out].op.axis
93 k, k_unroll = cfg["c_unroll"].apply(s, dense_out, k)
94 s[dense_out].reorder(k, k_unroll, y, x)
95 s[dense_out].unroll(k_unroll)
96 if cfg["tile_y"].size[-1] < max_unroll:
97 s[dense_out].unroll(y)
98 if cfg["tile_x"].size[-1] in vec_size:
99 s[dense_out].vectorize(x)
100
101 traverse_inline(s, outs[0].op, _callback)
102 return s
103
104
105 def fuse_and_bind(s, tensor, axis=None, num_thread=None):
106 """ fuse all the axis and bind to GPU threads """
107 axis = axis or s[tensor].op.axis
108 fused = s[tensor].fuse(*axis)
109 bx, tx = s[tensor].split(fused, num_thread)
110 s[tensor].bind(bx, te.thread_axis("blockIdx.x"))
111 s[tensor].bind(tx, te.thread_axis("threadIdx.x"))
112 return bx, tx
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/tvm/topi/bifrost/dense.py b/python/tvm/topi/bifrost/dense.py
--- a/python/tvm/topi/bifrost/dense.py
+++ b/python/tvm/topi/bifrost/dense.py
@@ -23,7 +23,7 @@
from ..util import traverse_inline
[email protected]_topi_compute("dense.biforst")
[email protected]_topi_compute("dense.bifrost")
def dense(_, data, weight, bias=None, out_dtype=None):
"""Dense operator on Biforst"""
return nn.dense(data, weight, bias, out_dtype)
| {"golden_diff": "diff --git a/python/tvm/topi/bifrost/dense.py b/python/tvm/topi/bifrost/dense.py\n--- a/python/tvm/topi/bifrost/dense.py\n+++ b/python/tvm/topi/bifrost/dense.py\n@@ -23,7 +23,7 @@\n from ..util import traverse_inline\n \n \[email protected]_topi_compute(\"dense.biforst\")\[email protected]_topi_compute(\"dense.bifrost\")\n def dense(_, data, weight, bias=None, out_dtype=None):\n \"\"\"Dense operator on Biforst\"\"\"\n return nn.dense(data, weight, bias, out_dtype)\n", "issue": "[TOPI] Typo in operator key\nhttps://github.com/apache/incubator-tvm/blob/bdfefbb03f5aab96ee677ee28a166dd6ab5dbf3f/python/tvm/topi/bifrost/dense.py#L26\r\n\r\n\"biforst\" should be \"bifrost\". This bug makes the op totally unavailable in Relay.\r\n\r\nI can fix this bug if expected, but I don't know how to add a proper test.\n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n# pylint: disable=invalid-name,unused-variable\n\"\"\"dense schedule on ARM Mali Biforst GPU\"\"\"\nfrom tvm import te\nfrom tvm import autotvm\n\nfrom .. 
import nn\nfrom ..util import traverse_inline\n\n\[email protected]_topi_compute(\"dense.biforst\")\ndef dense(_, data, weight, bias=None, out_dtype=None):\n \"\"\"Dense operator on Biforst\"\"\"\n return nn.dense(data, weight, bias, out_dtype)\n\n\[email protected]_topi_schedule(\"dense.bifrost\")\ndef schedule_dense(cfg, outs):\n \"\"\"Schedule for dense operator.\n\n Parameters\n ----------\n cfg: ConfigEntity\n The config entity for this template\n outs: Array of Tensor\n The computation graph description of dense\n in the format of an array of tensors.\n\n Returns\n -------\n s: Schedule\n The computation schedule for dense.\n \"\"\"\n outs = [outs] if isinstance(outs, te.tensor.Tensor) else outs\n s = te.create_schedule([x.op for x in outs])\n\n def _callback(op):\n if op.tag == \"dense\":\n vec_size = [1, 2, 4, 8, 16]\n max_unroll = 32\n\n dense_out = op.output(0)\n output = outs[0]\n\n y, x = s[output].op.axis\n c = s[dense_out].op.reduce_axis[0]\n\n ##### space definition begin #####\n cfg.define_split(\"tile_y\", y, num_outputs=3)\n cfg.define_split(\"tile_x\", x, num_outputs=3)\n cfg.define_split(\"c_unroll\", c, num_outputs=2, max_factor=64)\n\n # fallback support\n if cfg.is_fallback:\n ref_log = autotvm.tophub.load_reference_log(\"mali\", \"rk3399\", \"dense.bifrost\")\n cfg.fallback_with_reference_log(ref_log)\n ##### space definition end #####\n\n if dense_out.op in s.outputs:\n dense_out = s.cache_write(output, \"local\")\n\n by, ty, yi = cfg[\"tile_y\"].apply(s, output, y)\n bx, tx, xi = cfg[\"tile_x\"].apply(s, output, x)\n\n s[output].bind(by, te.thread_axis(\"blockIdx.y\"))\n s[output].bind(bx, te.thread_axis(\"blockIdx.x\"))\n s[output].bind(ty, te.thread_axis(\"threadIdx.y\"))\n s[output].bind(tx, te.thread_axis(\"threadIdx.x\"))\n\n if cfg[\"tile_y\"].size[-1] < max_unroll:\n s[output].unroll(yi)\n if cfg[\"tile_x\"].size[-1] in vec_size:\n s[output].vectorize(xi)\n s[dense_out].compute_at(s[output], tx)\n\n k = s[dense_out].op.reduce_axis[0]\n y, x = s[dense_out].op.axis\n k, k_unroll = cfg[\"c_unroll\"].apply(s, dense_out, k)\n s[dense_out].reorder(k, k_unroll, y, x)\n s[dense_out].unroll(k_unroll)\n if cfg[\"tile_y\"].size[-1] < max_unroll:\n s[dense_out].unroll(y)\n if cfg[\"tile_x\"].size[-1] in vec_size:\n s[dense_out].vectorize(x)\n\n traverse_inline(s, outs[0].op, _callback)\n return s\n\n\ndef fuse_and_bind(s, tensor, axis=None, num_thread=None):\n \"\"\" fuse all the axis and bind to GPU threads \"\"\"\n axis = axis or s[tensor].op.axis\n fused = s[tensor].fuse(*axis)\n bx, tx = s[tensor].split(fused, num_thread)\n s[tensor].bind(bx, te.thread_axis(\"blockIdx.x\"))\n s[tensor].bind(tx, te.thread_axis(\"threadIdx.x\"))\n return bx, tx\n", "path": "python/tvm/topi/bifrost/dense.py"}], "after_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\n# pylint: disable=invalid-name,unused-variable\n\"\"\"dense schedule on ARM Mali Biforst GPU\"\"\"\nfrom tvm import te\nfrom tvm import autotvm\n\nfrom .. import nn\nfrom ..util import traverse_inline\n\n\[email protected]_topi_compute(\"dense.bifrost\")\ndef dense(_, data, weight, bias=None, out_dtype=None):\n \"\"\"Dense operator on Biforst\"\"\"\n return nn.dense(data, weight, bias, out_dtype)\n\n\[email protected]_topi_schedule(\"dense.bifrost\")\ndef schedule_dense(cfg, outs):\n \"\"\"Schedule for dense operator.\n\n Parameters\n ----------\n cfg: ConfigEntity\n The config entity for this template\n outs: Array of Tensor\n The computation graph description of dense\n in the format of an array of tensors.\n\n Returns\n -------\n s: Schedule\n The computation schedule for dense.\n \"\"\"\n outs = [outs] if isinstance(outs, te.tensor.Tensor) else outs\n s = te.create_schedule([x.op for x in outs])\n\n def _callback(op):\n if op.tag == \"dense\":\n vec_size = [1, 2, 4, 8, 16]\n max_unroll = 32\n\n dense_out = op.output(0)\n output = outs[0]\n\n y, x = s[output].op.axis\n c = s[dense_out].op.reduce_axis[0]\n\n ##### space definition begin #####\n cfg.define_split(\"tile_y\", y, num_outputs=3)\n cfg.define_split(\"tile_x\", x, num_outputs=3)\n cfg.define_split(\"c_unroll\", c, num_outputs=2, max_factor=64)\n\n # fallback support\n if cfg.is_fallback:\n ref_log = autotvm.tophub.load_reference_log(\"mali\", \"rk3399\", \"dense.bifrost\")\n cfg.fallback_with_reference_log(ref_log)\n ##### space definition end #####\n\n if dense_out.op in s.outputs:\n dense_out = s.cache_write(output, \"local\")\n\n by, ty, yi = cfg[\"tile_y\"].apply(s, output, y)\n bx, tx, xi = cfg[\"tile_x\"].apply(s, output, x)\n\n s[output].bind(by, te.thread_axis(\"blockIdx.y\"))\n s[output].bind(bx, te.thread_axis(\"blockIdx.x\"))\n s[output].bind(ty, te.thread_axis(\"threadIdx.y\"))\n s[output].bind(tx, te.thread_axis(\"threadIdx.x\"))\n\n if cfg[\"tile_y\"].size[-1] < max_unroll:\n s[output].unroll(yi)\n if cfg[\"tile_x\"].size[-1] in vec_size:\n s[output].vectorize(xi)\n s[dense_out].compute_at(s[output], tx)\n\n k = s[dense_out].op.reduce_axis[0]\n y, x = s[dense_out].op.axis\n k, k_unroll = cfg[\"c_unroll\"].apply(s, dense_out, k)\n s[dense_out].reorder(k, k_unroll, y, x)\n s[dense_out].unroll(k_unroll)\n if cfg[\"tile_y\"].size[-1] < max_unroll:\n s[dense_out].unroll(y)\n if cfg[\"tile_x\"].size[-1] in vec_size:\n s[dense_out].vectorize(x)\n\n traverse_inline(s, outs[0].op, _callback)\n return s\n\n\ndef fuse_and_bind(s, tensor, axis=None, num_thread=None):\n \"\"\" fuse all the axis and bind to GPU threads \"\"\"\n axis = axis or s[tensor].op.axis\n fused = s[tensor].fuse(*axis)\n bx, tx = s[tensor].split(fused, num_thread)\n s[tensor].bind(bx, te.thread_axis(\"blockIdx.x\"))\n s[tensor].bind(tx, te.thread_axis(\"threadIdx.x\"))\n return bx, tx\n", "path": "python/tvm/topi/bifrost/dense.py"}]} | 1,665 | 143 |
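Why one letter matters here: the string passed to the two decorators is the AutoTVM task name, and the schedule's fallback path looks up tuned configs under that same name, so all of these references have to agree before Relay can use the op. Condensed from the patched file (bodies elided, imports as in the file above):

```python
@autotvm.register_topi_compute("dense.bifrost")      # was "dense.biforst"
def dense(_, data, weight, bias=None, out_dtype=None):
    return nn.dense(data, weight, bias, out_dtype)

@autotvm.register_topi_schedule("dense.bifrost")
def schedule_dense(cfg, outs):
    ...
    # fallback support inside the schedule:
    # autotvm.tophub.load_reference_log("mali", "rk3399", "dense.bifrost")
```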
gh_patches_debug_20922 | rasdani/github-patches | git_diff | pystiche__pystiche-228 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MD5 hash error
Hi, I get this error when running the script given in the beginner example.
`FileExistsError: bird1.jpg with a different MD5 hash already exists in /root/.cache/pystiche. If you want to overwrite it, set overwrite=True.`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pystiche/demo.py`
Content:
```
1 import logging
2 import sys
3
4 from pystiche.data import (
5 DownloadableImage,
6 DownloadableImageCollection,
7 PixabayLicense,
8 PublicDomainLicense,
9 )
10 from pystiche.optim import OptimLogger
11
12 __all__ = ["demo_images", "demo_logger"]
13
14
15 def demo_images():
16 return DownloadableImageCollection(
17 {
18 "dancing": DownloadableImage(
19 "https://pytorch.org/tutorials/_static/img/neural-style/dancing.jpg",
20 md5="0a2df538901452d639170a2ed89815a4",
21 ),
22 "picasso": DownloadableImage(
23 "https://pytorch.org/tutorials/_static/img/neural-style/picasso.jpg",
24 md5="d1d60fc3f9d0b22d2d826c47934a37ea",
25 ),
26 "bird1": DownloadableImage(
27 "https://cdn.pixabay.com/photo/2016/01/14/11/26/bird-1139734_960_720.jpg",
28 file="bird1.jpg",
29 author="gholmz0",
30 date="09.03.2013",
31 license=PixabayLicense(),
32 md5="d42444d3cd0afa47f07066cd083d6cea",
33 ),
34 "paint": DownloadableImage(
35 "https://cdn.pixabay.com/photo/2017/07/03/20/17/abstract-2468874_960_720.jpg",
36 file="paint.jpg",
37 author="garageband",
38 date="03.07.2017",
39 license=PixabayLicense(),
40 md5="a991e222806ef49d34b172a67cf97d91",
41 ),
42 "bird2": DownloadableImage(
43 "https://cdn.pixabay.com/photo/2013/03/12/17/53/bird-92956_960_720.jpg",
44 file="bird2.jpg",
45 author="12019",
46 date="09.04.2012",
47 license=PixabayLicense(),
48 md5="dda3e1d0f93f783de823b4f91129d44e",
49 ),
50 "mosaic": DownloadableImage(
51 "https://upload.wikimedia.org/wikipedia/commons/2/23/Mosaic_ducks_Massimo.jpg",
52 file="mosaic.jpg",
53 author="Marie-Lan Nguyen",
54 date="2006",
55 license=PublicDomainLicense(),
56 md5="5b60cd1724395f7a0c21dc6dd006f8ae",
57 ),
58 }
59 )
60
61
62 def demo_logger():
63 logger = logging.getLogger("demo_logger")
64 logger.setLevel(logging.INFO)
65
66 sh = logging.StreamHandler(sys.stdout)
67 sh.setLevel(logging.INFO)
68 logger.addHandler(sh)
69
70 return OptimLogger(logger)
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pystiche/demo.py b/pystiche/demo.py
--- a/pystiche/demo.py
+++ b/pystiche/demo.py
@@ -29,7 +29,7 @@
author="gholmz0",
date="09.03.2013",
license=PixabayLicense(),
- md5="d42444d3cd0afa47f07066cd083d6cea",
+ md5="36e5fef725943a5d1d22b5048095da86",
),
"paint": DownloadableImage(
"https://cdn.pixabay.com/photo/2017/07/03/20/17/abstract-2468874_960_720.jpg",
@@ -45,7 +45,7 @@
author="12019",
date="09.04.2012",
license=PixabayLicense(),
- md5="dda3e1d0f93f783de823b4f91129d44e",
+ md5="8c5b608bd579d931e2cfe7229840fe9b",
),
"mosaic": DownloadableImage(
"https://upload.wikimedia.org/wikipedia/commons/2/23/Mosaic_ducks_Massimo.jpg",
| {"golden_diff": "diff --git a/pystiche/demo.py b/pystiche/demo.py\n--- a/pystiche/demo.py\n+++ b/pystiche/demo.py\n@@ -29,7 +29,7 @@\n author=\"gholmz0\",\n date=\"09.03.2013\",\n license=PixabayLicense(),\n- md5=\"d42444d3cd0afa47f07066cd083d6cea\",\n+ md5=\"36e5fef725943a5d1d22b5048095da86\",\n ),\n \"paint\": DownloadableImage(\n \"https://cdn.pixabay.com/photo/2017/07/03/20/17/abstract-2468874_960_720.jpg\",\n@@ -45,7 +45,7 @@\n author=\"12019\",\n date=\"09.04.2012\",\n license=PixabayLicense(),\n- md5=\"dda3e1d0f93f783de823b4f91129d44e\",\n+ md5=\"8c5b608bd579d931e2cfe7229840fe9b\",\n ),\n \"mosaic\": DownloadableImage(\n \"https://upload.wikimedia.org/wikipedia/commons/2/23/Mosaic_ducks_Massimo.jpg\",\n", "issue": "MD5 hash error\nHi, I get this error when running the script given in the example for beginner.\r\n\r\n`FileExistsError: bird1.jpg with a different MD5 hash already exists in /root/.cache/pystiche. If you want to overwrite it, set overwrite=True.`\n", "before_files": [{"content": "import logging\nimport sys\n\nfrom pystiche.data import (\n DownloadableImage,\n DownloadableImageCollection,\n PixabayLicense,\n PublicDomainLicense,\n)\nfrom pystiche.optim import OptimLogger\n\n__all__ = [\"demo_images\", \"demo_logger\"]\n\n\ndef demo_images():\n return DownloadableImageCollection(\n {\n \"dancing\": DownloadableImage(\n \"https://pytorch.org/tutorials/_static/img/neural-style/dancing.jpg\",\n md5=\"0a2df538901452d639170a2ed89815a4\",\n ),\n \"picasso\": DownloadableImage(\n \"https://pytorch.org/tutorials/_static/img/neural-style/picasso.jpg\",\n md5=\"d1d60fc3f9d0b22d2d826c47934a37ea\",\n ),\n \"bird1\": DownloadableImage(\n \"https://cdn.pixabay.com/photo/2016/01/14/11/26/bird-1139734_960_720.jpg\",\n file=\"bird1.jpg\",\n author=\"gholmz0\",\n date=\"09.03.2013\",\n license=PixabayLicense(),\n md5=\"d42444d3cd0afa47f07066cd083d6cea\",\n ),\n \"paint\": DownloadableImage(\n \"https://cdn.pixabay.com/photo/2017/07/03/20/17/abstract-2468874_960_720.jpg\",\n file=\"paint.jpg\",\n author=\"garageband\",\n date=\"03.07.2017\",\n license=PixabayLicense(),\n md5=\"a991e222806ef49d34b172a67cf97d91\",\n ),\n \"bird2\": DownloadableImage(\n \"https://cdn.pixabay.com/photo/2013/03/12/17/53/bird-92956_960_720.jpg\",\n file=\"bird2.jpg\",\n author=\"12019\",\n date=\"09.04.2012\",\n license=PixabayLicense(),\n md5=\"dda3e1d0f93f783de823b4f91129d44e\",\n ),\n \"mosaic\": DownloadableImage(\n \"https://upload.wikimedia.org/wikipedia/commons/2/23/Mosaic_ducks_Massimo.jpg\",\n file=\"mosaic.jpg\",\n author=\"Marie-Lan Nguyen\",\n date=\"2006\",\n license=PublicDomainLicense(),\n md5=\"5b60cd1724395f7a0c21dc6dd006f8ae\",\n ),\n }\n )\n\n\ndef demo_logger():\n logger = logging.getLogger(\"demo_logger\")\n logger.setLevel(logging.INFO)\n\n sh = logging.StreamHandler(sys.stdout)\n sh.setLevel(logging.INFO)\n logger.addHandler(sh)\n\n return OptimLogger(logger)\n", "path": "pystiche/demo.py"}], "after_files": [{"content": "import logging\nimport sys\n\nfrom pystiche.data import (\n DownloadableImage,\n DownloadableImageCollection,\n PixabayLicense,\n PublicDomainLicense,\n)\nfrom pystiche.optim import OptimLogger\n\n__all__ = [\"demo_images\", \"demo_logger\"]\n\n\ndef demo_images():\n return DownloadableImageCollection(\n {\n \"dancing\": DownloadableImage(\n \"https://pytorch.org/tutorials/_static/img/neural-style/dancing.jpg\",\n md5=\"0a2df538901452d639170a2ed89815a4\",\n ),\n \"picasso\": DownloadableImage(\n \"https://pytorch.org/tutorials/_static/img/neural-style/picasso.jpg\",\n 
md5=\"d1d60fc3f9d0b22d2d826c47934a37ea\",\n ),\n \"bird1\": DownloadableImage(\n \"https://cdn.pixabay.com/photo/2016/01/14/11/26/bird-1139734_960_720.jpg\",\n file=\"bird1.jpg\",\n author=\"gholmz0\",\n date=\"09.03.2013\",\n license=PixabayLicense(),\n md5=\"36e5fef725943a5d1d22b5048095da86\",\n ),\n \"paint\": DownloadableImage(\n \"https://cdn.pixabay.com/photo/2017/07/03/20/17/abstract-2468874_960_720.jpg\",\n file=\"paint.jpg\",\n author=\"garageband\",\n date=\"03.07.2017\",\n license=PixabayLicense(),\n md5=\"a991e222806ef49d34b172a67cf97d91\",\n ),\n \"bird2\": DownloadableImage(\n \"https://cdn.pixabay.com/photo/2013/03/12/17/53/bird-92956_960_720.jpg\",\n file=\"bird2.jpg\",\n author=\"12019\",\n date=\"09.04.2012\",\n license=PixabayLicense(),\n md5=\"8c5b608bd579d931e2cfe7229840fe9b\",\n ),\n \"mosaic\": DownloadableImage(\n \"https://upload.wikimedia.org/wikipedia/commons/2/23/Mosaic_ducks_Massimo.jpg\",\n file=\"mosaic.jpg\",\n author=\"Marie-Lan Nguyen\",\n date=\"2006\",\n license=PublicDomainLicense(),\n md5=\"5b60cd1724395f7a0c21dc6dd006f8ae\",\n ),\n }\n )\n\n\ndef demo_logger():\n logger = logging.getLogger(\"demo_logger\")\n logger.setLevel(logging.INFO)\n\n sh = logging.StreamHandler(sys.stdout)\n sh.setLevel(logging.INFO)\n logger.addHandler(sh)\n\n return OptimLogger(logger)\n", "path": "pystiche/demo.py"}]} | 1,184 | 345 |
gh_patches_debug_28865 | rasdani/github-patches | git_diff | bokeh__bokeh-2790 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Create example of using Hover tool to display custom images
It would be nice to show how someone can use the hover tool to display custom images via a URL/URI when hovering over a region of interest. It would allow users to embed an additional dimension into plots.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sphinx/source/docs/user_guide/source_examples/tools_hover_custom_tooltip.py`
Content:
```
1 from bokeh.plotting import figure, output_file, show, ColumnDataSource
2 from bokeh.models import HoverTool
3
4 output_file("toolbar.html")
5
6 source = ColumnDataSource(
7 data=dict(
8 x=[1, 2, 3, 4, 5],
9 y=[2, 5, 8, 2, 7],
10 desc=['A', 'b', 'C', 'd', 'E'],
11 )
12 )
13
14 hover = HoverTool(
15 tooltips="""
16 <div>
17 <span style="font-size: 17px; font-weight: bold;">@desc</span>
18 <span style="font-size: 15px; color: #966;">[$index]</span>
19 </div>
20 <div>
21 <span style="font-size: 15px;">Location</span>
22 <span style="font-size: 10px; color: #696;">($x, $y)</span>
23 </div>
24 """
25 )
26
27 p = figure(plot_width=400, plot_height=400, tools=[hover],
28 title="Mouse over the dots")
29
30 p.circle('x', 'y', size=20, source=source)
31
32 show(p)
33
34
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sphinx/source/docs/user_guide/source_examples/tools_hover_custom_tooltip.py b/sphinx/source/docs/user_guide/source_examples/tools_hover_custom_tooltip.py
--- a/sphinx/source/docs/user_guide/source_examples/tools_hover_custom_tooltip.py
+++ b/sphinx/source/docs/user_guide/source_examples/tools_hover_custom_tooltip.py
@@ -8,18 +8,34 @@
x=[1, 2, 3, 4, 5],
y=[2, 5, 8, 2, 7],
desc=['A', 'b', 'C', 'd', 'E'],
+ imgs = [
+ 'http://bokeh.pydata.org/static/snake.jpg',
+ 'http://bokeh.pydata.org/static/snake2.png',
+ 'http://bokeh.pydata.org/static/snake3D.png',
+ 'http://bokeh.pydata.org/static/snake4_TheRevenge.png',
+ 'http://bokeh.pydata.org/static/snakebite.jpg'
+ ]
)
)
hover = HoverTool(
tooltips="""
<div>
- <span style="font-size: 17px; font-weight: bold;">@desc</span>
- <span style="font-size: 15px; color: #966;">[$index]</span>
- </div>
- <div>
- <span style="font-size: 15px;">Location</span>
- <span style="font-size: 10px; color: #696;">($x, $y)</span>
+ <div>
+ <img
+ src="@imgs" height="42" alt="@imgs" width="42"
+ style="float: left; margin: 0px 15px 15px 0px;"
+ border="2"
+ ></img>
+ </div>
+ <div>
+ <span style="font-size: 17px; font-weight: bold;">@desc</span>
+ <span style="font-size: 15px; color: #966;">[$index]</span>
+ </div>
+ <div>
+ <span style="font-size: 15px;">Location</span>
+ <span style="font-size: 10px; color: #696;">($x, $y)</span>
+ </div>
</div>
"""
)
@@ -30,5 +46,3 @@
p.circle('x', 'y', size=20, source=source)
show(p)
-
-
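
The patch above resolves the request by adding an `imgs` column of image URLs to the `ColumnDataSource` and referencing it as `@imgs` inside an `<img>` tag in the custom HTML tooltip. A minimal sketch of the same idea, trimmed to the essentials (the image URLs are the ones used in the diff; any reachable URLs would work):

```python
# Minimal sketch: any column of the source can be referenced as @column inside
# a custom HTML tooltip, including an image URL handed to an <img> tag.
from bokeh.plotting import figure, output_file, show, ColumnDataSource
from bokeh.models import HoverTool

source = ColumnDataSource(data=dict(
    x=[1, 2], y=[2, 5],
    imgs=['http://bokeh.pydata.org/static/snake.jpg',
          'http://bokeh.pydata.org/static/snake2.png'],
))

hover = HoverTool(tooltips="""
    <div>
        <img src="@imgs" height="42" width="42"
             style="float: left; margin: 0px 15px 15px 0px;"></img>
        <span style="font-size: 15px;">Location ($x, $y)</span>
    </div>
""")

p = figure(plot_width=400, plot_height=400, tools=[hover], title="Mouse over the dots")
p.circle('x', 'y', size=20, source=source)

output_file("hover_images.html")
show(p)
```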
| {"golden_diff": "diff --git a/sphinx/source/docs/user_guide/source_examples/tools_hover_custom_tooltip.py b/sphinx/source/docs/user_guide/source_examples/tools_hover_custom_tooltip.py\n--- a/sphinx/source/docs/user_guide/source_examples/tools_hover_custom_tooltip.py\n+++ b/sphinx/source/docs/user_guide/source_examples/tools_hover_custom_tooltip.py\n@@ -8,18 +8,34 @@\n x=[1, 2, 3, 4, 5],\n y=[2, 5, 8, 2, 7],\n desc=['A', 'b', 'C', 'd', 'E'],\n+ imgs = [\n+ 'http://bokeh.pydata.org/static/snake.jpg',\n+ 'http://bokeh.pydata.org/static/snake2.png',\n+ 'http://bokeh.pydata.org/static/snake3D.png',\n+ 'http://bokeh.pydata.org/static/snake4_TheRevenge.png',\n+ 'http://bokeh.pydata.org/static/snakebite.jpg'\n+ ]\n )\n )\n \n hover = HoverTool(\n tooltips=\"\"\"\n <div>\n- <span style=\"font-size: 17px; font-weight: bold;\">@desc</span>\n- <span style=\"font-size: 15px; color: #966;\">[$index]</span>\n- </div>\n- <div>\n- <span style=\"font-size: 15px;\">Location</span>\n- <span style=\"font-size: 10px; color: #696;\">($x, $y)</span>\n+ <div>\n+ <img\n+ src=\"@imgs\" height=\"42\" alt=\"@imgs\" width=\"42\"\n+ style=\"float: left; margin: 0px 15px 15px 0px;\"\n+ border=\"2\"\n+ ></img>\n+ </div>\n+ <div>\n+ <span style=\"font-size: 17px; font-weight: bold;\">@desc</span>\n+ <span style=\"font-size: 15px; color: #966;\">[$index]</span>\n+ </div>\n+ <div>\n+ <span style=\"font-size: 15px;\">Location</span>\n+ <span style=\"font-size: 10px; color: #696;\">($x, $y)</span>\n+ </div>\n </div>\n \"\"\"\n )\n@@ -30,5 +46,3 @@\n p.circle('x', 'y', size=20, source=source)\n \n show(p)\n-\n-\n", "issue": "Create example of using Hover tool to display custom images\nIt would be nice to show how someone can use the hovertool to display custom images using URL/URI upon hovering over a region of interest. 
It would allow users to embed an additional dimension into plots.\n\n", "before_files": [{"content": "from bokeh.plotting import figure, output_file, show, ColumnDataSource\nfrom bokeh.models import HoverTool\n\noutput_file(\"toolbar.html\")\n\nsource = ColumnDataSource(\n data=dict(\n x=[1, 2, 3, 4, 5],\n y=[2, 5, 8, 2, 7],\n desc=['A', 'b', 'C', 'd', 'E'],\n )\n )\n\nhover = HoverTool(\n tooltips=\"\"\"\n <div>\n <span style=\"font-size: 17px; font-weight: bold;\">@desc</span>\n <span style=\"font-size: 15px; color: #966;\">[$index]</span>\n </div>\n <div>\n <span style=\"font-size: 15px;\">Location</span>\n <span style=\"font-size: 10px; color: #696;\">($x, $y)</span>\n </div>\n \"\"\"\n )\n\np = figure(plot_width=400, plot_height=400, tools=[hover],\n title=\"Mouse over the dots\")\n\np.circle('x', 'y', size=20, source=source)\n\nshow(p)\n\n \n", "path": "sphinx/source/docs/user_guide/source_examples/tools_hover_custom_tooltip.py"}], "after_files": [{"content": "from bokeh.plotting import figure, output_file, show, ColumnDataSource\nfrom bokeh.models import HoverTool\n\noutput_file(\"toolbar.html\")\n\nsource = ColumnDataSource(\n data=dict(\n x=[1, 2, 3, 4, 5],\n y=[2, 5, 8, 2, 7],\n desc=['A', 'b', 'C', 'd', 'E'],\n imgs = [\n 'http://bokeh.pydata.org/static/snake.jpg',\n 'http://bokeh.pydata.org/static/snake2.png',\n 'http://bokeh.pydata.org/static/snake3D.png',\n 'http://bokeh.pydata.org/static/snake4_TheRevenge.png',\n 'http://bokeh.pydata.org/static/snakebite.jpg'\n ]\n )\n )\n\nhover = HoverTool(\n tooltips=\"\"\"\n <div>\n <div>\n <img\n src=\"@imgs\" height=\"42\" alt=\"@imgs\" width=\"42\"\n style=\"float: left; margin: 0px 15px 15px 0px;\"\n border=\"2\"\n ></img>\n </div>\n <div>\n <span style=\"font-size: 17px; font-weight: bold;\">@desc</span>\n <span style=\"font-size: 15px; color: #966;\">[$index]</span>\n </div>\n <div>\n <span style=\"font-size: 15px;\">Location</span>\n <span style=\"font-size: 10px; color: #696;\">($x, $y)</span>\n </div>\n </div>\n \"\"\"\n )\n\np = figure(plot_width=400, plot_height=400, tools=[hover],\n title=\"Mouse over the dots\")\n\np.circle('x', 'y', size=20, source=source)\n\nshow(p)\n", "path": "sphinx/source/docs/user_guide/source_examples/tools_hover_custom_tooltip.py"}]} | 654 | 575 |
gh_patches_debug_25146 | rasdani/github-patches | git_diff | qtile__qtile-472 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CPU usage leak
Running qtile master. Over a timespan of about 24 hours, qtile's CPU usage rises from trivially low to nearly 100%. In the worst stages, qtile is non-responsive to keyboard shortcuts, forcing me to kill it externally.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libqtile/widget/battery.py`
Content:
```
1 import cairo
2 import os
3 from libqtile import bar
4 import base
5
6 BAT_DIR = '/sys/class/power_supply'
7 CHARGED = 'Full'
8 CHARGING = 'Charging'
9 DISCHARGING = 'Discharging'
10 UNKNOWN = 'Unknown'
11
12 BATTERY_INFO_FILES = {
13 'energy_now_file': ['energy_now', 'charge_now'],
14 'energy_full_file': ['energy_full', 'charge_full'],
15 'power_now_file': ['power_now', 'current_now'],
16 'status_file': ['status'],
17 }
18
19
20 def default_icon_path():
21 # default icons are in libqtile/resources/battery-icons
22 root = os.sep.join(os.path.abspath(__file__).split(os.sep)[:-2])
23 return os.path.join(root, 'resources', 'battery-icons')
24
25
26 class _Battery(base._TextBox):
27 ''' Base battery class '''
28
29 filenames = {}
30
31 defaults = [
32 ('battery_name', 'BAT0', 'ACPI name of a battery, usually BAT0'),
33 (
34 'status_file',
35 'status',
36 'Name of status file in'
37 ' /sys/class/power_supply/battery_name'
38 ),
39 (
40 'energy_now_file',
41 None,
42 'Name of file with the '
43 'current energy in /sys/class/power_supply/battery_name'
44 ),
45 (
46 'energy_full_file',
47 None,
48 'Name of file with the maximum'
49 ' energy in /sys/class/power_supply/battery_name'
50 ),
51 (
52 'power_now_file',
53 None,
54 'Name of file with the current'
55 ' power draw in /sys/class/power_supply/battery_name'
56 ),
57 ('update_delay', 1, 'The delay in seconds between updates'),
58 ]
59
60 def __init__(self, **config):
61 base._TextBox.__init__(self, "BAT", bar.CALCULATED, **config)
62 self.add_defaults(_Battery.defaults)
63
64 def _load_file(self, name):
65 try:
66 path = os.path.join(BAT_DIR, self.battery_name, name)
67 with open(path, 'r') as f:
68 return f.read().strip()
69 except IOError:
70 if name == 'current_now':
71 return 0
72 return False
73 except Exception:
74 self.log.exception("Failed to get %s" % name)
75
76 def _get_param(self, name):
77 if name in self.filenames:
78 return self._load_file(self.filenames[name])
79 else:
80 # Don't have the file name cached, figure it out
81 file_list = BATTERY_INFO_FILES.get(name, [])
82 if getattr(self, name, None):
83 # If a file is manually specified, check it first
84 file_list.insert(0, getattr(self, name))
85
86 # Iterate over the possibilities, and return the first valid value
87 for file in file_list:
88 value = self._load_file(file)
89 if not (value in (False, None)):
90 self.filenames[name] = file
91 return value
92
93 # If we made it this far, we don't have a valid file. Just return None.
94 return None
95
96 def _get_info(self):
97 try:
98 info = {
99 'stat': self._get_param('status_file'),
100 'now': float(self._get_param('energy_now_file')),
101 'full': float(self._get_param('energy_full_file')),
102 'power': float(self._get_param('power_now_file')),
103 }
104 except TypeError:
105 return False
106 return info
107
108
109 class Battery(_Battery):
110 """
111 A simple but flexible text-based battery widget.
112 """
113 defaults = [
114 ('low_foreground', 'FF0000', 'font color when battery is low'),
115 (
116 'format',
117 '{char} {percent:2.0%} {hour:d}:{min:02d}',
118 'Display format'
119 ),
120 ('charge_char', '^', 'Character to indicate the battery is charging'),
121 (
122 'discharge_char',
123 'V',
124 'Character to indicate the battery'
125 ' is discharging'
126 ),
127 (
128 'low_percentage',
129 0.10,
130 "0 < x < 1 at which to indicate battery is low with low_foreground"
131 ),
132 ('hide_threshold', None, 'Hide the text when there is enough energy'),
133 ]
134
135 def __init__(self, **config):
136 _Battery.__init__(self, **config)
137 self.add_defaults(Battery.defaults)
138 self.timeout_add(self.update_delay, self.update)
139 self.update()
140
141 def _get_text(self):
142 info = self._get_info()
143 if info is False:
144 return 'Error'
145
146 # Set the charging character
147 try:
148 # hide the text when it's higher than threshold, but still
149 # display `full` when the battery is fully charged.
150 if self.hide_threshold and \
151 info['now'] / info['full'] * 100.0 >= \
152 self.hide_threshold and \
153 info['stat'] != CHARGED:
154 return ''
155 elif info['stat'] == DISCHARGING:
156 char = self.discharge_char
157 time = info['now'] / info['power']
158 elif info['stat'] == CHARGING:
159 char = self.charge_char
160 time = (info['full'] - info['now']) / info['power']
161 else:
162 return 'Full'
163 except ZeroDivisionError:
164 time = -1
165
166 # Calculate the battery percentage and time left
167 if time >= 0:
168 hour = int(time)
169 min = int(time * 60) % 60
170 else:
171 hour = -1
172 min = -1
173 percent = info['now'] / info['full']
174 if info['stat'] == DISCHARGING and percent < self.low_percentage:
175 self.layout.colour = self.low_foreground
176 else:
177 self.layout.colour = self.foreground
178
179 return self.format.format(
180 char=char,
181 percent=percent,
182 hour=hour,
183 min=min
184 )
185
186 def update(self):
187 if self.configured:
188 ntext = self._get_text()
189 if ntext != self.text:
190 self.text = ntext
191 self.bar.draw()
192 return True
193
194
195 class BatteryIcon(_Battery):
196 ''' Battery life indicator widget '''
197
198 defaults = [
199 ('theme_path', default_icon_path(), 'Path of the icons'),
200 ('custom_icons', {}, 'dict containing key->filename icon map'),
201 ]
202
203 def __init__(self, **config):
204 _Battery.__init__(self, **config)
205 self.add_defaults(BatteryIcon.defaults)
206
207 if self.theme_path:
208 self.width_type = bar.STATIC
209 self.width = 0
210 self.surfaces = {}
211 self.current_icon = 'battery-missing'
212 self.icons = dict([(x, '{0}.png'.format(x)) for x in (
213 'battery-missing',
214 'battery-caution',
215 'battery-low',
216 'battery-good',
217 'battery-full',
218 'battery-caution-charging',
219 'battery-low-charging',
220 'battery-good-charging',
221 'battery-full-charging',
222 'battery-full-charged',
223 )])
224 self.icons.update(self.custom_icons)
225 self.timeout_add(self.update_delay, self.update)
226
227 def _configure(self, qtile, bar):
228 base._TextBox._configure(self, qtile, bar)
229 self.setup_images()
230
231 def _get_icon_key(self):
232 key = 'battery'
233 info = self._get_info()
234 if info is False or not info.get('full'):
235 key += '-missing'
236 else:
237 percent = info['now'] / info['full']
238 if percent < .2:
239 key += '-caution'
240 elif percent < .4:
241 key += '-low'
242 elif percent < .8:
243 key += '-good'
244 else:
245 key += '-full'
246
247 if info['stat'] == CHARGING:
248 key += '-charging'
249 elif info['stat'] == CHARGED:
250 key += '-charged'
251 return key
252
253 def update(self):
254 if self.configured:
255 icon = self._get_icon_key()
256 if icon != self.current_icon:
257 self.current_icon = icon
258 self.draw()
259 return True
260
261 def draw(self):
262 if self.theme_path:
263 self.drawer.clear(self.background or self.bar.background)
264 self.drawer.ctx.set_source(self.surfaces[self.current_icon])
265 self.drawer.ctx.paint()
266 self.drawer.draw(self.offset, self.width)
267 else:
268 self.text = self.current_icon[8:]
269 base._TextBox.draw(self)
270
271 def setup_images(self):
272 for key, name in self.icons.iteritems():
273 try:
274 path = os.path.join(self.theme_path, name)
275 img = cairo.ImageSurface.create_from_png(path)
276 except cairo.Error:
277 self.theme_path = None
278 self.qtile.log.warning('Battery Icon switching to text mode')
279 return
280 input_width = img.get_width()
281 input_height = img.get_height()
282
283 sp = input_height / float(self.bar.height - 1)
284
285 width = input_width / sp
286 if width > self.width:
287 self.width = int(width) + self.actual_padding * 2
288
289 imgpat = cairo.SurfacePattern(img)
290
291 scaler = cairo.Matrix()
292
293 scaler.scale(sp, sp)
294 scaler.translate(self.actual_padding * -1, 0)
295 imgpat.set_matrix(scaler)
296
297 imgpat.set_filter(cairo.FILTER_BEST)
298 self.surfaces[key] = imgpat
299
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libqtile/widget/battery.py b/libqtile/widget/battery.py
--- a/libqtile/widget/battery.py
+++ b/libqtile/widget/battery.py
@@ -74,11 +74,14 @@
self.log.exception("Failed to get %s" % name)
def _get_param(self, name):
- if name in self.filenames:
+ if name in self.filenames and self.filenames[name]:
return self._load_file(self.filenames[name])
- else:
+ elif name not in self.filenames:
# Don't have the file name cached, figure it out
- file_list = BATTERY_INFO_FILES.get(name, [])
+
+ # Don't modify the global list! Copy with [:]
+ file_list = BATTERY_INFO_FILES.get(name, [])[:]
+
if getattr(self, name, None):
# If a file is manually specified, check it first
file_list.insert(0, getattr(self, name))
@@ -90,7 +93,10 @@
self.filenames[name] = file
return value
- # If we made it this far, we don't have a valid file. Just return None.
+ # If we made it this far, we don't have a valid file.
+ # Set it to None to avoid trying the next time.
+ self.filenames[name] = None
+
return None
def _get_info(self):
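
The leak comes from `BATTERY_INFO_FILES.get(name, [])` handing back the module-level list itself: whenever a parameter is looked up with a manual override (or keeps failing to resolve, so it is never cached), `file_list.insert(0, ...)` mutates that shared list, the candidate list grows by one entry per update, and each one-second tick probes more files than the last. The patch copies the list with `[:]` and additionally caches `None` for parameters that have no valid file so they are not re-probed forever. A stripped-down, hypothetical illustration of the aliasing pitfall (not qtile code):

```python
# Hypothetical illustration of the shared-list mutation fixed above.
BATTERY_INFO_FILES = {'power_now_file': ['power_now', 'current_now']}

def buggy_candidates(name, override=None):
    file_list = BATTERY_INFO_FILES.get(name, [])    # aliases the module-level list
    if override:
        file_list.insert(0, override)               # mutates the shared list on every call!
    return file_list

def fixed_candidates(name, override=None):
    file_list = BATTERY_INFO_FILES.get(name, [])[:] # copy first; the shared list stays intact
    if override:
        file_list.insert(0, override)
    return file_list

for _ in range(3):
    buggy_candidates('power_now_file', 'current_now')
print(BATTERY_INFO_FILES['power_now_file'])
# ['current_now', 'current_now', 'current_now', 'power_now', 'current_now']
```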
| {"golden_diff": "diff --git a/libqtile/widget/battery.py b/libqtile/widget/battery.py\n--- a/libqtile/widget/battery.py\n+++ b/libqtile/widget/battery.py\n@@ -74,11 +74,14 @@\n self.log.exception(\"Failed to get %s\" % name)\n \n def _get_param(self, name):\n- if name in self.filenames:\n+ if name in self.filenames and self.filenames[name]:\n return self._load_file(self.filenames[name])\n- else:\n+ elif name not in self.filenames:\n # Don't have the file name cached, figure it out\n- file_list = BATTERY_INFO_FILES.get(name, [])\n+\n+ # Don't modify the global list! Copy with [:]\n+ file_list = BATTERY_INFO_FILES.get(name, [])[:]\n+\n if getattr(self, name, None):\n # If a file is manually specified, check it first\n file_list.insert(0, getattr(self, name))\n@@ -90,7 +93,10 @@\n self.filenames[name] = file\n return value\n \n- # If we made it this far, we don't have a valid file. Just return None.\n+ # If we made it this far, we don't have a valid file.\n+ # Set it to None to avoid trying the next time.\n+ self.filenames[name] = None\n+\n return None\n \n def _get_info(self):\n", "issue": "CPU usage leak\nRunning qtile master. Over a timespan of about 24 hours, qtile's CPU usage rises from trivially low to nearly 100%. In the worst stages, qtile is non-responsive to keyboard shortcuts, forcing me to kill it externally.\n\n", "before_files": [{"content": "import cairo\nimport os\nfrom libqtile import bar\nimport base\n\nBAT_DIR = '/sys/class/power_supply'\nCHARGED = 'Full'\nCHARGING = 'Charging'\nDISCHARGING = 'Discharging'\nUNKNOWN = 'Unknown'\n\nBATTERY_INFO_FILES = {\n 'energy_now_file': ['energy_now', 'charge_now'],\n 'energy_full_file': ['energy_full', 'charge_full'],\n 'power_now_file': ['power_now', 'current_now'],\n 'status_file': ['status'],\n}\n\n\ndef default_icon_path():\n # default icons are in libqtile/resources/battery-icons\n root = os.sep.join(os.path.abspath(__file__).split(os.sep)[:-2])\n return os.path.join(root, 'resources', 'battery-icons')\n\n\nclass _Battery(base._TextBox):\n ''' Base battery class '''\n\n filenames = {}\n\n defaults = [\n ('battery_name', 'BAT0', 'ACPI name of a battery, usually BAT0'),\n (\n 'status_file',\n 'status',\n 'Name of status file in'\n ' /sys/class/power_supply/battery_name'\n ),\n (\n 'energy_now_file',\n None,\n 'Name of file with the '\n 'current energy in /sys/class/power_supply/battery_name'\n ),\n (\n 'energy_full_file',\n None,\n 'Name of file with the maximum'\n ' energy in /sys/class/power_supply/battery_name'\n ),\n (\n 'power_now_file',\n None,\n 'Name of file with the current'\n ' power draw in /sys/class/power_supply/battery_name'\n ),\n ('update_delay', 1, 'The delay in seconds between updates'),\n ]\n\n def __init__(self, **config):\n base._TextBox.__init__(self, \"BAT\", bar.CALCULATED, **config)\n self.add_defaults(_Battery.defaults)\n\n def _load_file(self, name):\n try:\n path = os.path.join(BAT_DIR, self.battery_name, name)\n with open(path, 'r') as f:\n return f.read().strip()\n except IOError:\n if name == 'current_now':\n return 0\n return False\n except Exception:\n self.log.exception(\"Failed to get %s\" % name)\n\n def _get_param(self, name):\n if name in self.filenames:\n return self._load_file(self.filenames[name])\n else:\n # Don't have the file name cached, figure it out\n file_list = BATTERY_INFO_FILES.get(name, [])\n if getattr(self, name, None):\n # If a file is manually specified, check it first\n file_list.insert(0, getattr(self, name))\n\n # Iterate over the possibilities, and return the first valid 
value\n for file in file_list:\n value = self._load_file(file)\n if not (value in (False, None)):\n self.filenames[name] = file\n return value\n\n # If we made it this far, we don't have a valid file. Just return None.\n return None\n\n def _get_info(self):\n try:\n info = {\n 'stat': self._get_param('status_file'),\n 'now': float(self._get_param('energy_now_file')),\n 'full': float(self._get_param('energy_full_file')),\n 'power': float(self._get_param('power_now_file')),\n }\n except TypeError:\n return False\n return info\n\n\nclass Battery(_Battery):\n \"\"\"\n A simple but flexible text-based battery widget.\n \"\"\"\n defaults = [\n ('low_foreground', 'FF0000', 'font color when battery is low'),\n (\n 'format',\n '{char} {percent:2.0%} {hour:d}:{min:02d}',\n 'Display format'\n ),\n ('charge_char', '^', 'Character to indicate the battery is charging'),\n (\n 'discharge_char',\n 'V',\n 'Character to indicate the battery'\n ' is discharging'\n ),\n (\n 'low_percentage',\n 0.10,\n \"0 < x < 1 at which to indicate battery is low with low_foreground\"\n ),\n ('hide_threshold', None, 'Hide the text when there is enough energy'),\n ]\n\n def __init__(self, **config):\n _Battery.__init__(self, **config)\n self.add_defaults(Battery.defaults)\n self.timeout_add(self.update_delay, self.update)\n self.update()\n\n def _get_text(self):\n info = self._get_info()\n if info is False:\n return 'Error'\n\n # Set the charging character\n try:\n # hide the text when it's higher than threshold, but still\n # display `full` when the battery is fully charged.\n if self.hide_threshold and \\\n info['now'] / info['full'] * 100.0 >= \\\n self.hide_threshold and \\\n info['stat'] != CHARGED:\n return ''\n elif info['stat'] == DISCHARGING:\n char = self.discharge_char\n time = info['now'] / info['power']\n elif info['stat'] == CHARGING:\n char = self.charge_char\n time = (info['full'] - info['now']) / info['power']\n else:\n return 'Full'\n except ZeroDivisionError:\n time = -1\n\n # Calculate the battery percentage and time left\n if time >= 0:\n hour = int(time)\n min = int(time * 60) % 60\n else:\n hour = -1\n min = -1\n percent = info['now'] / info['full']\n if info['stat'] == DISCHARGING and percent < self.low_percentage:\n self.layout.colour = self.low_foreground\n else:\n self.layout.colour = self.foreground\n\n return self.format.format(\n char=char,\n percent=percent,\n hour=hour,\n min=min\n )\n\n def update(self):\n if self.configured:\n ntext = self._get_text()\n if ntext != self.text:\n self.text = ntext\n self.bar.draw()\n return True\n\n\nclass BatteryIcon(_Battery):\n ''' Battery life indicator widget '''\n\n defaults = [\n ('theme_path', default_icon_path(), 'Path of the icons'),\n ('custom_icons', {}, 'dict containing key->filename icon map'),\n ]\n\n def __init__(self, **config):\n _Battery.__init__(self, **config)\n self.add_defaults(BatteryIcon.defaults)\n\n if self.theme_path:\n self.width_type = bar.STATIC\n self.width = 0\n self.surfaces = {}\n self.current_icon = 'battery-missing'\n self.icons = dict([(x, '{0}.png'.format(x)) for x in (\n 'battery-missing',\n 'battery-caution',\n 'battery-low',\n 'battery-good',\n 'battery-full',\n 'battery-caution-charging',\n 'battery-low-charging',\n 'battery-good-charging',\n 'battery-full-charging',\n 'battery-full-charged',\n )])\n self.icons.update(self.custom_icons)\n self.timeout_add(self.update_delay, self.update)\n\n def _configure(self, qtile, bar):\n base._TextBox._configure(self, qtile, bar)\n self.setup_images()\n\n def 
_get_icon_key(self):\n key = 'battery'\n info = self._get_info()\n if info is False or not info.get('full'):\n key += '-missing'\n else:\n percent = info['now'] / info['full']\n if percent < .2:\n key += '-caution'\n elif percent < .4:\n key += '-low'\n elif percent < .8:\n key += '-good'\n else:\n key += '-full'\n\n if info['stat'] == CHARGING:\n key += '-charging'\n elif info['stat'] == CHARGED:\n key += '-charged'\n return key\n\n def update(self):\n if self.configured:\n icon = self._get_icon_key()\n if icon != self.current_icon:\n self.current_icon = icon\n self.draw()\n return True\n\n def draw(self):\n if self.theme_path:\n self.drawer.clear(self.background or self.bar.background)\n self.drawer.ctx.set_source(self.surfaces[self.current_icon])\n self.drawer.ctx.paint()\n self.drawer.draw(self.offset, self.width)\n else:\n self.text = self.current_icon[8:]\n base._TextBox.draw(self)\n\n def setup_images(self):\n for key, name in self.icons.iteritems():\n try:\n path = os.path.join(self.theme_path, name)\n img = cairo.ImageSurface.create_from_png(path)\n except cairo.Error:\n self.theme_path = None\n self.qtile.log.warning('Battery Icon switching to text mode')\n return\n input_width = img.get_width()\n input_height = img.get_height()\n\n sp = input_height / float(self.bar.height - 1)\n\n width = input_width / sp\n if width > self.width:\n self.width = int(width) + self.actual_padding * 2\n\n imgpat = cairo.SurfacePattern(img)\n\n scaler = cairo.Matrix()\n\n scaler.scale(sp, sp)\n scaler.translate(self.actual_padding * -1, 0)\n imgpat.set_matrix(scaler)\n\n imgpat.set_filter(cairo.FILTER_BEST)\n self.surfaces[key] = imgpat\n", "path": "libqtile/widget/battery.py"}], "after_files": [{"content": "import cairo\nimport os\nfrom libqtile import bar\nimport base\n\nBAT_DIR = '/sys/class/power_supply'\nCHARGED = 'Full'\nCHARGING = 'Charging'\nDISCHARGING = 'Discharging'\nUNKNOWN = 'Unknown'\n\nBATTERY_INFO_FILES = {\n 'energy_now_file': ['energy_now', 'charge_now'],\n 'energy_full_file': ['energy_full', 'charge_full'],\n 'power_now_file': ['power_now', 'current_now'],\n 'status_file': ['status'],\n}\n\n\ndef default_icon_path():\n # default icons are in libqtile/resources/battery-icons\n root = os.sep.join(os.path.abspath(__file__).split(os.sep)[:-2])\n return os.path.join(root, 'resources', 'battery-icons')\n\n\nclass _Battery(base._TextBox):\n ''' Base battery class '''\n\n filenames = {}\n\n defaults = [\n ('battery_name', 'BAT0', 'ACPI name of a battery, usually BAT0'),\n (\n 'status_file',\n 'status',\n 'Name of status file in'\n ' /sys/class/power_supply/battery_name'\n ),\n (\n 'energy_now_file',\n None,\n 'Name of file with the '\n 'current energy in /sys/class/power_supply/battery_name'\n ),\n (\n 'energy_full_file',\n None,\n 'Name of file with the maximum'\n ' energy in /sys/class/power_supply/battery_name'\n ),\n (\n 'power_now_file',\n None,\n 'Name of file with the current'\n ' power draw in /sys/class/power_supply/battery_name'\n ),\n ('update_delay', 1, 'The delay in seconds between updates'),\n ]\n\n def __init__(self, **config):\n base._TextBox.__init__(self, \"BAT\", bar.CALCULATED, **config)\n self.add_defaults(_Battery.defaults)\n\n def _load_file(self, name):\n try:\n path = os.path.join(BAT_DIR, self.battery_name, name)\n with open(path, 'r') as f:\n return f.read().strip()\n except IOError:\n if name == 'current_now':\n return 0\n return False\n except Exception:\n self.log.exception(\"Failed to get %s\" % name)\n\n def _get_param(self, name):\n if name in 
self.filenames and self.filenames[name]:\n return self._load_file(self.filenames[name])\n elif name not in self.filenames:\n # Don't have the file name cached, figure it out\n\n # Don't modify the global list! Copy with [:]\n file_list = BATTERY_INFO_FILES.get(name, [])[:]\n\n if getattr(self, name, None):\n # If a file is manually specified, check it first\n file_list.insert(0, getattr(self, name))\n\n # Iterate over the possibilities, and return the first valid value\n for file in file_list:\n value = self._load_file(file)\n if not (value in (False, None)):\n self.filenames[name] = file\n return value\n\n # If we made it this far, we don't have a valid file.\n # Set it to None to avoid trying the next time.\n self.filenames[name] = None\n\n return None\n\n def _get_info(self):\n try:\n info = {\n 'stat': self._get_param('status_file'),\n 'now': float(self._get_param('energy_now_file')),\n 'full': float(self._get_param('energy_full_file')),\n 'power': float(self._get_param('power_now_file')),\n }\n except TypeError:\n return False\n return info\n\n\nclass Battery(_Battery):\n \"\"\"\n A simple but flexible text-based battery widget.\n \"\"\"\n defaults = [\n ('low_foreground', 'FF0000', 'font color when battery is low'),\n (\n 'format',\n '{char} {percent:2.0%} {hour:d}:{min:02d}',\n 'Display format'\n ),\n ('charge_char', '^', 'Character to indicate the battery is charging'),\n (\n 'discharge_char',\n 'V',\n 'Character to indicate the battery'\n ' is discharging'\n ),\n (\n 'low_percentage',\n 0.10,\n \"0 < x < 1 at which to indicate battery is low with low_foreground\"\n ),\n ('hide_threshold', None, 'Hide the text when there is enough energy'),\n ]\n\n def __init__(self, **config):\n _Battery.__init__(self, **config)\n self.add_defaults(Battery.defaults)\n self.timeout_add(self.update_delay, self.update)\n self.update()\n\n def _get_text(self):\n info = self._get_info()\n if info is False:\n return 'Error'\n\n # Set the charging character\n try:\n # hide the text when it's higher than threshold, but still\n # display `full` when the battery is fully charged.\n if self.hide_threshold and \\\n info['now'] / info['full'] * 100.0 >= \\\n self.hide_threshold and \\\n info['stat'] != CHARGED:\n return ''\n elif info['stat'] == DISCHARGING:\n char = self.discharge_char\n time = info['now'] / info['power']\n elif info['stat'] == CHARGING:\n char = self.charge_char\n time = (info['full'] - info['now']) / info['power']\n else:\n return 'Full'\n except ZeroDivisionError:\n time = -1\n\n # Calculate the battery percentage and time left\n if time >= 0:\n hour = int(time)\n min = int(time * 60) % 60\n else:\n hour = -1\n min = -1\n percent = info['now'] / info['full']\n if info['stat'] == DISCHARGING and percent < self.low_percentage:\n self.layout.colour = self.low_foreground\n else:\n self.layout.colour = self.foreground\n\n return self.format.format(\n char=char,\n percent=percent,\n hour=hour,\n min=min\n )\n\n def update(self):\n if self.configured:\n ntext = self._get_text()\n if ntext != self.text:\n self.text = ntext\n self.bar.draw()\n return True\n\n\nclass BatteryIcon(_Battery):\n ''' Battery life indicator widget '''\n\n defaults = [\n ('theme_path', default_icon_path(), 'Path of the icons'),\n ('custom_icons', {}, 'dict containing key->filename icon map'),\n ]\n\n def __init__(self, **config):\n _Battery.__init__(self, **config)\n self.add_defaults(BatteryIcon.defaults)\n\n if self.theme_path:\n self.width_type = bar.STATIC\n self.width = 0\n self.surfaces = {}\n self.current_icon = 
'battery-missing'\n self.icons = dict([(x, '{0}.png'.format(x)) for x in (\n 'battery-missing',\n 'battery-caution',\n 'battery-low',\n 'battery-good',\n 'battery-full',\n 'battery-caution-charging',\n 'battery-low-charging',\n 'battery-good-charging',\n 'battery-full-charging',\n 'battery-full-charged',\n )])\n self.icons.update(self.custom_icons)\n self.timeout_add(self.update_delay, self.update)\n\n def _configure(self, qtile, bar):\n base._TextBox._configure(self, qtile, bar)\n self.setup_images()\n\n def _get_icon_key(self):\n key = 'battery'\n info = self._get_info()\n if info is False or not info.get('full'):\n key += '-missing'\n else:\n percent = info['now'] / info['full']\n if percent < .2:\n key += '-caution'\n elif percent < .4:\n key += '-low'\n elif percent < .8:\n key += '-good'\n else:\n key += '-full'\n\n if info['stat'] == CHARGING:\n key += '-charging'\n elif info['stat'] == CHARGED:\n key += '-charged'\n return key\n\n def update(self):\n if self.configured:\n icon = self._get_icon_key()\n if icon != self.current_icon:\n self.current_icon = icon\n self.draw()\n return True\n\n def draw(self):\n if self.theme_path:\n self.drawer.clear(self.background or self.bar.background)\n self.drawer.ctx.set_source(self.surfaces[self.current_icon])\n self.drawer.ctx.paint()\n self.drawer.draw(self.offset, self.width)\n else:\n self.text = self.current_icon[8:]\n base._TextBox.draw(self)\n\n def setup_images(self):\n for key, name in self.icons.iteritems():\n try:\n path = os.path.join(self.theme_path, name)\n img = cairo.ImageSurface.create_from_png(path)\n except cairo.Error:\n self.theme_path = None\n self.qtile.log.warning('Battery Icon switching to text mode')\n return\n input_width = img.get_width()\n input_height = img.get_height()\n\n sp = input_height / float(self.bar.height - 1)\n\n width = input_width / sp\n if width > self.width:\n self.width = int(width) + self.actual_padding * 2\n\n imgpat = cairo.SurfacePattern(img)\n\n scaler = cairo.Matrix()\n\n scaler.scale(sp, sp)\n scaler.translate(self.actual_padding * -1, 0)\n imgpat.set_matrix(scaler)\n\n imgpat.set_filter(cairo.FILTER_BEST)\n self.surfaces[key] = imgpat\n", "path": "libqtile/widget/battery.py"}]} | 3,190 | 319 |
gh_patches_debug_11629 | rasdani/github-patches | git_diff | beeware__toga-193 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ProgressBar doesn't appear in a Box [Core] [Cocoa]
Example code: https://gist.github.com/Dayof/528f9dc38f4178dbc25db6bab553e19a
When a progress bar is added inside a box (below the label is the progress bar):

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cocoa/toga_cocoa/widgets/progressbar.py`
Content:
```
1 from toga.interface import ProgressBar as ProgressBarInterface
2
3 from ..libs import *
4 from .base import WidgetMixin
5
6
7 class ProgressBar(ProgressBarInterface, WidgetMixin):
8 def __init__(self, id=None, style=None, max=None, value=None):
9 super().__init__(id=id, style=style, max=max, value=value)
10 self._create()
11
12 def create(self):
13 self._impl = NSProgressIndicator.new()
14 self._impl.setStyle_(NSProgressIndicatorBarStyle)
15 self._impl.setDisplayedWhenStopped_(True)
16
17 # Add the layout constraints
18 self._add_constraints()
19
20 def _set_value(self, value):
21 if value is not None:
22 self._impl.setDoubleValue_(value)
23
24 def start(self):
25 if self._impl and not self._running:
26 self._impl.startAnimation_(self._impl)
27 self._running = True
28
29 def stop(self):
30 if self._impl and self._running:
31 self._impl.stopAnimation_(self._impl)
32 self._running = False
33
34 def _set_max(self, value):
35 if value:
36 self._impl.setIndeterminate_(False)
37 self._impl.setMaxValue_(value)
38 else:
39 self._impl.setIndeterminate_(True)
40
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cocoa/toga_cocoa/widgets/progressbar.py b/src/cocoa/toga_cocoa/widgets/progressbar.py
--- a/src/cocoa/toga_cocoa/widgets/progressbar.py
+++ b/src/cocoa/toga_cocoa/widgets/progressbar.py
@@ -16,6 +16,7 @@
# Add the layout constraints
self._add_constraints()
+ self.rehint()
def _set_value(self, value):
if value is not None:
@@ -37,3 +38,9 @@
self._impl.setMaxValue_(value)
else:
self._impl.setIndeterminate_(True)
+
+ def rehint(self):
+ self.style.hint(
+ height=self._impl.fittingSize().height,
+ width=self._impl.fittingSize().width
+ )
| {"golden_diff": "diff --git a/src/cocoa/toga_cocoa/widgets/progressbar.py b/src/cocoa/toga_cocoa/widgets/progressbar.py\n--- a/src/cocoa/toga_cocoa/widgets/progressbar.py\n+++ b/src/cocoa/toga_cocoa/widgets/progressbar.py\n@@ -16,6 +16,7 @@\n \n # Add the layout constraints\n self._add_constraints()\n+ self.rehint()\n \n def _set_value(self, value):\n if value is not None:\n@@ -37,3 +38,9 @@\n self._impl.setMaxValue_(value)\n else:\n self._impl.setIndeterminate_(True)\n+\n+ def rehint(self):\n+ self.style.hint(\n+ height=self._impl.fittingSize().height,\n+ width=self._impl.fittingSize().width\n+ )\n", "issue": "ProgressBar doesn't appears in a Box [Core] [Cocoa]\nExample code: https://gist.github.com/Dayof/528f9dc38f4178dbc25db6bab553e19a\r\n\r\nWhen a progress bar is add inside of a box (bellow the label is the progress bar):\r\n\r\n\r\n\n", "before_files": [{"content": "from toga.interface import ProgressBar as ProgressBarInterface\n\nfrom ..libs import *\nfrom .base import WidgetMixin\n\n\nclass ProgressBar(ProgressBarInterface, WidgetMixin):\n def __init__(self, id=None, style=None, max=None, value=None):\n super().__init__(id=id, style=style, max=max, value=value)\n self._create()\n\n def create(self):\n self._impl = NSProgressIndicator.new()\n self._impl.setStyle_(NSProgressIndicatorBarStyle)\n self._impl.setDisplayedWhenStopped_(True)\n\n # Add the layout constraints\n self._add_constraints()\n\n def _set_value(self, value):\n if value is not None:\n self._impl.setDoubleValue_(value)\n\n def start(self):\n if self._impl and not self._running:\n self._impl.startAnimation_(self._impl)\n self._running = True\n\n def stop(self):\n if self._impl and self._running:\n self._impl.stopAnimation_(self._impl)\n self._running = False\n\n def _set_max(self, value):\n if value:\n self._impl.setIndeterminate_(False)\n self._impl.setMaxValue_(value)\n else:\n self._impl.setIndeterminate_(True)\n", "path": "src/cocoa/toga_cocoa/widgets/progressbar.py"}], "after_files": [{"content": "from toga.interface import ProgressBar as ProgressBarInterface\n\nfrom ..libs import *\nfrom .base import WidgetMixin\n\n\nclass ProgressBar(ProgressBarInterface, WidgetMixin):\n def __init__(self, id=None, style=None, max=None, value=None):\n super().__init__(id=id, style=style, max=max, value=value)\n self._create()\n\n def create(self):\n self._impl = NSProgressIndicator.new()\n self._impl.setStyle_(NSProgressIndicatorBarStyle)\n self._impl.setDisplayedWhenStopped_(True)\n\n # Add the layout constraints\n self._add_constraints()\n self.rehint()\n\n def _set_value(self, value):\n if value is not None:\n self._impl.setDoubleValue_(value)\n\n def start(self):\n if self._impl and not self._running:\n self._impl.startAnimation_(self._impl)\n self._running = True\n\n def stop(self):\n if self._impl and self._running:\n self._impl.stopAnimation_(self._impl)\n self._running = False\n\n def _set_max(self, value):\n if value:\n self._impl.setIndeterminate_(False)\n self._impl.setMaxValue_(value)\n else:\n self._impl.setIndeterminate_(True)\n\n def rehint(self):\n self.style.hint(\n height=self._impl.fittingSize().height,\n width=self._impl.fittingSize().width\n )\n", "path": "src/cocoa/toga_cocoa/widgets/progressbar.py"}]} | 768 | 181 |
gh_patches_debug_40338 | rasdani/github-patches | git_diff | fossasia__open-event-server-4176 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix 'auth required' for GET /speakers
**I'm submitting a ...** (check one with "x")
- [x] bug report
- [ ] feature request
- [ ] support request => Please do not submit support requests here; instead, ask your query in our Gitter channel at https://gitter.im/fossasia/open-event-orga-server
**Current behavior:**
<!-- Describe how the bug manifests. -->
**Expected behavior:**
<!-- Describe what the behavior would be without the bug. -->
**Steps to reproduce:**
<!-- If you are able to illustrate the bug or feature request with an example, please provide steps to reproduce -->
**Related code:**
```
insert any relevant code here else remove this section
```
**Other information:**
<!-- List any other information that is relevant to your issue. Stack traces, related issues, suggestions on how to fix, Stack Overflow links, forum links, etc. -->
**System information:**
<!-- Add information about the system you're facing this bug on. If you think this is irrelevant or if it's a UI bug or a feature request, please remove this section -->
```
Your operating system
```
```
output of `python --version`
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/speakers.py`
Content:
```
1 from marshmallow_jsonapi import fields
2 from marshmallow_jsonapi.flask import Schema, Relationship
3 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
4 from flask_rest_jsonapi.exceptions import ObjectNotFound
5
6 from app.api.helpers.utilities import dasherize
7 from app.api.helpers.permissions import jwt_required
8 from app.models import db
9 from app.models.speaker import Speaker
10 from app.models.session import Session
11 from app.models.user import User
12 from app.models.event import Event
13 from app.api.helpers.db import safe_query
14 from app.api.bootstrap import api
15 from app.api.helpers.utilities import require_relationship
16 from app.api.helpers.permission_manager import has_access
17
18
19 class SpeakerSchema(Schema):
20 """
21 Speaker Schema based on Speaker Model
22 """
23
24 class Meta:
25 """
26 Meta class for speaker schema
27 """
28 type_ = 'speaker'
29 self_view = 'v1.speaker_detail'
30 self_view_kwargs = {'id': '<id>'}
31 inflect = dasherize
32
33 id = fields.Str(dump_only=True)
34 name = fields.Str(required=True)
35 email = fields.Str(required=True)
36 photo_url = fields.Url(allow_none=True)
37 thumbnail_image_url = fields.Url(allow_none=True)
38 small_image_url = fields.Url(allow_none=True)
39 icon_image_url = fields.Url(allow_none=True)
40 short_biography = fields.Str(allow_none=True)
41 long_biography = fields.Str(allow_none=True)
42 speaking_experience = fields.Str(allow_none=True)
43 mobile = fields.Str(allow_none=True)
44 website = fields.Url(allow_none=True)
45 twitter = fields.Url(allow_none=True)
46 facebook = fields.Url(allow_none=True)
47 github = fields.Url(allow_none=True)
48 linkedin = fields.Url(allow_none=True)
49 organisation = fields.Str(allow_none=True)
50 is_featured = fields.Boolean(default=False)
51 position = fields.Str(allow_none=True)
52 country = fields.Str(allow_none=True)
53 city = fields.Str(allow_none=True)
54 gender = fields.Str(allow_none=True)
55 heard_from = fields.Str(allow_none=True)
56 sponsorship_required = fields.Str(allow_none=True)
57 event = Relationship(attribute='event',
58 self_view='v1.speaker_event',
59 self_view_kwargs={'id': '<id>'},
60 related_view='v1.event_detail',
61 related_view_kwargs={'speaker_id': '<id>'},
62 schema='EventSchema',
63 type_='event')
64 user = Relationship(attribute='user',
65 self_view='v1.speaker_user',
66 self_view_kwargs={'id': '<id>'},
67 related_view='v1.user_detail',
68 related_view_kwargs={'speaker_id': '<id>'},
69 schema='UserSchema',
70 type_='user')
71 sessions = Relationship(attribute='sessions',
72 self_view='v1.speaker_session',
73 self_view_kwargs={'id': '<id>'},
74 related_view='v1.session_list',
75 related_view_kwargs={'speaker_id': '<id>'},
76 schema='SessionSchema',
77 many=True,
78 type_='session')
79
80
81 class SpeakerListPost(ResourceList):
82 """
83 List and create speakers
84 """
85
86 def before_post(self, args, kwargs, data):
87 """
88 method to add user_id to view_kwargs before post
89 :param args:
90 :param kwargs:
91 :param data:
92 :return:
93 """
94 require_relationship(['event', 'user'], data)
95
96 if not has_access('is_coorganizer', event_id=data['event']):
97 event = safe_query(self, Event, 'id', data['event'], 'event_id')
98 if event.state == "draft":
99 raise ObjectNotFound({'parameter': 'event_id'},
100 "Event: {} not found".format(data['event_id']))
101
102 if 'sessions' in data:
103 session_ids = data['sessions']
104 for session_id in session_ids:
105 if not has_access('is_session_self_submitted', session_id=session_id):
106 raise ObjectNotFound({'parameter': 'session_id'},
107 "Session: {} not found".format(session_id))
108
109 schema = SpeakerSchema
110 methods = ['POST', ]
111 data_layer = {'session': db.session,
112 'model': Speaker
113 }
114
115
116 class SpeakerList(ResourceList):
117 """
118 List speakers based on different params from view_kwargs
119 """
120
121 def query(self, view_kwargs):
122 """
123 query method for speakers list class
124 :param view_kwargs:
125 :return:
126 """
127 query_ = self.session.query(Speaker)
128 if view_kwargs.get('event_identifier'):
129 event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')
130 view_kwargs['event_id'] = event.id
131 if view_kwargs.get('event_id'):
132 event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')
133 query_ = query_.join(Event).filter(Event.id == event.id)
134 if not has_access('is_coorganizer', event_id=event.id):
135 query_ = query_.filter(Event.state == "published")
136
137 if view_kwargs.get('user_id'):
138 user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')
139 query_ = query_.join(User).filter(User.id == user.id)
140
141 if view_kwargs.get('session_id'):
142 session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')
143 # session-speaker :: many-to-many relationship
144 query_ = Speaker.query.filter(Speaker.sessions.any(id=session.id))
145 if not has_access('is_coorganizer', event_id=session.event_id):
146 if not has_access('is_session_self_submitted', session_id=session.id):
147 query_ = query_.filter(Session.state == "approved" or Session.state == "accepted")
148
149 return query_
150
151 view_kwargs = True
152 schema = SpeakerSchema
153 methods = ['GET', ]
154 data_layer = {'session': db.session,
155 'model': Speaker,
156 'methods': {
157 'query': query,
158 }}
159
160
161 class SpeakerDetail(ResourceDetail):
162 """
163 Speakers Detail by id
164 """
165 decorators = (api.has_permission('is_coorganizer_or_user_itself', methods="PATCH,DELETE", fetch="event_id",
166 fetch_as="event_id", model=Speaker, check=lambda a: a.get('id') is not None),)
167 schema = SpeakerSchema
168 data_layer = {'session': db.session,
169 'model': Speaker}
170
171
172 class SpeakerRelationshipRequired(ResourceRelationship):
173 """
174 Speaker Relationship class
175 """
176 decorators = (jwt_required,)
177 methods = ['GET', 'PATCH']
178 schema = SpeakerSchema
179 data_layer = {'session': db.session,
180 'model': Speaker}
181
182
183 class SpeakerRelationshipOptional(ResourceRelationship):
184 """
185 Speaker Relationship class
186 """
187 decorators = (jwt_required,)
188 schema = SpeakerSchema
189 data_layer = {'session': db.session,
190 'model': Speaker}
191
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/api/speakers.py b/app/api/speakers.py
--- a/app/api/speakers.py
+++ b/app/api/speakers.py
@@ -2,6 +2,7 @@
from marshmallow_jsonapi.flask import Schema, Relationship
from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
from flask_rest_jsonapi.exceptions import ObjectNotFound
+from flask import request
from app.api.helpers.utilities import dasherize
from app.api.helpers.permissions import jwt_required
@@ -125,14 +126,26 @@
:return:
"""
query_ = self.session.query(Speaker)
- if view_kwargs.get('event_identifier'):
- event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')
- view_kwargs['event_id'] = event.id
if view_kwargs.get('event_id'):
event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')
- query_ = query_.join(Event).filter(Event.id == event.id)
- if not has_access('is_coorganizer', event_id=event.id):
- query_ = query_.filter(Event.state == "published")
+ if event.state != 'published':
+ if 'Authorization' in request.headers and has_access('is_coorganizer', event_id=event.id):
+ query_ = query_.join(Event).filter(Event.id == event.id)
+ else:
+ raise ObjectNotFound({'parameter': 'event_id'},
+ "Event: {} not found".format(view_kwargs['event_identifier']))
+ else:
+ query_ = query_.join(Event).filter(Event.id == event.id)
+ elif view_kwargs.get('event_identifier'):
+ event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')
+ if event.state != 'published':
+ if 'Authorization' in request.headers and has_access('is_coorganizer', event_id=event.id):
+ query_ = query_.join(Event).filter(Event.id == event.id)
+ else:
+ raise ObjectNotFound({'parameter': 'event_identifier'},
+ "Event: {} not found".format(view_kwargs['event_identifier']))
+ else:
+ query_ = query_.join(Event).filter(Event.id == event.id)
if view_kwargs.get('user_id'):
user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')
@@ -142,7 +155,7 @@
session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')
# session-speaker :: many-to-many relationship
query_ = Speaker.query.filter(Speaker.sessions.any(id=session.id))
- if not has_access('is_coorganizer', event_id=session.event_id):
+ if 'Authorization' in request.headers and not has_access('is_coorganizer', event_id=session.event_id):
if not has_access('is_session_self_submitted', session_id=session.id):
query_ = query_.filter(Session.state == "approved" or Session.state == "accepted")
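
The patch removes the implicit auth requirement on plain GETs: `has_access(...)` is only consulted when the request actually carries an `Authorization` header, and an anonymous request for the speakers of an unpublished event now gets a 404 (`ObjectNotFound`) rather than an authentication error. The guard pattern, reduced to a sketch — `Event`, `has_access` and `ObjectNotFound` are the names already imported by the patched module, while the helper itself is hypothetical:

```python
# Sketch of the visibility guard applied in the patch above (names as in app/api/speakers.py).
from flask import request

def filter_visible_speakers(query_, event):
    if event.state == 'published':
        return query_.join(Event).filter(Event.id == event.id)
    # Unpublished events are only visible to co-organizers who actually sent credentials.
    if 'Authorization' in request.headers and has_access('is_coorganizer', event_id=event.id):
        return query_.join(Event).filter(Event.id == event.id)
    raise ObjectNotFound({'parameter': 'event_id'},
                         "Event: {} not found".format(event.id))
```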
| {"golden_diff": "diff --git a/app/api/speakers.py b/app/api/speakers.py\n--- a/app/api/speakers.py\n+++ b/app/api/speakers.py\n@@ -2,6 +2,7 @@\n from marshmallow_jsonapi.flask import Schema, Relationship\n from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\n from flask_rest_jsonapi.exceptions import ObjectNotFound\n+from flask import request\n \n from app.api.helpers.utilities import dasherize\n from app.api.helpers.permissions import jwt_required\n@@ -125,14 +126,26 @@\n :return:\n \"\"\"\n query_ = self.session.query(Speaker)\n- if view_kwargs.get('event_identifier'):\n- event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n- view_kwargs['event_id'] = event.id\n if view_kwargs.get('event_id'):\n event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')\n- query_ = query_.join(Event).filter(Event.id == event.id)\n- if not has_access('is_coorganizer', event_id=event.id):\n- query_ = query_.filter(Event.state == \"published\")\n+ if event.state != 'published':\n+ if 'Authorization' in request.headers and has_access('is_coorganizer', event_id=event.id):\n+ query_ = query_.join(Event).filter(Event.id == event.id)\n+ else:\n+ raise ObjectNotFound({'parameter': 'event_id'},\n+ \"Event: {} not found\".format(view_kwargs['event_identifier']))\n+ else:\n+ query_ = query_.join(Event).filter(Event.id == event.id)\n+ elif view_kwargs.get('event_identifier'):\n+ event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n+ if event.state != 'published':\n+ if 'Authorization' in request.headers and has_access('is_coorganizer', event_id=event.id):\n+ query_ = query_.join(Event).filter(Event.id == event.id)\n+ else:\n+ raise ObjectNotFound({'parameter': 'event_identifier'},\n+ \"Event: {} not found\".format(view_kwargs['event_identifier']))\n+ else:\n+ query_ = query_.join(Event).filter(Event.id == event.id)\n \n if view_kwargs.get('user_id'):\n user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')\n@@ -142,7 +155,7 @@\n session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')\n # session-speaker :: many-to-many relationship\n query_ = Speaker.query.filter(Speaker.sessions.any(id=session.id))\n- if not has_access('is_coorganizer', event_id=session.event_id):\n+ if 'Authorization' in request.headers and not has_access('is_coorganizer', event_id=session.event_id):\n if not has_access('is_session_self_submitted', session_id=session.id):\n query_ = query_.filter(Session.state == \"approved\" or Session.state == \"accepted\")\n", "issue": "Fix 'auth required' for GET /speakers\n**I'm submitting a ...** (check one with \"x\")\r\n- [x] bug report\r\n- [ ] feature request\r\n- [ ] support request => Please do not submit support requests here, instead ask your query in out Gitter channel at https://gitter.im/fossasia/open-event-orga-server\r\n\r\n**Current behavior:**\r\n<!-- Describe how the bug manifests. -->\r\n\r\n**Expected behavior:**\r\n<!-- Describe what the behavior would be without the bug. -->\r\n\r\n**Steps to reproduce:**\r\n<!-- If you are able to illustrate the bug or feature request with an example, please provide steps to reproduce -->\r\n\r\n**Related code:**\r\n\r\n```\r\ninsert any relevant code here else remove this section\r\n```\r\n\r\n**Other information:**\r\n<!-- List any other information that is relevant to your issue. 
Stack traces, related issues, suggestions on how to fix, Stack Overflow links, forum links, etc. -->\r\n\r\n**System information:** \r\n\r\n<!-- Add information about the system your facing this bug on. If you think this is irrelevant or if it's a UI bug or a feature request, please remove this section -->\r\n\r\n```\r\nYour operating system\r\n```\r\n\r\n```\r\noutput of `python --version`\r\n```\r\n\n", "before_files": [{"content": "from marshmallow_jsonapi import fields\nfrom marshmallow_jsonapi.flask import Schema, Relationship\nfrom flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.helpers.utilities import dasherize\nfrom app.api.helpers.permissions import jwt_required\nfrom app.models import db\nfrom app.models.speaker import Speaker\nfrom app.models.session import Session\nfrom app.models.user import User\nfrom app.models.event import Event\nfrom app.api.helpers.db import safe_query\nfrom app.api.bootstrap import api\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.helpers.permission_manager import has_access\n\n\nclass SpeakerSchema(Schema):\n \"\"\"\n Speaker Schema based on Speaker Model\n \"\"\"\n\n class Meta:\n \"\"\"\n Meta class for speaker schema\n \"\"\"\n type_ = 'speaker'\n self_view = 'v1.speaker_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n\n id = fields.Str(dump_only=True)\n name = fields.Str(required=True)\n email = fields.Str(required=True)\n photo_url = fields.Url(allow_none=True)\n thumbnail_image_url = fields.Url(allow_none=True)\n small_image_url = fields.Url(allow_none=True)\n icon_image_url = fields.Url(allow_none=True)\n short_biography = fields.Str(allow_none=True)\n long_biography = fields.Str(allow_none=True)\n speaking_experience = fields.Str(allow_none=True)\n mobile = fields.Str(allow_none=True)\n website = fields.Url(allow_none=True)\n twitter = fields.Url(allow_none=True)\n facebook = fields.Url(allow_none=True)\n github = fields.Url(allow_none=True)\n linkedin = fields.Url(allow_none=True)\n organisation = fields.Str(allow_none=True)\n is_featured = fields.Boolean(default=False)\n position = fields.Str(allow_none=True)\n country = fields.Str(allow_none=True)\n city = fields.Str(allow_none=True)\n gender = fields.Str(allow_none=True)\n heard_from = fields.Str(allow_none=True)\n sponsorship_required = fields.Str(allow_none=True)\n event = Relationship(attribute='event',\n self_view='v1.speaker_event',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.event_detail',\n related_view_kwargs={'speaker_id': '<id>'},\n schema='EventSchema',\n type_='event')\n user = Relationship(attribute='user',\n self_view='v1.speaker_user',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.user_detail',\n related_view_kwargs={'speaker_id': '<id>'},\n schema='UserSchema',\n type_='user')\n sessions = Relationship(attribute='sessions',\n self_view='v1.speaker_session',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.session_list',\n related_view_kwargs={'speaker_id': '<id>'},\n schema='SessionSchema',\n many=True,\n type_='session')\n\n\nclass SpeakerListPost(ResourceList):\n \"\"\"\n List and create speakers\n \"\"\"\n\n def before_post(self, args, kwargs, data):\n \"\"\"\n method to add user_id to view_kwargs before post\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event', 'user'], data)\n\n if not has_access('is_coorganizer', event_id=data['event']):\n event = 
safe_query(self, Event, 'id', data['event'], 'event_id')\n if event.state == \"draft\":\n raise ObjectNotFound({'parameter': 'event_id'},\n \"Event: {} not found\".format(data['event_id']))\n\n if 'sessions' in data:\n session_ids = data['sessions']\n for session_id in session_ids:\n if not has_access('is_session_self_submitted', session_id=session_id):\n raise ObjectNotFound({'parameter': 'session_id'},\n \"Session: {} not found\".format(session_id))\n\n schema = SpeakerSchema\n methods = ['POST', ]\n data_layer = {'session': db.session,\n 'model': Speaker\n }\n\n\nclass SpeakerList(ResourceList):\n \"\"\"\n List speakers based on different params from view_kwargs\n \"\"\"\n\n def query(self, view_kwargs):\n \"\"\"\n query method for speakers list class\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(Speaker)\n if view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n view_kwargs['event_id'] = event.id\n if view_kwargs.get('event_id'):\n event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')\n query_ = query_.join(Event).filter(Event.id == event.id)\n if not has_access('is_coorganizer', event_id=event.id):\n query_ = query_.filter(Event.state == \"published\")\n\n if view_kwargs.get('user_id'):\n user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')\n query_ = query_.join(User).filter(User.id == user.id)\n\n if view_kwargs.get('session_id'):\n session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')\n # session-speaker :: many-to-many relationship\n query_ = Speaker.query.filter(Speaker.sessions.any(id=session.id))\n if not has_access('is_coorganizer', event_id=session.event_id):\n if not has_access('is_session_self_submitted', session_id=session.id):\n query_ = query_.filter(Session.state == \"approved\" or Session.state == \"accepted\")\n\n return query_\n\n view_kwargs = True\n schema = SpeakerSchema\n methods = ['GET', ]\n data_layer = {'session': db.session,\n 'model': Speaker,\n 'methods': {\n 'query': query,\n }}\n\n\nclass SpeakerDetail(ResourceDetail):\n \"\"\"\n Speakers Detail by id\n \"\"\"\n decorators = (api.has_permission('is_coorganizer_or_user_itself', methods=\"PATCH,DELETE\", fetch=\"event_id\",\n fetch_as=\"event_id\", model=Speaker, check=lambda a: a.get('id') is not None),)\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker}\n\n\nclass SpeakerRelationshipRequired(ResourceRelationship):\n \"\"\"\n Speaker Relationship class\n \"\"\"\n decorators = (jwt_required,)\n methods = ['GET', 'PATCH']\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker}\n\n\nclass SpeakerRelationshipOptional(ResourceRelationship):\n \"\"\"\n Speaker Relationship class\n \"\"\"\n decorators = (jwt_required,)\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker}\n", "path": "app/api/speakers.py"}], "after_files": [{"content": "from marshmallow_jsonapi import fields\nfrom marshmallow_jsonapi.flask import Schema, Relationship\nfrom flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\nfrom flask import request\n\nfrom app.api.helpers.utilities import dasherize\nfrom app.api.helpers.permissions import jwt_required\nfrom app.models import db\nfrom app.models.speaker import Speaker\nfrom app.models.session import Session\nfrom app.models.user import User\nfrom 
app.models.event import Event\nfrom app.api.helpers.db import safe_query\nfrom app.api.bootstrap import api\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.helpers.permission_manager import has_access\n\n\nclass SpeakerSchema(Schema):\n \"\"\"\n Speaker Schema based on Speaker Model\n \"\"\"\n\n class Meta:\n \"\"\"\n Meta class for speaker schema\n \"\"\"\n type_ = 'speaker'\n self_view = 'v1.speaker_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n\n id = fields.Str(dump_only=True)\n name = fields.Str(required=True)\n email = fields.Str(required=True)\n photo_url = fields.Url(allow_none=True)\n thumbnail_image_url = fields.Url(allow_none=True)\n small_image_url = fields.Url(allow_none=True)\n icon_image_url = fields.Url(allow_none=True)\n short_biography = fields.Str(allow_none=True)\n long_biography = fields.Str(allow_none=True)\n speaking_experience = fields.Str(allow_none=True)\n mobile = fields.Str(allow_none=True)\n website = fields.Url(allow_none=True)\n twitter = fields.Url(allow_none=True)\n facebook = fields.Url(allow_none=True)\n github = fields.Url(allow_none=True)\n linkedin = fields.Url(allow_none=True)\n organisation = fields.Str(allow_none=True)\n is_featured = fields.Boolean(default=False)\n position = fields.Str(allow_none=True)\n country = fields.Str(allow_none=True)\n city = fields.Str(allow_none=True)\n gender = fields.Str(allow_none=True)\n heard_from = fields.Str(allow_none=True)\n sponsorship_required = fields.Str(allow_none=True)\n event = Relationship(attribute='event',\n self_view='v1.speaker_event',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.event_detail',\n related_view_kwargs={'speaker_id': '<id>'},\n schema='EventSchema',\n type_='event')\n user = Relationship(attribute='user',\n self_view='v1.speaker_user',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.user_detail',\n related_view_kwargs={'speaker_id': '<id>'},\n schema='UserSchema',\n type_='user')\n sessions = Relationship(attribute='sessions',\n self_view='v1.speaker_session',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.session_list',\n related_view_kwargs={'speaker_id': '<id>'},\n schema='SessionSchema',\n many=True,\n type_='session')\n\n\nclass SpeakerListPost(ResourceList):\n \"\"\"\n List and create speakers\n \"\"\"\n\n def before_post(self, args, kwargs, data):\n \"\"\"\n method to add user_id to view_kwargs before post\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event', 'user'], data)\n\n if not has_access('is_coorganizer', event_id=data['event']):\n event = safe_query(self, Event, 'id', data['event'], 'event_id')\n if event.state == \"draft\":\n raise ObjectNotFound({'parameter': 'event_id'},\n \"Event: {} not found\".format(data['event_id']))\n\n if 'sessions' in data:\n session_ids = data['sessions']\n for session_id in session_ids:\n if not has_access('is_session_self_submitted', session_id=session_id):\n raise ObjectNotFound({'parameter': 'session_id'},\n \"Session: {} not found\".format(session_id))\n\n schema = SpeakerSchema\n methods = ['POST', ]\n data_layer = {'session': db.session,\n 'model': Speaker\n }\n\n\nclass SpeakerList(ResourceList):\n \"\"\"\n List speakers based on different params from view_kwargs\n \"\"\"\n\n def query(self, view_kwargs):\n \"\"\"\n query method for speakers list class\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(Speaker)\n if view_kwargs.get('event_id'):\n event = safe_query(self, Event, 'id', 
view_kwargs['event_id'], 'event_id')\n if event.state != 'published':\n if 'Authorization' in request.headers and has_access('is_coorganizer', event_id=event.id):\n query_ = query_.join(Event).filter(Event.id == event.id)\n else:\n raise ObjectNotFound({'parameter': 'event_id'},\n \"Event: {} not found\".format(view_kwargs['event_identifier']))\n else:\n query_ = query_.join(Event).filter(Event.id == event.id)\n elif view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n if event.state != 'published':\n if 'Authorization' in request.headers and has_access('is_coorganizer', event_id=event.id):\n query_ = query_.join(Event).filter(Event.id == event.id)\n else:\n raise ObjectNotFound({'parameter': 'event_identifier'},\n \"Event: {} not found\".format(view_kwargs['event_identifier']))\n else:\n query_ = query_.join(Event).filter(Event.id == event.id)\n\n if view_kwargs.get('user_id'):\n user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')\n query_ = query_.join(User).filter(User.id == user.id)\n\n if view_kwargs.get('session_id'):\n session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')\n # session-speaker :: many-to-many relationship\n query_ = Speaker.query.filter(Speaker.sessions.any(id=session.id))\n if 'Authorization' in request.headers and not has_access('is_coorganizer', event_id=session.event_id):\n if not has_access('is_session_self_submitted', session_id=session.id):\n query_ = query_.filter(Session.state == \"approved\" or Session.state == \"accepted\")\n\n return query_\n\n view_kwargs = True\n schema = SpeakerSchema\n methods = ['GET', ]\n data_layer = {'session': db.session,\n 'model': Speaker,\n 'methods': {\n 'query': query,\n }}\n\n\nclass SpeakerDetail(ResourceDetail):\n \"\"\"\n Speakers Detail by id\n \"\"\"\n decorators = (api.has_permission('is_coorganizer_or_user_itself', methods=\"PATCH,DELETE\", fetch=\"event_id\",\n fetch_as=\"event_id\", model=Speaker, check=lambda a: a.get('id') is not None),)\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker}\n\n\nclass SpeakerRelationshipRequired(ResourceRelationship):\n \"\"\"\n Speaker Relationship class\n \"\"\"\n decorators = (jwt_required,)\n methods = ['GET', 'PATCH']\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker}\n\n\nclass SpeakerRelationshipOptional(ResourceRelationship):\n \"\"\"\n Speaker Relationship class\n \"\"\"\n decorators = (jwt_required,)\n schema = SpeakerSchema\n data_layer = {'session': db.session,\n 'model': Speaker}\n", "path": "app/api/speakers.py"}]} | 2,463 | 675 |
gh_patches_debug_23822 | rasdani/github-patches | git_diff | pytorch__vision-3396 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ResNet FPN backbone
## 🐛 Bug
In https://github.com/pytorch/vision/blob/master/torchvision/models/detection/backbone_utils.py#L99, it says
> freeze layers only if pretrained backbone is used.
Therefore, if a pretrained backbone is NOT used, no layers should be frozen.
But when I run:
```
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone
resnet_fpn_backbone('resnet50', pretrained=False, norm_layer=None, trainable_layers=5)
```
I can see that the parameters `bn1.weight, bn1.bias, fc.weight, fc.bias` are frozen by [line 101](https://github.com/pytorch/vision/blob/master/torchvision/models/detection/backbone_utils.py#L101).
## To Reproduce
Steps to reproduce the behavior:
Run
```
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone
resnet_fpn_backbone('resnet50', pretrained=False, norm_layer=None, trainable_layers=5)
```
and see that at [line 101](https://github.com/pytorch/vision/blob/master/torchvision/models/detection/backbone_utils.py#L101) the parameters `bn1.weight, bn1.bias, fc.weight, fc.bias` are frozen, even though `trainable_layers=5` should correspond to no frozen layers, as stated at [line 83](https://github.com/pytorch/vision/blob/master/torchvision/models/detection/backbone_utils.py#L83).
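
For illustration, here is a minimal check (my own sketch, not part of the original report) that lists which parameters of the returned backbone's `body` end up frozen:

```
# Sketch: list the body parameters that end up with requires_grad=False
# even though trainable_layers=5 was requested.
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

backbone = resnet_fpn_backbone('resnet50', pretrained=False, norm_layer=None, trainable_layers=5)
frozen = [name for name, p in backbone.body.named_parameters() if not p.requires_grad]
print(frozen)  # expected to be empty, but currently contains the 'bn1.*' parameters
```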
## Expected behavior
No layers are frozen.
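
As an illustration only (this mirrors the patch shown further down in this record and is not claimed to be the only possible fix), the stem BatchNorm could simply be added to the trainable set when all five layers are requested:

```
# Sketch of one possible adjustment inside resnet_fpn_backbone:
layers_to_train = ['layer4', 'layer3', 'layer2', 'layer1', 'conv1'][:trainable_layers]
if trainable_layers == 5:
    layers_to_train.append('bn1')  # keep the stem BatchNorm trainable too
for name, parameter in backbone.named_parameters():
    if all(not name.startswith(layer) for layer in layers_to_train):
        parameter.requires_grad_(False)
```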
## Environment
PyTorch version: 1.7.0
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 10.15.7 (x86_64)
GCC version: Could not collect
Clang version: 12.0.0 (clang-1200.0.32.29)
CMake version: version 3.18.2
Python version: 3.8 (64-bit runtime)
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] pytorch-lightning==1.1.1
[pip3] pytorch-lightning-bolts==0.2.5rc1
[pip3] torch==1.7.0
[pip3] torchvision==0.8.1
[conda] blas 1.0 mkl
[conda] mkl 2019.5 281 conda-forge
[conda] mkl-service 2.3.0 py38h0b31af3_0 conda-forge
[conda] mkl_fft 1.2.0 py38hc64f4ea_0
[conda] mkl_random 1.1.1 py38h959d312_0
[conda] numpy 1.19.2 py38h456fd55_0
[conda] numpy-base 1.19.2 py38hcfb5961_0
[conda] pytorch 1.7.0 py3.8_0 pytorch
[conda] pytorch-lightning 1.1.1 pyhd8ed1ab_0 conda-forge
[conda] pytorch-lightning-bolts 0.2.5rc1 pypi_0 pypi
[conda] torchvision 0.8.1 py38_cpu pytorch
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchvision/models/detection/backbone_utils.py`
Content:
```
1 import warnings
2 from torch import nn
3 from torchvision.ops.feature_pyramid_network import FeaturePyramidNetwork, LastLevelMaxPool
4
5 from torchvision.ops import misc as misc_nn_ops
6 from .._utils import IntermediateLayerGetter
7 from .. import mobilenet
8 from .. import resnet
9
10
11 class BackboneWithFPN(nn.Module):
12 """
13 Adds a FPN on top of a model.
14 Internally, it uses torchvision.models._utils.IntermediateLayerGetter to
15 extract a submodel that returns the feature maps specified in return_layers.
16 The same limitations of IntermediatLayerGetter apply here.
17 Args:
18 backbone (nn.Module)
19 return_layers (Dict[name, new_name]): a dict containing the names
20 of the modules for which the activations will be returned as
21 the key of the dict, and the value of the dict is the name
22 of the returned activation (which the user can specify).
23 in_channels_list (List[int]): number of channels for each feature map
24 that is returned, in the order they are present in the OrderedDict
25 out_channels (int): number of channels in the FPN.
26 Attributes:
27 out_channels (int): the number of channels in the FPN
28 """
29 def __init__(self, backbone, return_layers, in_channels_list, out_channels, extra_blocks=None):
30 super(BackboneWithFPN, self).__init__()
31
32 if extra_blocks is None:
33 extra_blocks = LastLevelMaxPool()
34
35 self.body = IntermediateLayerGetter(backbone, return_layers=return_layers)
36 self.fpn = FeaturePyramidNetwork(
37 in_channels_list=in_channels_list,
38 out_channels=out_channels,
39 extra_blocks=extra_blocks,
40 )
41 self.out_channels = out_channels
42
43 def forward(self, x):
44 x = self.body(x)
45 x = self.fpn(x)
46 return x
47
48
49 def resnet_fpn_backbone(
50 backbone_name,
51 pretrained,
52 norm_layer=misc_nn_ops.FrozenBatchNorm2d,
53 trainable_layers=3,
54 returned_layers=None,
55 extra_blocks=None
56 ):
57 """
58 Constructs a specified ResNet backbone with FPN on top. Freezes the specified number of layers in the backbone.
59
60 Examples::
61
62 >>> from torchvision.models.detection.backbone_utils import resnet_fpn_backbone
63 >>> backbone = resnet_fpn_backbone('resnet50', pretrained=True, trainable_layers=3)
64 >>> # get some dummy image
65 >>> x = torch.rand(1,3,64,64)
66 >>> # compute the output
67 >>> output = backbone(x)
68 >>> print([(k, v.shape) for k, v in output.items()])
69 >>> # returns
70 >>> [('0', torch.Size([1, 256, 16, 16])),
71 >>> ('1', torch.Size([1, 256, 8, 8])),
72 >>> ('2', torch.Size([1, 256, 4, 4])),
73 >>> ('3', torch.Size([1, 256, 2, 2])),
74 >>> ('pool', torch.Size([1, 256, 1, 1]))]
75
76 Args:
77 backbone_name (string): resnet architecture. Possible values are 'ResNet', 'resnet18', 'resnet34', 'resnet50',
78 'resnet101', 'resnet152', 'resnext50_32x4d', 'resnext101_32x8d', 'wide_resnet50_2', 'wide_resnet101_2'
79 pretrained (bool): If True, returns a model with backbone pre-trained on Imagenet
80 norm_layer (torchvision.ops): it is recommended to use the default value. For details visit:
81 (https://github.com/facebookresearch/maskrcnn-benchmark/issues/267)
82 trainable_layers (int): number of trainable (not frozen) resnet layers starting from final block.
83 Valid values are between 0 and 5, with 5 meaning all backbone layers are trainable.
84 returned_layers (list of int): The layers of the network to return. Each entry must be in ``[1, 4]``.
85 By default all layers are returned.
86 extra_blocks (ExtraFPNBlock or None): if provided, extra operations will
87 be performed. It is expected to take the fpn features, the original
88 features and the names of the original features as input, and returns
89 a new list of feature maps and their corresponding names. By
90 default a ``LastLevelMaxPool`` is used.
91 """
92 backbone = resnet.__dict__[backbone_name](
93 pretrained=pretrained,
94 norm_layer=norm_layer)
95
96 # select layers that wont be frozen
97 assert 0 <= trainable_layers <= 5
98 layers_to_train = ['layer4', 'layer3', 'layer2', 'layer1', 'conv1'][:trainable_layers]
99 # freeze layers only if pretrained backbone is used
100 for name, parameter in backbone.named_parameters():
101 if all([not name.startswith(layer) for layer in layers_to_train]):
102 parameter.requires_grad_(False)
103
104 if extra_blocks is None:
105 extra_blocks = LastLevelMaxPool()
106
107 if returned_layers is None:
108 returned_layers = [1, 2, 3, 4]
109 assert min(returned_layers) > 0 and max(returned_layers) < 5
110 return_layers = {f'layer{k}': str(v) for v, k in enumerate(returned_layers)}
111
112 in_channels_stage2 = backbone.inplanes // 8
113 in_channels_list = [in_channels_stage2 * 2 ** (i - 1) for i in returned_layers]
114 out_channels = 256
115 return BackboneWithFPN(backbone, return_layers, in_channels_list, out_channels, extra_blocks=extra_blocks)
116
117
118 def _validate_trainable_layers(pretrained, trainable_backbone_layers, max_value, default_value):
119 # dont freeze any layers if pretrained model or backbone is not used
120 if not pretrained:
121 if trainable_backbone_layers is not None:
122 warnings.warn(
123 "Changing trainable_backbone_layers has not effect if "
124 "neither pretrained nor pretrained_backbone have been set to True, "
125 "falling back to trainable_backbone_layers={} so that all layers are trainable".format(max_value))
126 trainable_backbone_layers = max_value
127
128 # by default freeze first blocks
129 if trainable_backbone_layers is None:
130 trainable_backbone_layers = default_value
131 assert 0 <= trainable_backbone_layers <= max_value
132 return trainable_backbone_layers
133
134
135 def mobilenet_backbone(
136 backbone_name,
137 pretrained,
138 fpn,
139 norm_layer=misc_nn_ops.FrozenBatchNorm2d,
140 trainable_layers=2,
141 returned_layers=None,
142 extra_blocks=None
143 ):
144 backbone = mobilenet.__dict__[backbone_name](pretrained=pretrained, norm_layer=norm_layer).features
145
146 # Gather the indices of blocks which are strided. These are the locations of C1, ..., Cn-1 blocks.
147 # The first and last blocks are always included because they are the C0 (conv1) and Cn.
148 stage_indices = [0] + [i for i, b in enumerate(backbone) if getattr(b, "_is_cn", False)] + [len(backbone) - 1]
149 num_stages = len(stage_indices)
150
151 # find the index of the layer from which we wont freeze
152 assert 0 <= trainable_layers <= num_stages
153 freeze_before = num_stages if trainable_layers == 0 else stage_indices[num_stages - trainable_layers]
154
155 # freeze layers only if pretrained backbone is used
156 for b in backbone[:freeze_before]:
157 for parameter in b.parameters():
158 parameter.requires_grad_(False)
159
160 out_channels = 256
161 if fpn:
162 if extra_blocks is None:
163 extra_blocks = LastLevelMaxPool()
164
165 if returned_layers is None:
166 returned_layers = [num_stages - 2, num_stages - 1]
167 assert min(returned_layers) >= 0 and max(returned_layers) < num_stages
168 return_layers = {f'{stage_indices[k]}': str(v) for v, k in enumerate(returned_layers)}
169
170 in_channels_list = [backbone[stage_indices[i]].out_channels for i in returned_layers]
171 return BackboneWithFPN(backbone, return_layers, in_channels_list, out_channels, extra_blocks=extra_blocks)
172 else:
173 m = nn.Sequential(
174 backbone,
175 # depthwise linear combination of channels to reduce their size
176 nn.Conv2d(backbone[-1].out_channels, out_channels, 1),
177 )
178 m.out_channels = out_channels
179 return m
180
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchvision/models/detection/backbone_utils.py b/torchvision/models/detection/backbone_utils.py
--- a/torchvision/models/detection/backbone_utils.py
+++ b/torchvision/models/detection/backbone_utils.py
@@ -96,7 +96,8 @@
# select layers that wont be frozen
assert 0 <= trainable_layers <= 5
layers_to_train = ['layer4', 'layer3', 'layer2', 'layer1', 'conv1'][:trainable_layers]
- # freeze layers only if pretrained backbone is used
+ if trainable_layers == 5:
+ layers_to_train.append('bn1')
for name, parameter in backbone.named_parameters():
if all([not name.startswith(layer) for layer in layers_to_train]):
parameter.requires_grad_(False)
@@ -152,7 +153,6 @@
assert 0 <= trainable_layers <= num_stages
freeze_before = num_stages if trainable_layers == 0 else stage_indices[num_stages - trainable_layers]
- # freeze layers only if pretrained backbone is used
for b in backbone[:freeze_before]:
for parameter in b.parameters():
parameter.requires_grad_(False)
| {"golden_diff": "diff --git a/torchvision/models/detection/backbone_utils.py b/torchvision/models/detection/backbone_utils.py\n--- a/torchvision/models/detection/backbone_utils.py\n+++ b/torchvision/models/detection/backbone_utils.py\n@@ -96,7 +96,8 @@\n # select layers that wont be frozen\n assert 0 <= trainable_layers <= 5\n layers_to_train = ['layer4', 'layer3', 'layer2', 'layer1', 'conv1'][:trainable_layers]\n- # freeze layers only if pretrained backbone is used\n+ if trainable_layers == 5:\n+ layers_to_train.append('bn1')\n for name, parameter in backbone.named_parameters():\n if all([not name.startswith(layer) for layer in layers_to_train]):\n parameter.requires_grad_(False)\n@@ -152,7 +153,6 @@\n assert 0 <= trainable_layers <= num_stages\n freeze_before = num_stages if trainable_layers == 0 else stage_indices[num_stages - trainable_layers]\n \n- # freeze layers only if pretrained backbone is used\n for b in backbone[:freeze_before]:\n for parameter in b.parameters():\n parameter.requires_grad_(False)\n", "issue": "ResNet FPN backbone\n## \ud83d\udc1b Bug\r\n\r\nIn https://github.com/pytorch/vision/blob/master/torchvision/models/detection/backbone_utils.py#L99, it says\r\n\r\n> freeze layers only if pretrained backbone is used.\r\n\r\nTherefore, if pretrained backbone is NOT used, any layers should be frozen.\r\n\r\nBut when I run:\r\n```\r\nfrom torchvision.models.detection.backbone_utils import resnet_fpn_backbone\r\nresnet_fpn_backbone('resnet50', pretrained=False, norm_layer=None, trainable_layers=5)\r\n```\r\nI can see that parameters `bn1.weight, bn1.bias, fc.weight, fc.bias` are frozen on the [line 101](https://github.com/pytorch/vision/blob/master/torchvision/models/detection/backbone_utils.py#L101).\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\nRun \r\n```\r\nfrom torchvision.models.detection.backbone_utils import resnet_fpn_backbone\r\nresnet_fpn_backbone('resnet50', pretrained=False, norm_layer=None, trainable_layers=5)\r\n```\r\n\r\nand see that on the [line 101](https://github.com/pytorch/vision/blob/master/torchvision/models/detection/backbone_utils.py#L101) parameters `bn1.weight, bn1.bias, fc.weight, fc.bias` are frozen despite that `trainable_layers=5` which should correspond to no frozen layers as written on the [line 83](https://github.com/pytorch/vision/blob/master/torchvision/models/detection/backbone_utils.py#L83).\r\n\r\n\r\n## Expected behavior\r\n\r\nNo layers are frozen.\r\n\r\n## Environment\r\n\r\nPyTorch version: 1.7.0\r\nIs debug build: True\r\nCUDA used to build PyTorch: None\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: macOS 10.15.7 (x86_64)\r\nGCC version: Could not collect\r\nClang version: 12.0.0 (clang-1200.0.32.29)\r\nCMake version: version 3.18.2\r\n\r\nPython version: 3.8 (64-bit runtime)\r\nIs CUDA available: False\r\nCUDA runtime version: No CUDA\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.19.2\r\n[pip3] pytorch-lightning==1.1.1\r\n[pip3] pytorch-lightning-bolts==0.2.5rc1\r\n[pip3] torch==1.7.0\r\n[pip3] torchvision==0.8.1\r\n[conda] blas 1.0 mkl\r\n[conda] mkl 2019.5 281 conda-forge\r\n[conda] mkl-service 2.3.0 py38h0b31af3_0 conda-forge\r\n[conda] mkl_fft 1.2.0 py38hc64f4ea_0\r\n[conda] mkl_random 1.1.1 py38h959d312_0\r\n[conda] numpy 1.19.2 py38h456fd55_0\r\n[conda] numpy-base 1.19.2 py38hcfb5961_0\r\n[conda] pytorch 1.7.0 py3.8_0 
pytorch\r\n[conda] pytorch-lightning 1.1.1 pyhd8ed1ab_0 conda-forge\r\n[conda] pytorch-lightning-bolts 0.2.5rc1 pypi_0 pypi\r\n[conda] torchvision 0.8.1 py38_cpu pytorch\r\n\r\n\n", "before_files": [{"content": "import warnings\nfrom torch import nn\nfrom torchvision.ops.feature_pyramid_network import FeaturePyramidNetwork, LastLevelMaxPool\n\nfrom torchvision.ops import misc as misc_nn_ops\nfrom .._utils import IntermediateLayerGetter\nfrom .. import mobilenet\nfrom .. import resnet\n\n\nclass BackboneWithFPN(nn.Module):\n \"\"\"\n Adds a FPN on top of a model.\n Internally, it uses torchvision.models._utils.IntermediateLayerGetter to\n extract a submodel that returns the feature maps specified in return_layers.\n The same limitations of IntermediatLayerGetter apply here.\n Args:\n backbone (nn.Module)\n return_layers (Dict[name, new_name]): a dict containing the names\n of the modules for which the activations will be returned as\n the key of the dict, and the value of the dict is the name\n of the returned activation (which the user can specify).\n in_channels_list (List[int]): number of channels for each feature map\n that is returned, in the order they are present in the OrderedDict\n out_channels (int): number of channels in the FPN.\n Attributes:\n out_channels (int): the number of channels in the FPN\n \"\"\"\n def __init__(self, backbone, return_layers, in_channels_list, out_channels, extra_blocks=None):\n super(BackboneWithFPN, self).__init__()\n\n if extra_blocks is None:\n extra_blocks = LastLevelMaxPool()\n\n self.body = IntermediateLayerGetter(backbone, return_layers=return_layers)\n self.fpn = FeaturePyramidNetwork(\n in_channels_list=in_channels_list,\n out_channels=out_channels,\n extra_blocks=extra_blocks,\n )\n self.out_channels = out_channels\n\n def forward(self, x):\n x = self.body(x)\n x = self.fpn(x)\n return x\n\n\ndef resnet_fpn_backbone(\n backbone_name,\n pretrained,\n norm_layer=misc_nn_ops.FrozenBatchNorm2d,\n trainable_layers=3,\n returned_layers=None,\n extra_blocks=None\n):\n \"\"\"\n Constructs a specified ResNet backbone with FPN on top. Freezes the specified number of layers in the backbone.\n\n Examples::\n\n >>> from torchvision.models.detection.backbone_utils import resnet_fpn_backbone\n >>> backbone = resnet_fpn_backbone('resnet50', pretrained=True, trainable_layers=3)\n >>> # get some dummy image\n >>> x = torch.rand(1,3,64,64)\n >>> # compute the output\n >>> output = backbone(x)\n >>> print([(k, v.shape) for k, v in output.items()])\n >>> # returns\n >>> [('0', torch.Size([1, 256, 16, 16])),\n >>> ('1', torch.Size([1, 256, 8, 8])),\n >>> ('2', torch.Size([1, 256, 4, 4])),\n >>> ('3', torch.Size([1, 256, 2, 2])),\n >>> ('pool', torch.Size([1, 256, 1, 1]))]\n\n Args:\n backbone_name (string): resnet architecture. Possible values are 'ResNet', 'resnet18', 'resnet34', 'resnet50',\n 'resnet101', 'resnet152', 'resnext50_32x4d', 'resnext101_32x8d', 'wide_resnet50_2', 'wide_resnet101_2'\n pretrained (bool): If True, returns a model with backbone pre-trained on Imagenet\n norm_layer (torchvision.ops): it is recommended to use the default value. For details visit:\n (https://github.com/facebookresearch/maskrcnn-benchmark/issues/267)\n trainable_layers (int): number of trainable (not frozen) resnet layers starting from final block.\n Valid values are between 0 and 5, with 5 meaning all backbone layers are trainable.\n returned_layers (list of int): The layers of the network to return. 
Each entry must be in ``[1, 4]``.\n By default all layers are returned.\n extra_blocks (ExtraFPNBlock or None): if provided, extra operations will\n be performed. It is expected to take the fpn features, the original\n features and the names of the original features as input, and returns\n a new list of feature maps and their corresponding names. By\n default a ``LastLevelMaxPool`` is used.\n \"\"\"\n backbone = resnet.__dict__[backbone_name](\n pretrained=pretrained,\n norm_layer=norm_layer)\n\n # select layers that wont be frozen\n assert 0 <= trainable_layers <= 5\n layers_to_train = ['layer4', 'layer3', 'layer2', 'layer1', 'conv1'][:trainable_layers]\n # freeze layers only if pretrained backbone is used\n for name, parameter in backbone.named_parameters():\n if all([not name.startswith(layer) for layer in layers_to_train]):\n parameter.requires_grad_(False)\n\n if extra_blocks is None:\n extra_blocks = LastLevelMaxPool()\n\n if returned_layers is None:\n returned_layers = [1, 2, 3, 4]\n assert min(returned_layers) > 0 and max(returned_layers) < 5\n return_layers = {f'layer{k}': str(v) for v, k in enumerate(returned_layers)}\n\n in_channels_stage2 = backbone.inplanes // 8\n in_channels_list = [in_channels_stage2 * 2 ** (i - 1) for i in returned_layers]\n out_channels = 256\n return BackboneWithFPN(backbone, return_layers, in_channels_list, out_channels, extra_blocks=extra_blocks)\n\n\ndef _validate_trainable_layers(pretrained, trainable_backbone_layers, max_value, default_value):\n # dont freeze any layers if pretrained model or backbone is not used\n if not pretrained:\n if trainable_backbone_layers is not None:\n warnings.warn(\n \"Changing trainable_backbone_layers has not effect if \"\n \"neither pretrained nor pretrained_backbone have been set to True, \"\n \"falling back to trainable_backbone_layers={} so that all layers are trainable\".format(max_value))\n trainable_backbone_layers = max_value\n\n # by default freeze first blocks\n if trainable_backbone_layers is None:\n trainable_backbone_layers = default_value\n assert 0 <= trainable_backbone_layers <= max_value\n return trainable_backbone_layers\n\n\ndef mobilenet_backbone(\n backbone_name,\n pretrained,\n fpn,\n norm_layer=misc_nn_ops.FrozenBatchNorm2d,\n trainable_layers=2,\n returned_layers=None,\n extra_blocks=None\n):\n backbone = mobilenet.__dict__[backbone_name](pretrained=pretrained, norm_layer=norm_layer).features\n\n # Gather the indices of blocks which are strided. 
These are the locations of C1, ..., Cn-1 blocks.\n # The first and last blocks are always included because they are the C0 (conv1) and Cn.\n stage_indices = [0] + [i for i, b in enumerate(backbone) if getattr(b, \"_is_cn\", False)] + [len(backbone) - 1]\n num_stages = len(stage_indices)\n\n # find the index of the layer from which we wont freeze\n assert 0 <= trainable_layers <= num_stages\n freeze_before = num_stages if trainable_layers == 0 else stage_indices[num_stages - trainable_layers]\n\n # freeze layers only if pretrained backbone is used\n for b in backbone[:freeze_before]:\n for parameter in b.parameters():\n parameter.requires_grad_(False)\n\n out_channels = 256\n if fpn:\n if extra_blocks is None:\n extra_blocks = LastLevelMaxPool()\n\n if returned_layers is None:\n returned_layers = [num_stages - 2, num_stages - 1]\n assert min(returned_layers) >= 0 and max(returned_layers) < num_stages\n return_layers = {f'{stage_indices[k]}': str(v) for v, k in enumerate(returned_layers)}\n\n in_channels_list = [backbone[stage_indices[i]].out_channels for i in returned_layers]\n return BackboneWithFPN(backbone, return_layers, in_channels_list, out_channels, extra_blocks=extra_blocks)\n else:\n m = nn.Sequential(\n backbone,\n # depthwise linear combination of channels to reduce their size\n nn.Conv2d(backbone[-1].out_channels, out_channels, 1),\n )\n m.out_channels = out_channels\n return m\n", "path": "torchvision/models/detection/backbone_utils.py"}], "after_files": [{"content": "import warnings\nfrom torch import nn\nfrom torchvision.ops.feature_pyramid_network import FeaturePyramidNetwork, LastLevelMaxPool\n\nfrom torchvision.ops import misc as misc_nn_ops\nfrom .._utils import IntermediateLayerGetter\nfrom .. import mobilenet\nfrom .. import resnet\n\n\nclass BackboneWithFPN(nn.Module):\n \"\"\"\n Adds a FPN on top of a model.\n Internally, it uses torchvision.models._utils.IntermediateLayerGetter to\n extract a submodel that returns the feature maps specified in return_layers.\n The same limitations of IntermediatLayerGetter apply here.\n Args:\n backbone (nn.Module)\n return_layers (Dict[name, new_name]): a dict containing the names\n of the modules for which the activations will be returned as\n the key of the dict, and the value of the dict is the name\n of the returned activation (which the user can specify).\n in_channels_list (List[int]): number of channels for each feature map\n that is returned, in the order they are present in the OrderedDict\n out_channels (int): number of channels in the FPN.\n Attributes:\n out_channels (int): the number of channels in the FPN\n \"\"\"\n def __init__(self, backbone, return_layers, in_channels_list, out_channels, extra_blocks=None):\n super(BackboneWithFPN, self).__init__()\n\n if extra_blocks is None:\n extra_blocks = LastLevelMaxPool()\n\n self.body = IntermediateLayerGetter(backbone, return_layers=return_layers)\n self.fpn = FeaturePyramidNetwork(\n in_channels_list=in_channels_list,\n out_channels=out_channels,\n extra_blocks=extra_blocks,\n )\n self.out_channels = out_channels\n\n def forward(self, x):\n x = self.body(x)\n x = self.fpn(x)\n return x\n\n\ndef resnet_fpn_backbone(\n backbone_name,\n pretrained,\n norm_layer=misc_nn_ops.FrozenBatchNorm2d,\n trainable_layers=3,\n returned_layers=None,\n extra_blocks=None\n):\n \"\"\"\n Constructs a specified ResNet backbone with FPN on top. 
Freezes the specified number of layers in the backbone.\n\n Examples::\n\n >>> from torchvision.models.detection.backbone_utils import resnet_fpn_backbone\n >>> backbone = resnet_fpn_backbone('resnet50', pretrained=True, trainable_layers=3)\n >>> # get some dummy image\n >>> x = torch.rand(1,3,64,64)\n >>> # compute the output\n >>> output = backbone(x)\n >>> print([(k, v.shape) for k, v in output.items()])\n >>> # returns\n >>> [('0', torch.Size([1, 256, 16, 16])),\n >>> ('1', torch.Size([1, 256, 8, 8])),\n >>> ('2', torch.Size([1, 256, 4, 4])),\n >>> ('3', torch.Size([1, 256, 2, 2])),\n >>> ('pool', torch.Size([1, 256, 1, 1]))]\n\n Args:\n backbone_name (string): resnet architecture. Possible values are 'ResNet', 'resnet18', 'resnet34', 'resnet50',\n 'resnet101', 'resnet152', 'resnext50_32x4d', 'resnext101_32x8d', 'wide_resnet50_2', 'wide_resnet101_2'\n pretrained (bool): If True, returns a model with backbone pre-trained on Imagenet\n norm_layer (torchvision.ops): it is recommended to use the default value. For details visit:\n (https://github.com/facebookresearch/maskrcnn-benchmark/issues/267)\n trainable_layers (int): number of trainable (not frozen) resnet layers starting from final block.\n Valid values are between 0 and 5, with 5 meaning all backbone layers are trainable.\n returned_layers (list of int): The layers of the network to return. Each entry must be in ``[1, 4]``.\n By default all layers are returned.\n extra_blocks (ExtraFPNBlock or None): if provided, extra operations will\n be performed. It is expected to take the fpn features, the original\n features and the names of the original features as input, and returns\n a new list of feature maps and their corresponding names. By\n default a ``LastLevelMaxPool`` is used.\n \"\"\"\n backbone = resnet.__dict__[backbone_name](\n pretrained=pretrained,\n norm_layer=norm_layer)\n\n # select layers that wont be frozen\n assert 0 <= trainable_layers <= 5\n layers_to_train = ['layer4', 'layer3', 'layer2', 'layer1', 'conv1'][:trainable_layers]\n if trainable_layers == 5:\n layers_to_train.append('bn1')\n for name, parameter in backbone.named_parameters():\n if all([not name.startswith(layer) for layer in layers_to_train]):\n parameter.requires_grad_(False)\n\n if extra_blocks is None:\n extra_blocks = LastLevelMaxPool()\n\n if returned_layers is None:\n returned_layers = [1, 2, 3, 4]\n assert min(returned_layers) > 0 and max(returned_layers) < 5\n return_layers = {f'layer{k}': str(v) for v, k in enumerate(returned_layers)}\n\n in_channels_stage2 = backbone.inplanes // 8\n in_channels_list = [in_channels_stage2 * 2 ** (i - 1) for i in returned_layers]\n out_channels = 256\n return BackboneWithFPN(backbone, return_layers, in_channels_list, out_channels, extra_blocks=extra_blocks)\n\n\ndef _validate_trainable_layers(pretrained, trainable_backbone_layers, max_value, default_value):\n # dont freeze any layers if pretrained model or backbone is not used\n if not pretrained:\n if trainable_backbone_layers is not None:\n warnings.warn(\n \"Changing trainable_backbone_layers has not effect if \"\n \"neither pretrained nor pretrained_backbone have been set to True, \"\n \"falling back to trainable_backbone_layers={} so that all layers are trainable\".format(max_value))\n trainable_backbone_layers = max_value\n\n # by default freeze first blocks\n if trainable_backbone_layers is None:\n trainable_backbone_layers = default_value\n assert 0 <= trainable_backbone_layers <= max_value\n return trainable_backbone_layers\n\n\ndef mobilenet_backbone(\n 
backbone_name,\n pretrained,\n fpn,\n norm_layer=misc_nn_ops.FrozenBatchNorm2d,\n trainable_layers=2,\n returned_layers=None,\n extra_blocks=None\n):\n backbone = mobilenet.__dict__[backbone_name](pretrained=pretrained, norm_layer=norm_layer).features\n\n # Gather the indices of blocks which are strided. These are the locations of C1, ..., Cn-1 blocks.\n # The first and last blocks are always included because they are the C0 (conv1) and Cn.\n stage_indices = [0] + [i for i, b in enumerate(backbone) if getattr(b, \"_is_cn\", False)] + [len(backbone) - 1]\n num_stages = len(stage_indices)\n\n # find the index of the layer from which we wont freeze\n assert 0 <= trainable_layers <= num_stages\n freeze_before = num_stages if trainable_layers == 0 else stage_indices[num_stages - trainable_layers]\n\n for b in backbone[:freeze_before]:\n for parameter in b.parameters():\n parameter.requires_grad_(False)\n\n out_channels = 256\n if fpn:\n if extra_blocks is None:\n extra_blocks = LastLevelMaxPool()\n\n if returned_layers is None:\n returned_layers = [num_stages - 2, num_stages - 1]\n assert min(returned_layers) >= 0 and max(returned_layers) < num_stages\n return_layers = {f'{stage_indices[k]}': str(v) for v, k in enumerate(returned_layers)}\n\n in_channels_list = [backbone[stage_indices[i]].out_channels for i in returned_layers]\n return BackboneWithFPN(backbone, return_layers, in_channels_list, out_channels, extra_blocks=extra_blocks)\n else:\n m = nn.Sequential(\n backbone,\n # depthwise linear combination of channels to reduce their size\n nn.Conv2d(backbone[-1].out_channels, out_channels, 1),\n )\n m.out_channels = out_channels\n return m\n", "path": "torchvision/models/detection/backbone_utils.py"}]} | 3,493 | 263 |
gh_patches_debug_20797 | rasdani/github-patches | git_diff | ansible-collections__community.general-6370 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't use CreateBiosConfigJob command from idrac_redfish_command module
### Summary
I'm trying to create a job for BIOS configuration.
In the documentation example, the specified resource_id is System.Embedded.1:
https://github.com/ansible-collections/community.general/blob/926c0a71d03a447580b6a034432608c632477059/plugins/modules/remote_management/redfish/idrac_redfish_command.py#L68
When I use this resource, I get an error message:
> Manager resource System.Embedded.1 not found
explained by:
https://github.com/ansible-collections/community.general/blob/926c0a71d03a447580b6a034432608c632477059/plugins/modules/remote_management/redfish/idrac_redfish_command.py#L201
And of course, if I use iDRAC.Embedded.1, which is the actual manager resource ID, the task fails with:
> System resource iDRAC.Embedded.1 not found
explained by:
https://github.com/ansible-collections/community.general/blob/926c0a71d03a447580b6a034432608c632477059/plugins/modules/remote_management/redfish/idrac_redfish_command.py#L194
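
In other words, `CreateBiosConfigJob` needs both a Systems and a Managers resource to resolve, but a single `resource_id` value can only match one of them. A rough sketch of the workaround applied in the diff further down in this record (shown here only for orientation): relax the `resource_id` match for the Systems lookup so the value can name the Manager instead.

```python
# Sketch (mirrors the accompanying patch): skip strict resource_id matching
# while locating the System, then restore it so a resource_id such as
# "iDRAC.Embedded.1" is used to match the Manager.
rf_utils.data_modification = False
result = rf_utils._find_systems_resource()
rf_utils.data_modification = True
```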
### Issue Type
Bug Report
### Component Name
idrac_redfish_command
### Ansible Version
```console (paste below)
$ ansible --version
ansible 2.10.5
config file = /home/pyfontan/.ansible.cfg
configured module search path = ['/home/pyfontan/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/pyfontan/virtualenv/ansible/lib/python3.8/site-packages/ansible
executable location = /home/pyfontan/virtualenv/ansible/bin/ansible
python version = 3.8.6 (default, Jan 27 2021, 15:42:20) [GCC 10.2.0]
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
```
### OS / Environment
iDRAC 4.32.20.00 on PowerEdge C6525
or
iDRAC 3.21.26.22 on PowerEdge R740xd
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Create BIOS configuration job (schedule BIOS setting update)
community.general.idrac_redfish_command:
category: Systems
command: CreateBiosConfigJob
#resource_id: "{{asset_system_resource_name}}"
#resource_id: "System.Embedded.1"
resource_id: "iDRAC.Embedded.1"
baseuri: "{{ asset.manager.ip }}"
username: "{{ asset.manager.user }}"
password: "{{ asset.manager.password }}"
```
### Expected Results
I expect a BIOS configuration job to be created.
### Actual Results
```console (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/modules/idrac_redfish_command.py`
Content:
```
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # Copyright (c) 2018 Dell EMC Inc.
5 # GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
6 # SPDX-License-Identifier: GPL-3.0-or-later
7
8 from __future__ import absolute_import, division, print_function
9 __metaclass__ = type
10
11 DOCUMENTATION = '''
12 ---
13 module: idrac_redfish_command
14 short_description: Manages Out-Of-Band controllers using iDRAC OEM Redfish APIs
15 description:
16 - Builds Redfish URIs locally and sends them to remote OOB controllers to
17 perform an action.
18 - For use with Dell iDRAC operations that require Redfish OEM extensions.
19 extends_documentation_fragment:
20 - community.general.attributes
21 attributes:
22 check_mode:
23 support: none
24 diff_mode:
25 support: none
26 options:
27 category:
28 required: true
29 description:
30 - Category to execute on iDRAC.
31 type: str
32 command:
33 required: true
34 description:
35 - List of commands to execute on iDRAC.
36 type: list
37 elements: str
38 baseuri:
39 required: true
40 description:
41 - Base URI of iDRAC.
42 type: str
43 username:
44 description:
45 - Username for authenticating to iDRAC.
46 type: str
47 password:
48 description:
49 - Password for authenticating to iDRAC.
50 type: str
51 auth_token:
52 description:
53 - Security token for authenticating to iDRAC.
54 type: str
55 version_added: 2.3.0
56 timeout:
57 description:
58 - Timeout in seconds for HTTP requests to iDRAC.
59 default: 10
60 type: int
61 resource_id:
62 required: false
63 description:
64 - ID of the System, Manager or Chassis to modify.
65 type: str
66 version_added: '0.2.0'
67
68 author: "Jose Delarosa (@jose-delarosa)"
69 '''
70
71 EXAMPLES = '''
72 - name: Create BIOS configuration job (schedule BIOS setting update)
73 community.general.idrac_redfish_command:
74 category: Systems
75 command: CreateBiosConfigJob
76 resource_id: System.Embedded.1
77 baseuri: "{{ baseuri }}"
78 username: "{{ username }}"
79 password: "{{ password }}"
80 '''
81
82 RETURN = '''
83 msg:
84 description: Message with action result or error description
85 returned: always
86 type: str
87 sample: "Action was successful"
88 return_values:
89 description: Dictionary containing command-specific response data from the action.
90 returned: on success
91 type: dict
92 version_added: 6.6.0
93 sample: {
94 "job_id": "/redfish/v1/Managers/iDRAC.Embedded.1/Jobs/JID_471269252011"
95 }
96 '''
97
98 import re
99 from ansible.module_utils.basic import AnsibleModule
100 from ansible_collections.community.general.plugins.module_utils.redfish_utils import RedfishUtils
101 from ansible.module_utils.common.text.converters import to_native
102
103
104 class IdracRedfishUtils(RedfishUtils):
105
106 def create_bios_config_job(self):
107 result = {}
108 key = "Bios"
109 jobs = "Jobs"
110
111 # Search for 'key' entry and extract URI from it
112 response = self.get_request(self.root_uri + self.systems_uris[0])
113 if response['ret'] is False:
114 return response
115 result['ret'] = True
116 data = response['data']
117
118 if key not in data:
119 return {'ret': False, 'msg': "Key %s not found" % key}
120
121 bios_uri = data[key]["@odata.id"]
122
123 # Extract proper URI
124 response = self.get_request(self.root_uri + bios_uri)
125 if response['ret'] is False:
126 return response
127 result['ret'] = True
128 data = response['data']
129 set_bios_attr_uri = data["@Redfish.Settings"]["SettingsObject"][
130 "@odata.id"]
131
132 payload = {"TargetSettingsURI": set_bios_attr_uri}
133 response = self.post_request(
134 self.root_uri + self.manager_uri + "/" + jobs, payload)
135 if response['ret'] is False:
136 return response
137
138 response_output = response['resp'].__dict__
139 job_id_full = response_output["headers"]["Location"]
140 job_id = re.search("JID_.+", job_id_full).group()
141 return {'ret': True, 'msg': "Config job %s created" % job_id, 'job_id': job_id_full}
142
143
144 CATEGORY_COMMANDS_ALL = {
145 "Systems": ["CreateBiosConfigJob"],
146 "Accounts": [],
147 "Manager": []
148 }
149
150
151 def main():
152 result = {}
153 return_values = {}
154 module = AnsibleModule(
155 argument_spec=dict(
156 category=dict(required=True),
157 command=dict(required=True, type='list', elements='str'),
158 baseuri=dict(required=True),
159 username=dict(),
160 password=dict(no_log=True),
161 auth_token=dict(no_log=True),
162 timeout=dict(type='int', default=10),
163 resource_id=dict()
164 ),
165 required_together=[
166 ('username', 'password'),
167 ],
168 required_one_of=[
169 ('username', 'auth_token'),
170 ],
171 mutually_exclusive=[
172 ('username', 'auth_token'),
173 ],
174 supports_check_mode=False
175 )
176
177 category = module.params['category']
178 command_list = module.params['command']
179
180 # admin credentials used for authentication
181 creds = {'user': module.params['username'],
182 'pswd': module.params['password'],
183 'token': module.params['auth_token']}
184
185 # timeout
186 timeout = module.params['timeout']
187
188 # System, Manager or Chassis ID to modify
189 resource_id = module.params['resource_id']
190
191 # Build root URI
192 root_uri = "https://" + module.params['baseuri']
193 rf_utils = IdracRedfishUtils(creds, root_uri, timeout, module,
194 resource_id=resource_id, data_modification=True)
195
196 # Check that Category is valid
197 if category not in CATEGORY_COMMANDS_ALL:
198 module.fail_json(msg=to_native("Invalid Category '%s'. Valid Categories = %s" % (category, list(CATEGORY_COMMANDS_ALL.keys()))))
199
200 # Check that all commands are valid
201 for cmd in command_list:
202 # Fail if even one command given is invalid
203 if cmd not in CATEGORY_COMMANDS_ALL[category]:
204 module.fail_json(msg=to_native("Invalid Command '%s'. Valid Commands = %s" % (cmd, CATEGORY_COMMANDS_ALL[category])))
205
206 # Organize by Categories / Commands
207
208 if category == "Systems":
209 # execute only if we find a System resource
210 result = rf_utils._find_systems_resource()
211 if result['ret'] is False:
212 module.fail_json(msg=to_native(result['msg']))
213
214 for command in command_list:
215 if command == "CreateBiosConfigJob":
216 # execute only if we find a Managers resource
217 result = rf_utils._find_managers_resource()
218 if result['ret'] is False:
219 module.fail_json(msg=to_native(result['msg']))
220 result = rf_utils.create_bios_config_job()
221 if 'job_id' in result:
222 return_values['job_id'] = result['job_id']
223
224 # Return data back or fail with proper message
225 if result['ret'] is True:
226 del result['ret']
227 module.exit_json(changed=True, msg='Action was successful', return_values=return_values)
228 else:
229 module.fail_json(msg=to_native(result['msg']))
230
231
232 if __name__ == '__main__':
233 main()
234
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugins/modules/idrac_redfish_command.py b/plugins/modules/idrac_redfish_command.py
--- a/plugins/modules/idrac_redfish_command.py
+++ b/plugins/modules/idrac_redfish_command.py
@@ -199,7 +199,20 @@
if category == "Systems":
# execute only if we find a System resource
+ # NOTE: Currently overriding the usage of 'data_modification' due to
+ # how 'resource_id' is processed. In the case of CreateBiosConfigJob,
+ # we interact with BOTH systems and managers, so you currently cannot
+ # specify a single 'resource_id' to make both '_find_systems_resource'
+ # and '_find_managers_resource' return success. Since
+ # CreateBiosConfigJob doesn't use the matched 'resource_id' for a
+ # system regardless of what's specified, disabling the 'resource_id'
+ # inspection for the next call allows a specific manager to be
+ # specified with 'resource_id'. If we ever need to expand the input
+ # to inspect a specific system and manager in parallel, this will need
+ # updates.
+ rf_utils.data_modification = False
result = rf_utils._find_systems_resource()
+ rf_utils.data_modification = True
if result['ret'] is False:
module.fail_json(msg=to_native(result['msg']))
| {"golden_diff": "diff --git a/plugins/modules/idrac_redfish_command.py b/plugins/modules/idrac_redfish_command.py\n--- a/plugins/modules/idrac_redfish_command.py\n+++ b/plugins/modules/idrac_redfish_command.py\n@@ -199,7 +199,20 @@\n \n if category == \"Systems\":\n # execute only if we find a System resource\n+ # NOTE: Currently overriding the usage of 'data_modification' due to\n+ # how 'resource_id' is processed. In the case of CreateBiosConfigJob,\n+ # we interact with BOTH systems and managers, so you currently cannot\n+ # specify a single 'resource_id' to make both '_find_systems_resource'\n+ # and '_find_managers_resource' return success. Since\n+ # CreateBiosConfigJob doesn't use the matched 'resource_id' for a\n+ # system regardless of what's specified, disabling the 'resource_id'\n+ # inspection for the next call allows a specific manager to be\n+ # specified with 'resource_id'. If we ever need to expand the input\n+ # to inspect a specific system and manager in parallel, this will need\n+ # updates.\n+ rf_utils.data_modification = False\n result = rf_utils._find_systems_resource()\n+ rf_utils.data_modification = True\n if result['ret'] is False:\n module.fail_json(msg=to_native(result['msg']))\n", "issue": "Can't use CreateBiosConfigJob command from idrac_redfish_command module\n### Summary\n\nI'm trying to create a job for BIOS configuration.\r\n\r\nIn the documentation example, specified resource_id is System.Embedded.1\r\nhttps://github.com/ansible-collections/community.general/blob/926c0a71d03a447580b6a034432608c632477059/plugins/modules/remote_management/redfish/idrac_redfish_command.py#L68\r\n\r\nWhen i use this resource, i have an error message : \r\n> Manager resource System.Embedded.1 not found\r\n\r\nexplained by:\r\nhttps://github.com/ansible-collections/community.general/blob/926c0a71d03a447580b6a034432608c632477059/plugins/modules/remote_management/redfish/idrac_redfish_command.py#L201\r\n\r\nAnd of course, if I use iDRAC.embedded.1 which is the actual manager resource ID, task fails with :\r\n> System resource iDRAC.Embedded.1 not found\r\n\r\nexplained by :\r\nhttps://github.com/ansible-collections/community.general/blob/926c0a71d03a447580b6a034432608c632477059/plugins/modules/remote_management/redfish/idrac_redfish_command.py#L194\r\n\r\n\r\n\n\n### Issue Type\n\nBug Report\n\n### Component Name\n\nidrac_redfish_command\n\n### Ansible Version\n\n```console (paste below)\r\n$ ansible --version\r\nansible 2.10.5\r\n config file = /home/pyfontan/.ansible.cfg\r\n configured module search path = ['/home/pyfontan/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /home/pyfontan/virtualenv/ansible/lib/python3.8/site-packages/ansible\r\n executable location = /home/pyfontan/virtualenv/ansible/bin/ansible\r\n python version = 3.8.6 (default, Jan 27 2021, 15:42:20) [GCC 10.2.0]\r\n\r\n```\r\n\n\n### Configuration\n\n```console (paste below)\r\n$ ansible-config dump --only-changed\r\n\r\n```\r\n\n\n### OS / Environment\n\niDRAC 4.32.20.00 on PowerEdge C6525\r\nor\r\niDRAC 3.21.26.22 on PowerEdge R740xd\n\n### Steps to Reproduce\n\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n- name: Create BIOS configuration job (schedule BIOS setting update)\r\n community.general.idrac_redfish_command:\r\n category: Systems\r\n command: CreateBiosConfigJob\r\n #resource_id: \"{{asset_system_resource_name}}\"\r\n #resource_id: \"System.Embedded.1\"\r\n resource_id: \"iDRAC.Embedded.1\"\r\n baseuri: \"{{ 
asset.manager.ip }}\"\r\n username: \"{{ asset.manager.user }}\"\r\n password: \"{{ asset.manager.password }}\"\r\n ```\r\n\n\n### Expected Results\n\nI expect to have a BIOS Job configuration created.\n\n### Actual Results\n\n```console (paste below)\r\n\r\n```\r\n\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright (c) 2018 Dell EMC Inc.\n# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\nDOCUMENTATION = '''\n---\nmodule: idrac_redfish_command\nshort_description: Manages Out-Of-Band controllers using iDRAC OEM Redfish APIs\ndescription:\n - Builds Redfish URIs locally and sends them to remote OOB controllers to\n perform an action.\n - For use with Dell iDRAC operations that require Redfish OEM extensions.\nextends_documentation_fragment:\n - community.general.attributes\nattributes:\n check_mode:\n support: none\n diff_mode:\n support: none\noptions:\n category:\n required: true\n description:\n - Category to execute on iDRAC.\n type: str\n command:\n required: true\n description:\n - List of commands to execute on iDRAC.\n type: list\n elements: str\n baseuri:\n required: true\n description:\n - Base URI of iDRAC.\n type: str\n username:\n description:\n - Username for authenticating to iDRAC.\n type: str\n password:\n description:\n - Password for authenticating to iDRAC.\n type: str\n auth_token:\n description:\n - Security token for authenticating to iDRAC.\n type: str\n version_added: 2.3.0\n timeout:\n description:\n - Timeout in seconds for HTTP requests to iDRAC.\n default: 10\n type: int\n resource_id:\n required: false\n description:\n - ID of the System, Manager or Chassis to modify.\n type: str\n version_added: '0.2.0'\n\nauthor: \"Jose Delarosa (@jose-delarosa)\"\n'''\n\nEXAMPLES = '''\n - name: Create BIOS configuration job (schedule BIOS setting update)\n community.general.idrac_redfish_command:\n category: Systems\n command: CreateBiosConfigJob\n resource_id: System.Embedded.1\n baseuri: \"{{ baseuri }}\"\n username: \"{{ username }}\"\n password: \"{{ password }}\"\n'''\n\nRETURN = '''\nmsg:\n description: Message with action result or error description\n returned: always\n type: str\n sample: \"Action was successful\"\nreturn_values:\n description: Dictionary containing command-specific response data from the action.\n returned: on success\n type: dict\n version_added: 6.6.0\n sample: {\n \"job_id\": \"/redfish/v1/Managers/iDRAC.Embedded.1/Jobs/JID_471269252011\"\n }\n'''\n\nimport re\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible_collections.community.general.plugins.module_utils.redfish_utils import RedfishUtils\nfrom ansible.module_utils.common.text.converters import to_native\n\n\nclass IdracRedfishUtils(RedfishUtils):\n\n def create_bios_config_job(self):\n result = {}\n key = \"Bios\"\n jobs = \"Jobs\"\n\n # Search for 'key' entry and extract URI from it\n response = self.get_request(self.root_uri + self.systems_uris[0])\n if response['ret'] is False:\n return response\n result['ret'] = True\n data = response['data']\n\n if key not in data:\n return {'ret': False, 'msg': \"Key %s not found\" % key}\n\n bios_uri = data[key][\"@odata.id\"]\n\n # Extract proper URI\n response = self.get_request(self.root_uri + bios_uri)\n if 
response['ret'] is False:\n return response\n result['ret'] = True\n data = response['data']\n set_bios_attr_uri = data[\"@Redfish.Settings\"][\"SettingsObject\"][\n \"@odata.id\"]\n\n payload = {\"TargetSettingsURI\": set_bios_attr_uri}\n response = self.post_request(\n self.root_uri + self.manager_uri + \"/\" + jobs, payload)\n if response['ret'] is False:\n return response\n\n response_output = response['resp'].__dict__\n job_id_full = response_output[\"headers\"][\"Location\"]\n job_id = re.search(\"JID_.+\", job_id_full).group()\n return {'ret': True, 'msg': \"Config job %s created\" % job_id, 'job_id': job_id_full}\n\n\nCATEGORY_COMMANDS_ALL = {\n \"Systems\": [\"CreateBiosConfigJob\"],\n \"Accounts\": [],\n \"Manager\": []\n}\n\n\ndef main():\n result = {}\n return_values = {}\n module = AnsibleModule(\n argument_spec=dict(\n category=dict(required=True),\n command=dict(required=True, type='list', elements='str'),\n baseuri=dict(required=True),\n username=dict(),\n password=dict(no_log=True),\n auth_token=dict(no_log=True),\n timeout=dict(type='int', default=10),\n resource_id=dict()\n ),\n required_together=[\n ('username', 'password'),\n ],\n required_one_of=[\n ('username', 'auth_token'),\n ],\n mutually_exclusive=[\n ('username', 'auth_token'),\n ],\n supports_check_mode=False\n )\n\n category = module.params['category']\n command_list = module.params['command']\n\n # admin credentials used for authentication\n creds = {'user': module.params['username'],\n 'pswd': module.params['password'],\n 'token': module.params['auth_token']}\n\n # timeout\n timeout = module.params['timeout']\n\n # System, Manager or Chassis ID to modify\n resource_id = module.params['resource_id']\n\n # Build root URI\n root_uri = \"https://\" + module.params['baseuri']\n rf_utils = IdracRedfishUtils(creds, root_uri, timeout, module,\n resource_id=resource_id, data_modification=True)\n\n # Check that Category is valid\n if category not in CATEGORY_COMMANDS_ALL:\n module.fail_json(msg=to_native(\"Invalid Category '%s'. Valid Categories = %s\" % (category, list(CATEGORY_COMMANDS_ALL.keys()))))\n\n # Check that all commands are valid\n for cmd in command_list:\n # Fail if even one command given is invalid\n if cmd not in CATEGORY_COMMANDS_ALL[category]:\n module.fail_json(msg=to_native(\"Invalid Command '%s'. 
Valid Commands = %s\" % (cmd, CATEGORY_COMMANDS_ALL[category])))\n\n # Organize by Categories / Commands\n\n if category == \"Systems\":\n # execute only if we find a System resource\n result = rf_utils._find_systems_resource()\n if result['ret'] is False:\n module.fail_json(msg=to_native(result['msg']))\n\n for command in command_list:\n if command == \"CreateBiosConfigJob\":\n # execute only if we find a Managers resource\n result = rf_utils._find_managers_resource()\n if result['ret'] is False:\n module.fail_json(msg=to_native(result['msg']))\n result = rf_utils.create_bios_config_job()\n if 'job_id' in result:\n return_values['job_id'] = result['job_id']\n\n # Return data back or fail with proper message\n if result['ret'] is True:\n del result['ret']\n module.exit_json(changed=True, msg='Action was successful', return_values=return_values)\n else:\n module.fail_json(msg=to_native(result['msg']))\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/idrac_redfish_command.py"}], "after_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright (c) 2018 Dell EMC Inc.\n# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\nDOCUMENTATION = '''\n---\nmodule: idrac_redfish_command\nshort_description: Manages Out-Of-Band controllers using iDRAC OEM Redfish APIs\ndescription:\n - Builds Redfish URIs locally and sends them to remote OOB controllers to\n perform an action.\n - For use with Dell iDRAC operations that require Redfish OEM extensions.\nextends_documentation_fragment:\n - community.general.attributes\nattributes:\n check_mode:\n support: none\n diff_mode:\n support: none\noptions:\n category:\n required: true\n description:\n - Category to execute on iDRAC.\n type: str\n command:\n required: true\n description:\n - List of commands to execute on iDRAC.\n type: list\n elements: str\n baseuri:\n required: true\n description:\n - Base URI of iDRAC.\n type: str\n username:\n description:\n - Username for authenticating to iDRAC.\n type: str\n password:\n description:\n - Password for authenticating to iDRAC.\n type: str\n auth_token:\n description:\n - Security token for authenticating to iDRAC.\n type: str\n version_added: 2.3.0\n timeout:\n description:\n - Timeout in seconds for HTTP requests to iDRAC.\n default: 10\n type: int\n resource_id:\n required: false\n description:\n - ID of the System, Manager or Chassis to modify.\n type: str\n version_added: '0.2.0'\n\nauthor: \"Jose Delarosa (@jose-delarosa)\"\n'''\n\nEXAMPLES = '''\n - name: Create BIOS configuration job (schedule BIOS setting update)\n community.general.idrac_redfish_command:\n category: Systems\n command: CreateBiosConfigJob\n resource_id: System.Embedded.1\n baseuri: \"{{ baseuri }}\"\n username: \"{{ username }}\"\n password: \"{{ password }}\"\n'''\n\nRETURN = '''\nmsg:\n description: Message with action result or error description\n returned: always\n type: str\n sample: \"Action was successful\"\n'''\n\nimport re\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible_collections.community.general.plugins.module_utils.redfish_utils import RedfishUtils\nfrom ansible.module_utils.common.text.converters import to_native\n\n\nclass IdracRedfishUtils(RedfishUtils):\n\n def create_bios_config_job(self):\n result = {}\n key = \"Bios\"\n jobs = \"Jobs\"\n\n # Search for 'key' 
entry and extract URI from it\n response = self.get_request(self.root_uri + self.systems_uris[0])\n if response['ret'] is False:\n return response\n result['ret'] = True\n data = response['data']\n\n if key not in data:\n return {'ret': False, 'msg': \"Key %s not found\" % key}\n\n bios_uri = data[key][\"@odata.id\"]\n\n # Extract proper URI\n response = self.get_request(self.root_uri + bios_uri)\n if response['ret'] is False:\n return response\n result['ret'] = True\n data = response['data']\n set_bios_attr_uri = data[\"@Redfish.Settings\"][\"SettingsObject\"][\n \"@odata.id\"]\n\n payload = {\"TargetSettingsURI\": set_bios_attr_uri}\n response = self.post_request(\n self.root_uri + self.manager_uri + \"/\" + jobs, payload)\n if response['ret'] is False:\n return response\n\n response_output = response['resp'].__dict__\n job_id = response_output[\"headers\"][\"Location\"]\n job_id = re.search(\"JID_.+\", job_id).group()\n # Currently not passing job_id back to user but patch is coming\n return {'ret': True, 'msg': \"Config job %s created\" % job_id}\n\n\nCATEGORY_COMMANDS_ALL = {\n \"Systems\": [\"CreateBiosConfigJob\"],\n \"Accounts\": [],\n \"Manager\": []\n}\n\n\ndef main():\n result = {}\n module = AnsibleModule(\n argument_spec=dict(\n category=dict(required=True),\n command=dict(required=True, type='list', elements='str'),\n baseuri=dict(required=True),\n username=dict(),\n password=dict(no_log=True),\n auth_token=dict(no_log=True),\n timeout=dict(type='int', default=10),\n resource_id=dict()\n ),\n required_together=[\n ('username', 'password'),\n ],\n required_one_of=[\n ('username', 'auth_token'),\n ],\n mutually_exclusive=[\n ('username', 'auth_token'),\n ],\n supports_check_mode=False\n )\n\n category = module.params['category']\n command_list = module.params['command']\n\n # admin credentials used for authentication\n creds = {'user': module.params['username'],\n 'pswd': module.params['password'],\n 'token': module.params['auth_token']}\n\n # timeout\n timeout = module.params['timeout']\n\n # System, Manager or Chassis ID to modify\n resource_id = module.params['resource_id']\n\n # Build root URI\n root_uri = \"https://\" + module.params['baseuri']\n rf_utils = IdracRedfishUtils(creds, root_uri, timeout, module,\n resource_id=resource_id, data_modification=True)\n\n # Check that Category is valid\n if category not in CATEGORY_COMMANDS_ALL:\n module.fail_json(msg=to_native(\"Invalid Category '%s'. Valid Categories = %s\" % (category, list(CATEGORY_COMMANDS_ALL.keys()))))\n\n # Check that all commands are valid\n for cmd in command_list:\n # Fail if even one command given is invalid\n if cmd not in CATEGORY_COMMANDS_ALL[category]:\n module.fail_json(msg=to_native(\"Invalid Command '%s'. Valid Commands = %s\" % (cmd, CATEGORY_COMMANDS_ALL[category])))\n\n # Organize by Categories / Commands\n\n if category == \"Systems\":\n # execute only if we find a System resource\n # NOTE: Currently overriding the usage of 'data_modification' due to\n # how 'resource_id' is processed. In the case of CreateBiosConfigJob,\n # we interact with BOTH systems and managers, so you currently cannot\n # specify a single 'resource_id' to make both '_find_systems_resource'\n # and '_find_managers_resource' return success. Since\n # CreateBiosConfigJob doesn't use the matched 'resource_id' for a\n # system regardless of what's specified, disabling the 'resource_id'\n # inspection for the next call allows a specific manager to be\n # specified with 'resource_id'. 
If we ever need to expand the input\n # to inspect a specific system and manager in parallel, this will need\n # updates.\n rf_utils.data_modification = False\n result = rf_utils._find_systems_resource()\n rf_utils.data_modification = True\n if result['ret'] is False:\n module.fail_json(msg=to_native(result['msg']))\n\n for command in command_list:\n if command == \"CreateBiosConfigJob\":\n # execute only if we find a Managers resource\n result = rf_utils._find_managers_resource()\n if result['ret'] is False:\n module.fail_json(msg=to_native(result['msg']))\n result = rf_utils.create_bios_config_job()\n\n # Return data back or fail with proper message\n if result['ret'] is True:\n del result['ret']\n module.exit_json(changed=True, msg='Action was successful')\n else:\n module.fail_json(msg=to_native(result['msg']))\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/idrac_redfish_command.py"}]} | 3,302 | 313 |
gh_patches_debug_1094 | rasdani/github-patches | git_diff | ESMCI__cime-4035 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cheyenne needs a module load python
Now that we require python 3.5+, we need to do a module load python on cheyenne.
The lack of this module load is responsible for a failure in `J_TestCreateNewcase.test_f_createnewcase_with_user_compset` if you run the whole `J_TestCreateNewcase` suite, and may cause other problems as well.
I'll get a fix in shortly.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/Tools/standard_script_setup.py`
Content:
```
1 """
2 Encapsulate the importing of python utils and logging setup, things
3 that every script should do.
4 """
5 # pylint: disable=unused-import
6
7 import sys, os
8 import __main__ as main
9 _CIMEROOT = os.path.join(os.path.dirname(os.path.abspath(__file__)), "..","..")
10 _LIB_DIR = os.path.join(_CIMEROOT, "scripts", "lib")
11 sys.path.append(_LIB_DIR)
12
13 # Important: Allows external tools to link up with CIME
14 os.environ["CIMEROOT"] = _CIMEROOT
15
16 import CIME.utils
17 CIME.utils.check_minimum_python_version(2, 7)
18 CIME.utils.stop_buffering_output()
19 import logging, argparse
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/Tools/standard_script_setup.py b/scripts/Tools/standard_script_setup.py
--- a/scripts/Tools/standard_script_setup.py
+++ b/scripts/Tools/standard_script_setup.py
@@ -14,6 +14,6 @@
os.environ["CIMEROOT"] = _CIMEROOT
import CIME.utils
-CIME.utils.check_minimum_python_version(2, 7)
+CIME.utils.check_minimum_python_version(3, 6)
CIME.utils.stop_buffering_output()
import logging, argparse
| {"golden_diff": "diff --git a/scripts/Tools/standard_script_setup.py b/scripts/Tools/standard_script_setup.py\n--- a/scripts/Tools/standard_script_setup.py\n+++ b/scripts/Tools/standard_script_setup.py\n@@ -14,6 +14,6 @@\n os.environ[\"CIMEROOT\"] = _CIMEROOT\n \n import CIME.utils\n-CIME.utils.check_minimum_python_version(2, 7)\n+CIME.utils.check_minimum_python_version(3, 6)\n CIME.utils.stop_buffering_output()\n import logging, argparse\n", "issue": "cheyenne needs a module load python\nNow that we require python 3.5+, we need to do a module load python on cheyenne.\r\n\r\nThe lack of this module load is responsible for a failure in `J_TestCreateNewcase.test_f_createnewcase_with_user_compset` if you run the whole `J_TestCreateNewcase` suite, and may cause other problems as well.\r\n\r\nI'll get a fix in shortly.\n", "before_files": [{"content": "\"\"\"\nEncapsulate the importing of python utils and logging setup, things\nthat every script should do.\n\"\"\"\n# pylint: disable=unused-import\n\nimport sys, os\nimport __main__ as main\n_CIMEROOT = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"..\",\"..\")\n_LIB_DIR = os.path.join(_CIMEROOT, \"scripts\", \"lib\")\nsys.path.append(_LIB_DIR)\n\n# Important: Allows external tools to link up with CIME\nos.environ[\"CIMEROOT\"] = _CIMEROOT\n\nimport CIME.utils\nCIME.utils.check_minimum_python_version(2, 7)\nCIME.utils.stop_buffering_output()\nimport logging, argparse\n", "path": "scripts/Tools/standard_script_setup.py"}], "after_files": [{"content": "\"\"\"\nEncapsulate the importing of python utils and logging setup, things\nthat every script should do.\n\"\"\"\n# pylint: disable=unused-import\n\nimport sys, os\nimport __main__ as main\n_CIMEROOT = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"..\",\"..\")\n_LIB_DIR = os.path.join(_CIMEROOT, \"scripts\", \"lib\")\nsys.path.append(_LIB_DIR)\n\n# Important: Allows external tools to link up with CIME\nos.environ[\"CIMEROOT\"] = _CIMEROOT\n\nimport CIME.utils\nCIME.utils.check_minimum_python_version(3, 6)\nCIME.utils.stop_buffering_output()\nimport logging, argparse\n", "path": "scripts/Tools/standard_script_setup.py"}]} | 538 | 115 |
gh_patches_debug_33119 | rasdani/github-patches | git_diff | nilearn__nilearn-4334 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Globbing should be advertised in doc/examples.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/00_tutorials/plot_nilearn_101.py`
Content:
```
1 """
2 Basic nilearn example: manipulating and looking at data
3 =======================================================
4
5 A simple example showing how to load an existing Nifti file and use
6 basic nilearn functionalities.
7 """
8
9 # Let us use a Nifti file that is shipped with nilearn
10 from nilearn.datasets import MNI152_FILE_PATH
11
12 # Note that the variable MNI152_FILE_PATH is just a path to a Nifti file
13 print(f"Path to MNI152 template: {MNI152_FILE_PATH!r}")
14
15 # %%
16 # A first step: looking at our data
17 # ----------------------------------
18 #
19 # Let's quickly plot this file:
20 from nilearn import plotting
21
22 plotting.plot_img(MNI152_FILE_PATH)
23
24 # %%
25 # This is not a very pretty plot. We just used the simplest possible
26 # code. There is a whole :ref:`section of the documentation <plotting>`
27 # on making prettier code.
28 #
29 # **Exercise**: Try plotting one of your own files. In the above,
30 # MNI152_FILE_PATH is nothing more than a string with a path pointing to
31 # a nifti image. You can replace it with a string pointing to a file on
32 # your disk. Note that it should be a 3D volume, and not a 4D volume.
33
34 # %%
35 # Simple image manipulation: smoothing
36 # ------------------------------------
37 #
38 # Let's use an image-smoothing function from nilearn:
39 # :func:`nilearn.image.smooth_img`
40 #
41 # Functions containing 'img' can take either a filename or an image as input.
42 #
43 # Here we give as inputs the image filename and the smoothing value in mm
44 from nilearn import image
45
46 smooth_anat_img = image.smooth_img(MNI152_FILE_PATH, fwhm=3)
47
48 # While we are giving a file name as input, the function returns
49 # an in-memory object:
50 smooth_anat_img
51
52 # %%
53 # This is an in-memory object. We can pass it to nilearn function, for
54 # instance to look at it
55 plotting.plot_img(smooth_anat_img)
56
57 # %%
58 # We could also pass it to the smoothing function
59 more_smooth_anat_img = image.smooth_img(smooth_anat_img, fwhm=3)
60 plotting.plot_img(more_smooth_anat_img)
61
62 # %%
63 # Saving results to a file
64 # -------------------------
65 #
66 # We can save any in-memory object as follows:
67 from pathlib import Path
68
69 output_dir = Path.cwd() / "results" / "plot_nilearn_101"
70 output_dir.mkdir(exist_ok=True, parents=True)
71 print(f"Output will be saved to: {output_dir}")
72 more_smooth_anat_img.to_filename(output_dir / "more_smooth_anat_img.nii.gz")
73
74 # %%
75 # Finally, calling plotting.show() is necessary to display the figure
76 # when running as a script outside IPython
77 plotting.show()
78
79 # %%
80 # |
81 #
82 # ______
83 #
84 # To recap, all the nilearn tools can take data as filenames or in-memory
85 # objects, and return brain volumes as in-memory objects. These can be
86 # passed on to other nilearn tools, or saved to disk.
87
88 # sphinx_gallery_dummy_images=1
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/00_tutorials/plot_nilearn_101.py b/examples/00_tutorials/plot_nilearn_101.py
--- a/examples/00_tutorials/plot_nilearn_101.py
+++ b/examples/00_tutorials/plot_nilearn_101.py
@@ -59,17 +59,49 @@
more_smooth_anat_img = image.smooth_img(smooth_anat_img, fwhm=3)
plotting.plot_img(more_smooth_anat_img)
+
+# %%
+# Globbing over multiple 3D volumes
+# ---------------------------------
+# Nilearn also supports reading multiple volumes at once,
+# using glob-style patterns.
+# For instance, we can smooth volumes from many subjects
+# at once and get a 4D image as output.
+
+# %%
+# First let's fetch Haxby dataset for subject 1 and 2
+from nilearn import datasets
+
+haxby = datasets.fetch_haxby(subjects=[1, 2])
+
+# %%
+# Now we can find the anatomical images from both
+# subjects using the `*` wildcard
+from pathlib import Path
+
+anats_all_subjects = (
+ Path(datasets.get_data_dirs()[0]) / "haxby2001" / "subj*" / "anat*"
+)
+
+# %%
+# Now we can smooth all the anatomical images at once
+anats_all_subjects_smooth = image.smooth_img(anats_all_subjects, fwhm=5)
+
+# %%
+# This is a 4D image containing one volume per subject
+print(anats_all_subjects_smooth.shape)
+
# %%
# Saving results to a file
# -------------------------
#
# We can save any in-memory object as follows:
-from pathlib import Path
-
output_dir = Path.cwd() / "results" / "plot_nilearn_101"
output_dir.mkdir(exist_ok=True, parents=True)
print(f"Output will be saved to: {output_dir}")
-more_smooth_anat_img.to_filename(output_dir / "more_smooth_anat_img.nii.gz")
+anats_all_subjects_smooth.to_filename(
+ output_dir / "anats_all_subjects_smooth.nii.gz"
+)
# %%
# Finally, calling plotting.show() is necessary to display the figure
@@ -81,8 +113,9 @@
#
# ______
#
-# To recap, all the nilearn tools can take data as filenames or in-memory
-# objects, and return brain volumes as in-memory objects. These can be
+# To recap, all the nilearn tools can take data as filenames or
+# glob-style patterns or in-memory objects, and return brain
+# volumes as in-memory objects. These can be
# passed on to other nilearn tools, or saved to disk.
# sphinx_gallery_dummy_images=1
| {"golden_diff": "diff --git a/examples/00_tutorials/plot_nilearn_101.py b/examples/00_tutorials/plot_nilearn_101.py\n--- a/examples/00_tutorials/plot_nilearn_101.py\n+++ b/examples/00_tutorials/plot_nilearn_101.py\n@@ -59,17 +59,49 @@\n more_smooth_anat_img = image.smooth_img(smooth_anat_img, fwhm=3)\n plotting.plot_img(more_smooth_anat_img)\n \n+\n+# %%\n+# Globbing over multiple 3D volumes\n+# ---------------------------------\n+# Nilearn also supports reading multiple volumes at once,\n+# using glob-style patterns.\n+# For instance, we can smooth volumes from many subjects\n+# at once and get a 4D image as output.\n+\n+# %%\n+# First let's fetch Haxby dataset for subject 1 and 2\n+from nilearn import datasets\n+\n+haxby = datasets.fetch_haxby(subjects=[1, 2])\n+\n+# %%\n+# Now we can find the anatomical images from both\n+# subjects using the `*` wildcard\n+from pathlib import Path\n+\n+anats_all_subjects = (\n+ Path(datasets.get_data_dirs()[0]) / \"haxby2001\" / \"subj*\" / \"anat*\"\n+)\n+\n+# %%\n+# Now we can smooth all the anatomical images at once\n+anats_all_subjects_smooth = image.smooth_img(anats_all_subjects, fwhm=5)\n+\n+# %%\n+# This is a 4D image containing one volume per subject\n+print(anats_all_subjects_smooth.shape)\n+\n # %%\n # Saving results to a file\n # -------------------------\n #\n # We can save any in-memory object as follows:\n-from pathlib import Path\n-\n output_dir = Path.cwd() / \"results\" / \"plot_nilearn_101\"\n output_dir.mkdir(exist_ok=True, parents=True)\n print(f\"Output will be saved to: {output_dir}\")\n-more_smooth_anat_img.to_filename(output_dir / \"more_smooth_anat_img.nii.gz\")\n+anats_all_subjects_smooth.to_filename(\n+ output_dir / \"anats_all_subjects_smooth.nii.gz\"\n+)\n \n # %%\n # Finally, calling plotting.show() is necessary to display the figure\n@@ -81,8 +113,9 @@\n #\n # ______\n #\n-# To recap, all the nilearn tools can take data as filenames or in-memory\n-# objects, and return brain volumes as in-memory objects. These can be\n+# To recap, all the nilearn tools can take data as filenames or\n+# glob-style patterns or in-memory objects, and return brain\n+# volumes as in-memory objects. These can be\n # passed on to other nilearn tools, or saved to disk.\n \n # sphinx_gallery_dummy_images=1\n", "issue": "Globbing should be advertised in doc/examples.\n\n", "before_files": [{"content": "\"\"\"\nBasic nilearn example: manipulating and looking at data\n=======================================================\n\nA simple example showing how to load an existing Nifti file and use\nbasic nilearn functionalities.\n\"\"\"\n\n# Let us use a Nifti file that is shipped with nilearn\nfrom nilearn.datasets import MNI152_FILE_PATH\n\n# Note that the variable MNI152_FILE_PATH is just a path to a Nifti file\nprint(f\"Path to MNI152 template: {MNI152_FILE_PATH!r}\")\n\n# %%\n# A first step: looking at our data\n# ----------------------------------\n#\n# Let's quickly plot this file:\nfrom nilearn import plotting\n\nplotting.plot_img(MNI152_FILE_PATH)\n\n# %%\n# This is not a very pretty plot. We just used the simplest possible\n# code. There is a whole :ref:`section of the documentation <plotting>`\n# on making prettier code.\n#\n# **Exercise**: Try plotting one of your own files. In the above,\n# MNI152_FILE_PATH is nothing more than a string with a path pointing to\n# a nifti image. You can replace it with a string pointing to a file on\n# your disk. 
Note that it should be a 3D volume, and not a 4D volume.\n\n# %%\n# Simple image manipulation: smoothing\n# ------------------------------------\n#\n# Let's use an image-smoothing function from nilearn:\n# :func:`nilearn.image.smooth_img`\n#\n# Functions containing 'img' can take either a filename or an image as input.\n#\n# Here we give as inputs the image filename and the smoothing value in mm\nfrom nilearn import image\n\nsmooth_anat_img = image.smooth_img(MNI152_FILE_PATH, fwhm=3)\n\n# While we are giving a file name as input, the function returns\n# an in-memory object:\nsmooth_anat_img\n\n# %%\n# This is an in-memory object. We can pass it to nilearn function, for\n# instance to look at it\nplotting.plot_img(smooth_anat_img)\n\n# %%\n# We could also pass it to the smoothing function\nmore_smooth_anat_img = image.smooth_img(smooth_anat_img, fwhm=3)\nplotting.plot_img(more_smooth_anat_img)\n\n# %%\n# Saving results to a file\n# -------------------------\n#\n# We can save any in-memory object as follows:\nfrom pathlib import Path\n\noutput_dir = Path.cwd() / \"results\" / \"plot_nilearn_101\"\noutput_dir.mkdir(exist_ok=True, parents=True)\nprint(f\"Output will be saved to: {output_dir}\")\nmore_smooth_anat_img.to_filename(output_dir / \"more_smooth_anat_img.nii.gz\")\n\n# %%\n# Finally, calling plotting.show() is necessary to display the figure\n# when running as a script outside IPython\nplotting.show()\n\n# %%\n# |\n#\n# ______\n#\n# To recap, all the nilearn tools can take data as filenames or in-memory\n# objects, and return brain volumes as in-memory objects. These can be\n# passed on to other nilearn tools, or saved to disk.\n\n# sphinx_gallery_dummy_images=1\n", "path": "examples/00_tutorials/plot_nilearn_101.py"}], "after_files": [{"content": "\"\"\"\nBasic nilearn example: manipulating and looking at data\n=======================================================\n\nA simple example showing how to load an existing Nifti file and use\nbasic nilearn functionalities.\n\"\"\"\n\n# Let us use a Nifti file that is shipped with nilearn\nfrom nilearn.datasets import MNI152_FILE_PATH\n\n# Note that the variable MNI152_FILE_PATH is just a path to a Nifti file\nprint(f\"Path to MNI152 template: {MNI152_FILE_PATH!r}\")\n\n# %%\n# A first step: looking at our data\n# ----------------------------------\n#\n# Let's quickly plot this file:\nfrom nilearn import plotting\n\nplotting.plot_img(MNI152_FILE_PATH)\n\n# %%\n# This is not a very pretty plot. We just used the simplest possible\n# code. There is a whole :ref:`section of the documentation <plotting>`\n# on making prettier code.\n#\n# **Exercise**: Try plotting one of your own files. In the above,\n# MNI152_FILE_PATH is nothing more than a string with a path pointing to\n# a nifti image. You can replace it with a string pointing to a file on\n# your disk. Note that it should be a 3D volume, and not a 4D volume.\n\n# %%\n# Simple image manipulation: smoothing\n# ------------------------------------\n#\n# Let's use an image-smoothing function from nilearn:\n# :func:`nilearn.image.smooth_img`\n#\n# Functions containing 'img' can take either a filename or an image as input.\n#\n# Here we give as inputs the image filename and the smoothing value in mm\nfrom nilearn import image\n\nsmooth_anat_img = image.smooth_img(MNI152_FILE_PATH, fwhm=3)\n\n# While we are giving a file name as input, the function returns\n# an in-memory object:\nsmooth_anat_img\n\n# %%\n# This is an in-memory object. 
We can pass it to nilearn function, for\n# instance to look at it\nplotting.plot_img(smooth_anat_img)\n\n# %%\n# We could also pass it to the smoothing function\nmore_smooth_anat_img = image.smooth_img(smooth_anat_img, fwhm=3)\nplotting.plot_img(more_smooth_anat_img)\n\n\n# %%\n# Globbing over multiple 3D volumes\n# ---------------------------------\n# Nilearn also supports reading multiple volumes at once,\n# using glob-style patterns.\n# For instance, we can smooth volumes from many subjects\n# at once and get a 4D image as output.\n\n# %%\n# First let's fetch Haxby dataset for subject 1 and 2\nfrom nilearn import datasets\n\nhaxby = datasets.fetch_haxby(subjects=[1, 2])\n\n# %%\n# Now we can find the anatomical images from both\n# subjects using the `*` wildcard\nfrom pathlib import Path\n\nanats_all_subjects = (\n Path(datasets.get_data_dirs()[0]) / \"haxby2001\" / \"subj*\" / \"anat*\"\n)\n\n# %%\n# Now we can smooth all the anatomical images at once\nanats_all_subjects_smooth = image.smooth_img(anats_all_subjects, fwhm=5)\n\n# %%\n# This is a 4D image containing one volume per subject\nprint(anats_all_subjects_smooth.shape)\n\n# %%\n# Saving results to a file\n# -------------------------\n#\n# We can save any in-memory object as follows:\noutput_dir = Path.cwd() / \"results\" / \"plot_nilearn_101\"\noutput_dir.mkdir(exist_ok=True, parents=True)\nprint(f\"Output will be saved to: {output_dir}\")\nanats_all_subjects_smooth.to_filename(\n output_dir / \"anats_all_subjects_smooth.nii.gz\"\n)\n\n# %%\n# Finally, calling plotting.show() is necessary to display the figure\n# when running as a script outside IPython\nplotting.show()\n\n# %%\n# |\n#\n# ______\n#\n# To recap, all the nilearn tools can take data as filenames or\n# glob-style patterns or in-memory objects, and return brain\n# volumes as in-memory objects. These can be\n# passed on to other nilearn tools, or saved to disk.\n\n# sphinx_gallery_dummy_images=1\n", "path": "examples/00_tutorials/plot_nilearn_101.py"}]} | 1,148 | 626 |
gh_patches_debug_3966 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-1215 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Headers aren't always case insensitive
Headers like `Content-Length` should preserve case after going trough mitmproxy, because some versions of PHP don't like lowercase headers (this is a server-side misimplementation of [RFC 2616](https://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2), but it would be nice if mitmproxy was able to handle this.).
Steps to reproduce:
Capture some traffic, replay it with `mitmdump -nc capture_file`. Everything works as intended.
Now replay it, but use a script that replaces the request content. The `Content-Length` header get's changed to `content-lenght`.
The problem probably lies at [netlib/http/message.py](https://github.com/mitmproxy/mitmproxy/blob/master/netlib/http/message.py#L104)
Mitmproxy Version: latest
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netlib/multidict.py`
Content:
```
1 from __future__ import absolute_import, print_function, division
2
3 from abc import ABCMeta, abstractmethod
4
5
6 try:
7 from collections.abc import MutableMapping
8 except ImportError: # pragma: no cover
9 from collections import MutableMapping # Workaround for Python < 3.3
10
11 import six
12 from netlib import basetypes
13
14
15 @six.add_metaclass(ABCMeta)
16 class _MultiDict(MutableMapping, basetypes.Serializable):
17 def __repr__(self):
18 fields = (
19 repr(field)
20 for field in self.fields
21 )
22 return "{cls}[{fields}]".format(
23 cls=type(self).__name__,
24 fields=", ".join(fields)
25 )
26
27 @staticmethod
28 @abstractmethod
29 def _reduce_values(values):
30 """
31 If a user accesses multidict["foo"], this method
32 reduces all values for "foo" to a single value that is returned.
33 For example, HTTP headers are folded, whereas we will just take
34 the first cookie we found with that name.
35 """
36
37 @staticmethod
38 @abstractmethod
39 def _kconv(key):
40 """
41 This method converts a key to its canonical representation.
42 For example, HTTP headers are case-insensitive, so this method returns key.lower().
43 """
44
45 def __getitem__(self, key):
46 values = self.get_all(key)
47 if not values:
48 raise KeyError(key)
49 return self._reduce_values(values)
50
51 def __setitem__(self, key, value):
52 self.set_all(key, [value])
53
54 def __delitem__(self, key):
55 if key not in self:
56 raise KeyError(key)
57 key = self._kconv(key)
58 self.fields = tuple(
59 field for field in self.fields
60 if key != self._kconv(field[0])
61 )
62
63 def __iter__(self):
64 seen = set()
65 for key, _ in self.fields:
66 key_kconv = self._kconv(key)
67 if key_kconv not in seen:
68 seen.add(key_kconv)
69 yield key
70
71 def __len__(self):
72 return len(set(self._kconv(key) for key, _ in self.fields))
73
74 def __eq__(self, other):
75 if isinstance(other, MultiDict):
76 return self.fields == other.fields
77 return False
78
79 def __ne__(self, other):
80 return not self.__eq__(other)
81
82 def __hash__(self):
83 return hash(self.fields)
84
85 def get_all(self, key):
86 """
87 Return the list of all values for a given key.
88 If that key is not in the MultiDict, the return value will be an empty list.
89 """
90 key = self._kconv(key)
91 return [
92 value
93 for k, value in self.fields
94 if self._kconv(k) == key
95 ]
96
97 def set_all(self, key, values):
98 """
99 Remove the old values for a key and add new ones.
100 """
101 key_kconv = self._kconv(key)
102
103 new_fields = []
104 for field in self.fields:
105 if self._kconv(field[0]) == key_kconv:
106 if values:
107 new_fields.append(
108 (key, values.pop(0))
109 )
110 else:
111 new_fields.append(field)
112 while values:
113 new_fields.append(
114 (key, values.pop(0))
115 )
116 self.fields = tuple(new_fields)
117
118 def add(self, key, value):
119 """
120 Add an additional value for the given key at the bottom.
121 """
122 self.insert(len(self.fields), key, value)
123
124 def insert(self, index, key, value):
125 """
126 Insert an additional value for the given key at the specified position.
127 """
128 item = (key, value)
129 self.fields = self.fields[:index] + (item,) + self.fields[index:]
130
131 def keys(self, multi=False):
132 """
133 Get all keys.
134
135 Args:
136 multi(bool):
137 If True, one key per value will be returned.
138 If False, duplicate keys will only be returned once.
139 """
140 return (
141 k
142 for k, _ in self.items(multi)
143 )
144
145 def values(self, multi=False):
146 """
147 Get all values.
148
149 Args:
150 multi(bool):
151 If True, all values will be returned.
152 If False, only the first value per key will be returned.
153 """
154 return (
155 v
156 for _, v in self.items(multi)
157 )
158
159 def items(self, multi=False):
160 """
161 Get all (key, value) tuples.
162
163 Args:
164 multi(bool):
165 If True, all (key, value) pairs will be returned
166 If False, only the first (key, value) pair per unique key will be returned.
167 """
168 if multi:
169 return self.fields
170 else:
171 return super(_MultiDict, self).items()
172
173 def clear(self, key):
174 """
175 Removes all items with the specified key, and does not raise an
176 exception if the key does not exist.
177 """
178 if key in self:
179 del self[key]
180
181 def collect(self):
182 """
183 Returns a list of (key, value) tuples, where values are either
184 singular if threre is only one matching item for a key, or a list
185 if there are more than one. The order of the keys matches the order
186 in the underlying fields list.
187 """
188 coll = []
189 for key in self:
190 values = self.get_all(key)
191 if len(values) == 1:
192 coll.append([key, values[0]])
193 else:
194 coll.append([key, values])
195 return coll
196
197 def to_dict(self):
198 """
199 Get the MultiDict as a plain Python dict.
200 Keys with multiple values are returned as lists.
201
202 Example:
203
204 .. code-block:: python
205
206 # Simple dict with duplicate values.
207 >>> d
208 MultiDictView[("name", "value"), ("a", "false"), ("a", "42")]
209 >>> d.to_dict()
210 {
211 "name": "value",
212 "a": ["false", "42"]
213 }
214 """
215 d = {}
216 for k, v in self.collect():
217 d[k] = v
218 return d
219
220 def get_state(self):
221 return self.fields
222
223 def set_state(self, state):
224 self.fields = tuple(tuple(x) for x in state)
225
226 @classmethod
227 def from_state(cls, state):
228 return cls(state)
229
230
231 class MultiDict(_MultiDict):
232 def __init__(self, fields=()):
233 super(MultiDict, self).__init__()
234 self.fields = tuple(
235 tuple(i) for i in fields
236 )
237
238
239 @six.add_metaclass(ABCMeta)
240 class ImmutableMultiDict(MultiDict):
241 def _immutable(self, *_):
242 raise TypeError('{} objects are immutable'.format(self.__class__.__name__))
243
244 __delitem__ = set_all = insert = _immutable
245
246 def with_delitem(self, key):
247 """
248 Returns:
249 An updated ImmutableMultiDict. The original object will not be modified.
250 """
251 ret = self.copy()
252 super(ImmutableMultiDict, ret).__delitem__(key)
253 return ret
254
255 def with_set_all(self, key, values):
256 """
257 Returns:
258 An updated ImmutableMultiDict. The original object will not be modified.
259 """
260 ret = self.copy()
261 super(ImmutableMultiDict, ret).set_all(key, values)
262 return ret
263
264 def with_insert(self, index, key, value):
265 """
266 Returns:
267 An updated ImmutableMultiDict. The original object will not be modified.
268 """
269 ret = self.copy()
270 super(ImmutableMultiDict, ret).insert(index, key, value)
271 return ret
272
273
274 class MultiDictView(_MultiDict):
275 """
276 The MultiDictView provides the MultiDict interface over calculated data.
277 The view itself contains no state - data is retrieved from the parent on
278 request, and stored back to the parent on change.
279 """
280 def __init__(self, getter, setter):
281 self._getter = getter
282 self._setter = setter
283 super(MultiDictView, self).__init__()
284
285 @staticmethod
286 def _kconv(key):
287 # All request-attributes are case-sensitive.
288 return key
289
290 @staticmethod
291 def _reduce_values(values):
292 # We just return the first element if
293 # multiple elements exist with the same key.
294 return values[0]
295
296 @property
297 def fields(self):
298 return self._getter()
299
300 @fields.setter
301 def fields(self, value):
302 return self._setter(value)
303
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netlib/multidict.py b/netlib/multidict.py
--- a/netlib/multidict.py
+++ b/netlib/multidict.py
@@ -105,7 +105,7 @@
if self._kconv(field[0]) == key_kconv:
if values:
new_fields.append(
- (key, values.pop(0))
+ (field[0], values.pop(0))
)
else:
new_fields.append(field)
| {"golden_diff": "diff --git a/netlib/multidict.py b/netlib/multidict.py\n--- a/netlib/multidict.py\n+++ b/netlib/multidict.py\n@@ -105,7 +105,7 @@\n if self._kconv(field[0]) == key_kconv:\n if values:\n new_fields.append(\n- (key, values.pop(0))\n+ (field[0], values.pop(0))\n )\n else:\n new_fields.append(field)\n", "issue": "Headers aren't always case insensitive\nHeaders like `Content-Length` should preserve case after going trough mitmproxy, because some versions of PHP don't like lowercase headers (this is a server-side misimplementation of [RFC 2616](https://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2), but it would be nice if mitmproxy was able to handle this.). \n\nSteps to reproduce:\nCapture some traffic, replay it with `mitmdump -nc capture_file`. Everything works as intended.\nNow replay it, but use a script that replaces the request content. The `Content-Length` header get's changed to `content-lenght`. \n\nThe problem probably lies at [netlib/http/message.py](https://github.com/mitmproxy/mitmproxy/blob/master/netlib/http/message.py#L104)\nMitmproxy Version: latest\n\n", "before_files": [{"content": "from __future__ import absolute_import, print_function, division\n\nfrom abc import ABCMeta, abstractmethod\n\n\ntry:\n from collections.abc import MutableMapping\nexcept ImportError: # pragma: no cover\n from collections import MutableMapping # Workaround for Python < 3.3\n\nimport six\nfrom netlib import basetypes\n\n\[email protected]_metaclass(ABCMeta)\nclass _MultiDict(MutableMapping, basetypes.Serializable):\n def __repr__(self):\n fields = (\n repr(field)\n for field in self.fields\n )\n return \"{cls}[{fields}]\".format(\n cls=type(self).__name__,\n fields=\", \".join(fields)\n )\n\n @staticmethod\n @abstractmethod\n def _reduce_values(values):\n \"\"\"\n If a user accesses multidict[\"foo\"], this method\n reduces all values for \"foo\" to a single value that is returned.\n For example, HTTP headers are folded, whereas we will just take\n the first cookie we found with that name.\n \"\"\"\n\n @staticmethod\n @abstractmethod\n def _kconv(key):\n \"\"\"\n This method converts a key to its canonical representation.\n For example, HTTP headers are case-insensitive, so this method returns key.lower().\n \"\"\"\n\n def __getitem__(self, key):\n values = self.get_all(key)\n if not values:\n raise KeyError(key)\n return self._reduce_values(values)\n\n def __setitem__(self, key, value):\n self.set_all(key, [value])\n\n def __delitem__(self, key):\n if key not in self:\n raise KeyError(key)\n key = self._kconv(key)\n self.fields = tuple(\n field for field in self.fields\n if key != self._kconv(field[0])\n )\n\n def __iter__(self):\n seen = set()\n for key, _ in self.fields:\n key_kconv = self._kconv(key)\n if key_kconv not in seen:\n seen.add(key_kconv)\n yield key\n\n def __len__(self):\n return len(set(self._kconv(key) for key, _ in self.fields))\n\n def __eq__(self, other):\n if isinstance(other, MultiDict):\n return self.fields == other.fields\n return False\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def __hash__(self):\n return hash(self.fields)\n\n def get_all(self, key):\n \"\"\"\n Return the list of all values for a given key.\n If that key is not in the MultiDict, the return value will be an empty list.\n \"\"\"\n key = self._kconv(key)\n return [\n value\n for k, value in self.fields\n if self._kconv(k) == key\n ]\n\n def set_all(self, key, values):\n \"\"\"\n Remove the old values for a key and add new ones.\n \"\"\"\n key_kconv = 
self._kconv(key)\n\n new_fields = []\n for field in self.fields:\n if self._kconv(field[0]) == key_kconv:\n if values:\n new_fields.append(\n (key, values.pop(0))\n )\n else:\n new_fields.append(field)\n while values:\n new_fields.append(\n (key, values.pop(0))\n )\n self.fields = tuple(new_fields)\n\n def add(self, key, value):\n \"\"\"\n Add an additional value for the given key at the bottom.\n \"\"\"\n self.insert(len(self.fields), key, value)\n\n def insert(self, index, key, value):\n \"\"\"\n Insert an additional value for the given key at the specified position.\n \"\"\"\n item = (key, value)\n self.fields = self.fields[:index] + (item,) + self.fields[index:]\n\n def keys(self, multi=False):\n \"\"\"\n Get all keys.\n\n Args:\n multi(bool):\n If True, one key per value will be returned.\n If False, duplicate keys will only be returned once.\n \"\"\"\n return (\n k\n for k, _ in self.items(multi)\n )\n\n def values(self, multi=False):\n \"\"\"\n Get all values.\n\n Args:\n multi(bool):\n If True, all values will be returned.\n If False, only the first value per key will be returned.\n \"\"\"\n return (\n v\n for _, v in self.items(multi)\n )\n\n def items(self, multi=False):\n \"\"\"\n Get all (key, value) tuples.\n\n Args:\n multi(bool):\n If True, all (key, value) pairs will be returned\n If False, only the first (key, value) pair per unique key will be returned.\n \"\"\"\n if multi:\n return self.fields\n else:\n return super(_MultiDict, self).items()\n\n def clear(self, key):\n \"\"\"\n Removes all items with the specified key, and does not raise an\n exception if the key does not exist.\n \"\"\"\n if key in self:\n del self[key]\n\n def collect(self):\n \"\"\"\n Returns a list of (key, value) tuples, where values are either\n singular if threre is only one matching item for a key, or a list\n if there are more than one. The order of the keys matches the order\n in the underlying fields list.\n \"\"\"\n coll = []\n for key in self:\n values = self.get_all(key)\n if len(values) == 1:\n coll.append([key, values[0]])\n else:\n coll.append([key, values])\n return coll\n\n def to_dict(self):\n \"\"\"\n Get the MultiDict as a plain Python dict.\n Keys with multiple values are returned as lists.\n\n Example:\n\n .. code-block:: python\n\n # Simple dict with duplicate values.\n >>> d\n MultiDictView[(\"name\", \"value\"), (\"a\", \"false\"), (\"a\", \"42\")]\n >>> d.to_dict()\n {\n \"name\": \"value\",\n \"a\": [\"false\", \"42\"]\n }\n \"\"\"\n d = {}\n for k, v in self.collect():\n d[k] = v\n return d\n\n def get_state(self):\n return self.fields\n\n def set_state(self, state):\n self.fields = tuple(tuple(x) for x in state)\n\n @classmethod\n def from_state(cls, state):\n return cls(state)\n\n\nclass MultiDict(_MultiDict):\n def __init__(self, fields=()):\n super(MultiDict, self).__init__()\n self.fields = tuple(\n tuple(i) for i in fields\n )\n\n\[email protected]_metaclass(ABCMeta)\nclass ImmutableMultiDict(MultiDict):\n def _immutable(self, *_):\n raise TypeError('{} objects are immutable'.format(self.__class__.__name__))\n\n __delitem__ = set_all = insert = _immutable\n\n def with_delitem(self, key):\n \"\"\"\n Returns:\n An updated ImmutableMultiDict. The original object will not be modified.\n \"\"\"\n ret = self.copy()\n super(ImmutableMultiDict, ret).__delitem__(key)\n return ret\n\n def with_set_all(self, key, values):\n \"\"\"\n Returns:\n An updated ImmutableMultiDict. 
The original object will not be modified.\n \"\"\"\n ret = self.copy()\n super(ImmutableMultiDict, ret).set_all(key, values)\n return ret\n\n def with_insert(self, index, key, value):\n \"\"\"\n Returns:\n An updated ImmutableMultiDict. The original object will not be modified.\n \"\"\"\n ret = self.copy()\n super(ImmutableMultiDict, ret).insert(index, key, value)\n return ret\n\n\nclass MultiDictView(_MultiDict):\n \"\"\"\n The MultiDictView provides the MultiDict interface over calculated data.\n The view itself contains no state - data is retrieved from the parent on\n request, and stored back to the parent on change.\n \"\"\"\n def __init__(self, getter, setter):\n self._getter = getter\n self._setter = setter\n super(MultiDictView, self).__init__()\n\n @staticmethod\n def _kconv(key):\n # All request-attributes are case-sensitive.\n return key\n\n @staticmethod\n def _reduce_values(values):\n # We just return the first element if\n # multiple elements exist with the same key.\n return values[0]\n\n @property\n def fields(self):\n return self._getter()\n\n @fields.setter\n def fields(self, value):\n return self._setter(value)\n", "path": "netlib/multidict.py"}], "after_files": [{"content": "from __future__ import absolute_import, print_function, division\n\nfrom abc import ABCMeta, abstractmethod\n\n\ntry:\n from collections.abc import MutableMapping\nexcept ImportError: # pragma: no cover\n from collections import MutableMapping # Workaround for Python < 3.3\n\nimport six\nfrom netlib import basetypes\n\n\[email protected]_metaclass(ABCMeta)\nclass _MultiDict(MutableMapping, basetypes.Serializable):\n def __repr__(self):\n fields = (\n repr(field)\n for field in self.fields\n )\n return \"{cls}[{fields}]\".format(\n cls=type(self).__name__,\n fields=\", \".join(fields)\n )\n\n @staticmethod\n @abstractmethod\n def _reduce_values(values):\n \"\"\"\n If a user accesses multidict[\"foo\"], this method\n reduces all values for \"foo\" to a single value that is returned.\n For example, HTTP headers are folded, whereas we will just take\n the first cookie we found with that name.\n \"\"\"\n\n @staticmethod\n @abstractmethod\n def _kconv(key):\n \"\"\"\n This method converts a key to its canonical representation.\n For example, HTTP headers are case-insensitive, so this method returns key.lower().\n \"\"\"\n\n def __getitem__(self, key):\n values = self.get_all(key)\n if not values:\n raise KeyError(key)\n return self._reduce_values(values)\n\n def __setitem__(self, key, value):\n self.set_all(key, [value])\n\n def __delitem__(self, key):\n if key not in self:\n raise KeyError(key)\n key = self._kconv(key)\n self.fields = tuple(\n field for field in self.fields\n if key != self._kconv(field[0])\n )\n\n def __iter__(self):\n seen = set()\n for key, _ in self.fields:\n key_kconv = self._kconv(key)\n if key_kconv not in seen:\n seen.add(key_kconv)\n yield key\n\n def __len__(self):\n return len(set(self._kconv(key) for key, _ in self.fields))\n\n def __eq__(self, other):\n if isinstance(other, MultiDict):\n return self.fields == other.fields\n return False\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def __hash__(self):\n return hash(self.fields)\n\n def get_all(self, key):\n \"\"\"\n Return the list of all values for a given key.\n If that key is not in the MultiDict, the return value will be an empty list.\n \"\"\"\n key = self._kconv(key)\n return [\n value\n for k, value in self.fields\n if self._kconv(k) == key\n ]\n\n def set_all(self, key, values):\n \"\"\"\n Remove 
the old values for a key and add new ones.\n \"\"\"\n key_kconv = self._kconv(key)\n\n new_fields = []\n for field in self.fields:\n if self._kconv(field[0]) == key_kconv:\n if values:\n new_fields.append(\n (field[0], values.pop(0))\n )\n else:\n new_fields.append(field)\n while values:\n new_fields.append(\n (key, values.pop(0))\n )\n self.fields = tuple(new_fields)\n\n def add(self, key, value):\n \"\"\"\n Add an additional value for the given key at the bottom.\n \"\"\"\n self.insert(len(self.fields), key, value)\n\n def insert(self, index, key, value):\n \"\"\"\n Insert an additional value for the given key at the specified position.\n \"\"\"\n item = (key, value)\n self.fields = self.fields[:index] + (item,) + self.fields[index:]\n\n def keys(self, multi=False):\n \"\"\"\n Get all keys.\n\n Args:\n multi(bool):\n If True, one key per value will be returned.\n If False, duplicate keys will only be returned once.\n \"\"\"\n return (\n k\n for k, _ in self.items(multi)\n )\n\n def values(self, multi=False):\n \"\"\"\n Get all values.\n\n Args:\n multi(bool):\n If True, all values will be returned.\n If False, only the first value per key will be returned.\n \"\"\"\n return (\n v\n for _, v in self.items(multi)\n )\n\n def items(self, multi=False):\n \"\"\"\n Get all (key, value) tuples.\n\n Args:\n multi(bool):\n If True, all (key, value) pairs will be returned\n If False, only the first (key, value) pair per unique key will be returned.\n \"\"\"\n if multi:\n return self.fields\n else:\n return super(_MultiDict, self).items()\n\n def clear(self, key):\n \"\"\"\n Removes all items with the specified key, and does not raise an\n exception if the key does not exist.\n \"\"\"\n if key in self:\n del self[key]\n\n def collect(self):\n \"\"\"\n Returns a list of (key, value) tuples, where values are either\n singular if threre is only one matching item for a key, or a list\n if there are more than one. The order of the keys matches the order\n in the underlying fields list.\n \"\"\"\n coll = []\n for key in self:\n values = self.get_all(key)\n if len(values) == 1:\n coll.append([key, values[0]])\n else:\n coll.append([key, values])\n return coll\n\n def to_dict(self):\n \"\"\"\n Get the MultiDict as a plain Python dict.\n Keys with multiple values are returned as lists.\n\n Example:\n\n .. code-block:: python\n\n # Simple dict with duplicate values.\n >>> d\n MultiDictView[(\"name\", \"value\"), (\"a\", \"false\"), (\"a\", \"42\")]\n >>> d.to_dict()\n {\n \"name\": \"value\",\n \"a\": [\"false\", \"42\"]\n }\n \"\"\"\n d = {}\n for k, v in self.collect():\n d[k] = v\n return d\n\n def get_state(self):\n return self.fields\n\n def set_state(self, state):\n self.fields = tuple(tuple(x) for x in state)\n\n @classmethod\n def from_state(cls, state):\n return cls(state)\n\n\nclass MultiDict(_MultiDict):\n def __init__(self, fields=()):\n super(MultiDict, self).__init__()\n self.fields = tuple(\n tuple(i) for i in fields\n )\n\n\[email protected]_metaclass(ABCMeta)\nclass ImmutableMultiDict(MultiDict):\n def _immutable(self, *_):\n raise TypeError('{} objects are immutable'.format(self.__class__.__name__))\n\n __delitem__ = set_all = insert = _immutable\n\n def with_delitem(self, key):\n \"\"\"\n Returns:\n An updated ImmutableMultiDict. The original object will not be modified.\n \"\"\"\n ret = self.copy()\n super(ImmutableMultiDict, ret).__delitem__(key)\n return ret\n\n def with_set_all(self, key, values):\n \"\"\"\n Returns:\n An updated ImmutableMultiDict. 
The original object will not be modified.\n \"\"\"\n ret = self.copy()\n super(ImmutableMultiDict, ret).set_all(key, values)\n return ret\n\n def with_insert(self, index, key, value):\n \"\"\"\n Returns:\n An updated ImmutableMultiDict. The original object will not be modified.\n \"\"\"\n ret = self.copy()\n super(ImmutableMultiDict, ret).insert(index, key, value)\n return ret\n\n\nclass MultiDictView(_MultiDict):\n \"\"\"\n The MultiDictView provides the MultiDict interface over calculated data.\n The view itself contains no state - data is retrieved from the parent on\n request, and stored back to the parent on change.\n \"\"\"\n def __init__(self, getter, setter):\n self._getter = getter\n self._setter = setter\n super(MultiDictView, self).__init__()\n\n @staticmethod\n def _kconv(key):\n # All request-attributes are case-sensitive.\n return key\n\n @staticmethod\n def _reduce_values(values):\n # We just return the first element if\n # multiple elements exist with the same key.\n return values[0]\n\n @property\n def fields(self):\n return self._getter()\n\n @fields.setter\n def fields(self, value):\n return self._setter(value)\n", "path": "netlib/multidict.py"}]} | 3,139 | 111 |
gh_patches_debug_30588 | rasdani/github-patches | git_diff | elastic__apm-agent-python-1652 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[META 690] Ensure a minimum retry interval of 5 seconds in fetching central configuration
See meta issue for the description and details:
- Meta issue: https://github.com/elastic/apm/issues/690
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticapm/transport/http.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # BSD 3-Clause License
4 #
5 # Copyright (c) 2019, Elasticsearch BV
6 # All rights reserved.
7 #
8 # Redistribution and use in source and binary forms, with or without
9 # modification, are permitted provided that the following conditions are met:
10 #
11 # * Redistributions of source code must retain the above copyright notice, this
12 # list of conditions and the following disclaimer.
13 #
14 # * Redistributions in binary form must reproduce the above copyright notice,
15 # this list of conditions and the following disclaimer in the documentation
16 # and/or other materials provided with the distribution.
17 #
18 # * Neither the name of the copyright holder nor the names of its
19 # contributors may be used to endorse or promote products derived from
20 # this software without specific prior written permission.
21 #
22 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
23 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
24 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
25 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
26 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
27 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
28 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
29 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
30 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
31 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
32
33 import hashlib
34 import json
35 import re
36 import ssl
37 import urllib.parse
38 from urllib.request import getproxies_environment, proxy_bypass_environment
39
40 import urllib3
41 from urllib3.exceptions import MaxRetryError, TimeoutError
42
43 from elasticapm.transport.exceptions import TransportException
44 from elasticapm.transport.http_base import HTTPTransportBase
45 from elasticapm.utils import json_encoder, read_pem_file
46 from elasticapm.utils.logging import get_logger
47
48 try:
49 import certifi
50 except ImportError:
51 certifi = None
52
53 logger = get_logger("elasticapm.transport.http")
54
55
56 class Transport(HTTPTransportBase):
57 def __init__(self, url: str, *args, **kwargs) -> None:
58 super(Transport, self).__init__(url, *args, **kwargs)
59 pool_kwargs = {"cert_reqs": "CERT_REQUIRED", "ca_certs": self.ca_certs, "block": True}
60 if url.startswith("https"):
61 if self._server_cert:
62 pool_kwargs.update(
63 {"assert_fingerprint": self.cert_fingerprint, "assert_hostname": False, "cert_reqs": ssl.CERT_NONE}
64 )
65 del pool_kwargs["ca_certs"]
66 elif not self._verify_server_cert:
67 pool_kwargs["cert_reqs"] = ssl.CERT_NONE
68 pool_kwargs["assert_hostname"] = False
69 self._pool_kwargs = pool_kwargs
70 self._http = None
71 self._url = url
72
73 def send(self, data, forced_flush=False):
74 response = None
75
76 headers = self._headers.copy() if self._headers else {}
77 headers.update(self.auth_headers)
78 headers.update(
79 {
80 b"Content-Type": b"application/x-ndjson",
81 b"Content-Encoding": b"gzip",
82 }
83 )
84
85 url = self._url
86 if forced_flush:
87 url = f"{url}?flushed=true"
88 try:
89 try:
90 response = self.http.urlopen(
91 "POST", url, body=data, headers=headers, timeout=self._timeout, preload_content=False
92 )
93 logger.debug("Sent request, url=%s size=%.2fkb status=%s", url, len(data) / 1024.0, response.status)
94 except Exception as e:
95 print_trace = True
96 if isinstance(e, MaxRetryError) and isinstance(e.reason, TimeoutError):
97 message = "Connection to APM Server timed out " "(url: %s, timeout: %s seconds)" % (
98 self._url,
99 self._timeout,
100 )
101 print_trace = False
102 else:
103 message = "Unable to reach APM Server: %s (url: %s)" % (e, self._url)
104 raise TransportException(message, data, print_trace=print_trace)
105 body = response.read()
106 if response.status >= 400:
107 if response.status == 429: # rate-limited
108 message = "Temporarily rate limited: "
109 print_trace = False
110 else:
111 message = "HTTP %s: " % response.status
112 print_trace = True
113 message += body.decode("utf8", errors="replace")[:10000]
114 raise TransportException(message, data, print_trace=print_trace)
115 return response.getheader("Location")
116 finally:
117 if response:
118 response.close()
119
120 @property
121 def http(self) -> urllib3.PoolManager:
122 if not self._http:
123 url_parts = urllib.parse.urlparse(self._url)
124 proxies = getproxies_environment()
125 proxy_url = proxies.get("https", proxies.get("http", None))
126 if proxy_url and not proxy_bypass_environment(url_parts.netloc):
127 self._http = urllib3.ProxyManager(proxy_url, **self._pool_kwargs)
128 else:
129 self._http = urllib3.PoolManager(**self._pool_kwargs)
130 return self._http
131
132 def handle_fork(self) -> None:
133 # reset http pool to avoid sharing connections with the parent process
134 self._http = None
135
136 def get_config(self, current_version=None, keys=None):
137 """
138 Gets configuration from a remote APM Server
139
140 :param current_version: version of the current configuration
141 :param keys: a JSON-serializable dict to identify this instance, e.g.
142 {
143 "service": {
144 "name": "foo",
145 "environment": "bar"
146 }
147 }
148 :return: a three-tuple of new version, config dictionary and validity in seconds.
149 Any element of the tuple can be None.
150 """
151 url = self._config_url
152 data = json_encoder.dumps(keys).encode("utf-8")
153 headers = self._headers.copy()
154 headers[b"Content-Type"] = "application/json"
155 headers.update(self.auth_headers)
156 max_age = 300
157 if current_version:
158 headers["If-None-Match"] = current_version
159 try:
160 response = self.http.urlopen(
161 "POST", url, body=data, headers=headers, timeout=self._timeout, preload_content=False
162 )
163 except (urllib3.exceptions.RequestError, urllib3.exceptions.HTTPError) as e:
164 logger.debug("HTTP error while fetching remote config: %s", str(e))
165 return current_version, None, max_age
166 body = response.read()
167 if "Cache-Control" in response.headers:
168 try:
169 max_age = int(next(re.finditer(r"max-age=(\d+)", response.headers["Cache-Control"])).groups()[0])
170 except StopIteration:
171 logger.debug("Could not parse Cache-Control header: %s", response.headers["Cache-Control"])
172 if response.status == 304:
173 # config is unchanged, return
174 logger.debug("Configuration unchanged")
175 return current_version, None, max_age
176 elif response.status >= 400:
177 return None, None, max_age
178
179 if not body:
180 logger.debug("APM Server answered with empty body and status code %s", response.status)
181 return current_version, None, max_age
182 body = body.decode("utf-8")
183 try:
184 data = json_encoder.loads(body)
185 return response.headers.get("Etag"), data, max_age
186 except json.JSONDecodeError:
187 logger.warning("Failed decoding APM Server response as JSON: %s", body)
188 return current_version, None, max_age
189
190 def _process_queue(self):
191 if not self.client.server_version:
192 self.fetch_server_info()
193 super()._process_queue()
194
195 def fetch_server_info(self):
196 headers = self._headers.copy() if self._headers else {}
197 headers.update(self.auth_headers)
198 headers[b"accept"] = b"text/plain"
199 try:
200 response = self.http.urlopen("GET", self._server_info_url, headers=headers, timeout=self._timeout)
201 body = response.data
202 data = json_encoder.loads(body.decode("utf8"))
203 version = data["version"]
204 logger.debug("Fetched APM Server version %s", version)
205 self.client.server_version = version_string_to_tuple(version)
206 except (urllib3.exceptions.RequestError, urllib3.exceptions.HTTPError) as e:
207 logger.warning("HTTP error while fetching server information: %s", str(e))
208 except json.JSONDecodeError as e:
209 logger.warning("JSON decoding error while fetching server information: %s", str(e))
210 except (KeyError, TypeError):
211 logger.warning("No version key found in server response: %s", response.data)
212
213 @property
214 def cert_fingerprint(self):
215 if self._server_cert:
216 with open(self._server_cert, "rb") as f:
217 cert_data = read_pem_file(f)
218 digest = hashlib.sha256()
219 digest.update(cert_data)
220 return digest.hexdigest()
221 return None
222
223 @property
224 def auth_headers(self):
225 headers = super(Transport, self).auth_headers
226 return {k.encode("ascii"): v.encode("ascii") for k, v in headers.items()}
227
228 @property
229 def ca_certs(self):
230 """
231 Return location of certificate store. If it is available and not disabled via setting,
232 this will return the location of the certifi certificate store.
233 """
234 return certifi.where() if (certifi and self.client.config.use_certifi) else None
235
236
237 def version_string_to_tuple(version):
238 if version:
239 version_parts = re.split(r"[.\-]", version)
240 return tuple(int(p) if p.isdigit() else p for p in version_parts)
241 return ()
242
243
244 # left for backwards compatibility
245 AsyncTransport = Transport
246
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticapm/transport/http.py b/elasticapm/transport/http.py
--- a/elasticapm/transport/http.py
+++ b/elasticapm/transport/http.py
@@ -164,11 +164,9 @@
logger.debug("HTTP error while fetching remote config: %s", str(e))
return current_version, None, max_age
body = response.read()
- if "Cache-Control" in response.headers:
- try:
- max_age = int(next(re.finditer(r"max-age=(\d+)", response.headers["Cache-Control"])).groups()[0])
- except StopIteration:
- logger.debug("Could not parse Cache-Control header: %s", response.headers["Cache-Control"])
+
+ max_age = self._get_cache_control_max_age(response.headers) or max_age
+
if response.status == 304:
# config is unchanged, return
logger.debug("Configuration unchanged")
@@ -187,6 +185,22 @@
logger.warning("Failed decoding APM Server response as JSON: %s", body)
return current_version, None, max_age
+ def _get_cache_control_max_age(self, response_headers):
+ max_age = None
+ if "Cache-Control" in response_headers:
+ try:
+ cc_max_age = int(next(re.finditer(r"max-age=(\d+)", response_headers["Cache-Control"])).groups()[0])
+ if cc_max_age <= 0:
+ # max_age remains at default value
+ pass
+ elif cc_max_age < 5:
+ max_age = 5
+ else:
+ max_age = cc_max_age
+ except StopIteration:
+ logger.debug("Could not parse Cache-Control header: %s", response_headers["Cache-Control"])
+ return max_age
+
def _process_queue(self):
if not self.client.server_version:
self.fetch_server_info()
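
For illustration only, and not part of the dataset record above: a standalone sketch of the clamping behaviour the golden diff implements. The function name and constants below are our own; only the semantics (a missing or invalid `max-age` falls back to the default, positive values below 5 are raised to 5) are taken from the patch.

```python
# Illustrative re-implementation of the max-age clamping idea; names are ours.
import re

DEFAULT_MAX_AGE = 300  # fall back to 5 minutes when the header is absent or invalid
MIN_MAX_AGE = 5        # never poll central config more often than every 5 seconds

def effective_max_age(cache_control_header):
    """Return the polling interval to use for the next config fetch."""
    if not cache_control_header:
        return DEFAULT_MAX_AGE
    match = re.search(r"max-age=(\d+)", cache_control_header)
    if not match:
        return DEFAULT_MAX_AGE
    max_age = int(match.group(1))
    if max_age <= 0:
        return DEFAULT_MAX_AGE          # nonsensical value: keep the default
    return max(max_age, MIN_MAX_AGE)    # enforce the 5 second floor

if __name__ == "__main__":
    for header in (None, "max-age=1", "max-age=0", "max-age=30", "no-cache"):
        print(header, "->", effective_max_age(header))
```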
| {"golden_diff": "diff --git a/elasticapm/transport/http.py b/elasticapm/transport/http.py\n--- a/elasticapm/transport/http.py\n+++ b/elasticapm/transport/http.py\n@@ -164,11 +164,9 @@\n logger.debug(\"HTTP error while fetching remote config: %s\", str(e))\n return current_version, None, max_age\n body = response.read()\n- if \"Cache-Control\" in response.headers:\n- try:\n- max_age = int(next(re.finditer(r\"max-age=(\\d+)\", response.headers[\"Cache-Control\"])).groups()[0])\n- except StopIteration:\n- logger.debug(\"Could not parse Cache-Control header: %s\", response.headers[\"Cache-Control\"])\n+\n+ max_age = self._get_cache_control_max_age(response.headers) or max_age\n+\n if response.status == 304:\n # config is unchanged, return\n logger.debug(\"Configuration unchanged\")\n@@ -187,6 +185,22 @@\n logger.warning(\"Failed decoding APM Server response as JSON: %s\", body)\n return current_version, None, max_age\n \n+ def _get_cache_control_max_age(self, response_headers):\n+ max_age = None\n+ if \"Cache-Control\" in response_headers:\n+ try:\n+ cc_max_age = int(next(re.finditer(r\"max-age=(\\d+)\", response_headers[\"Cache-Control\"])).groups()[0])\n+ if cc_max_age <= 0:\n+ # max_age remains at default value\n+ pass\n+ elif cc_max_age < 5:\n+ max_age = 5\n+ else:\n+ max_age = cc_max_age\n+ except StopIteration:\n+ logger.debug(\"Could not parse Cache-Control header: %s\", response_headers[\"Cache-Control\"])\n+ return max_age\n+\n def _process_queue(self):\n if not self.client.server_version:\n self.fetch_server_info()\n", "issue": "[META 690] Ensure a minimum retry interval of 5 seconds in fetching central configuration\nSee meta issue for the description and details:\r\n- Meta issue: https://github.com/elastic/apm/issues/690\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nimport hashlib\nimport json\nimport re\nimport ssl\nimport urllib.parse\nfrom urllib.request import getproxies_environment, proxy_bypass_environment\n\nimport urllib3\nfrom urllib3.exceptions import MaxRetryError, TimeoutError\n\nfrom elasticapm.transport.exceptions import TransportException\nfrom elasticapm.transport.http_base import HTTPTransportBase\nfrom elasticapm.utils import json_encoder, read_pem_file\nfrom elasticapm.utils.logging import get_logger\n\ntry:\n import certifi\nexcept ImportError:\n certifi = None\n\nlogger = get_logger(\"elasticapm.transport.http\")\n\n\nclass Transport(HTTPTransportBase):\n def __init__(self, url: str, *args, **kwargs) -> None:\n super(Transport, self).__init__(url, *args, **kwargs)\n pool_kwargs = {\"cert_reqs\": \"CERT_REQUIRED\", \"ca_certs\": self.ca_certs, \"block\": True}\n if url.startswith(\"https\"):\n if self._server_cert:\n pool_kwargs.update(\n {\"assert_fingerprint\": self.cert_fingerprint, \"assert_hostname\": False, \"cert_reqs\": ssl.CERT_NONE}\n )\n del pool_kwargs[\"ca_certs\"]\n elif not self._verify_server_cert:\n pool_kwargs[\"cert_reqs\"] = ssl.CERT_NONE\n pool_kwargs[\"assert_hostname\"] = False\n self._pool_kwargs = pool_kwargs\n self._http = None\n self._url = url\n\n def send(self, data, forced_flush=False):\n response = None\n\n headers = self._headers.copy() if self._headers else {}\n headers.update(self.auth_headers)\n headers.update(\n {\n b\"Content-Type\": b\"application/x-ndjson\",\n b\"Content-Encoding\": b\"gzip\",\n }\n )\n\n url = self._url\n if forced_flush:\n url = f\"{url}?flushed=true\"\n try:\n try:\n response = self.http.urlopen(\n \"POST\", url, body=data, headers=headers, timeout=self._timeout, preload_content=False\n )\n logger.debug(\"Sent request, url=%s size=%.2fkb status=%s\", url, len(data) / 1024.0, response.status)\n except Exception as e:\n print_trace = True\n if isinstance(e, MaxRetryError) and isinstance(e.reason, TimeoutError):\n message = \"Connection to APM Server timed out \" \"(url: %s, timeout: %s seconds)\" % (\n self._url,\n self._timeout,\n )\n print_trace = False\n else:\n message = \"Unable to reach APM Server: %s (url: %s)\" % (e, self._url)\n raise TransportException(message, data, print_trace=print_trace)\n body = response.read()\n if response.status >= 400:\n if response.status == 429: # rate-limited\n message = \"Temporarily rate limited: \"\n print_trace = False\n else:\n message = \"HTTP %s: \" % response.status\n print_trace = True\n message += body.decode(\"utf8\", errors=\"replace\")[:10000]\n raise TransportException(message, data, print_trace=print_trace)\n return response.getheader(\"Location\")\n finally:\n if response:\n response.close()\n\n @property\n def http(self) -> urllib3.PoolManager:\n if not self._http:\n url_parts = urllib.parse.urlparse(self._url)\n proxies = getproxies_environment()\n proxy_url = proxies.get(\"https\", proxies.get(\"http\", None))\n if proxy_url and not proxy_bypass_environment(url_parts.netloc):\n self._http = 
urllib3.ProxyManager(proxy_url, **self._pool_kwargs)\n else:\n self._http = urllib3.PoolManager(**self._pool_kwargs)\n return self._http\n\n def handle_fork(self) -> None:\n # reset http pool to avoid sharing connections with the parent process\n self._http = None\n\n def get_config(self, current_version=None, keys=None):\n \"\"\"\n Gets configuration from a remote APM Server\n\n :param current_version: version of the current configuration\n :param keys: a JSON-serializable dict to identify this instance, e.g.\n {\n \"service\": {\n \"name\": \"foo\",\n \"environment\": \"bar\"\n }\n }\n :return: a three-tuple of new version, config dictionary and validity in seconds.\n Any element of the tuple can be None.\n \"\"\"\n url = self._config_url\n data = json_encoder.dumps(keys).encode(\"utf-8\")\n headers = self._headers.copy()\n headers[b\"Content-Type\"] = \"application/json\"\n headers.update(self.auth_headers)\n max_age = 300\n if current_version:\n headers[\"If-None-Match\"] = current_version\n try:\n response = self.http.urlopen(\n \"POST\", url, body=data, headers=headers, timeout=self._timeout, preload_content=False\n )\n except (urllib3.exceptions.RequestError, urllib3.exceptions.HTTPError) as e:\n logger.debug(\"HTTP error while fetching remote config: %s\", str(e))\n return current_version, None, max_age\n body = response.read()\n if \"Cache-Control\" in response.headers:\n try:\n max_age = int(next(re.finditer(r\"max-age=(\\d+)\", response.headers[\"Cache-Control\"])).groups()[0])\n except StopIteration:\n logger.debug(\"Could not parse Cache-Control header: %s\", response.headers[\"Cache-Control\"])\n if response.status == 304:\n # config is unchanged, return\n logger.debug(\"Configuration unchanged\")\n return current_version, None, max_age\n elif response.status >= 400:\n return None, None, max_age\n\n if not body:\n logger.debug(\"APM Server answered with empty body and status code %s\", response.status)\n return current_version, None, max_age\n body = body.decode(\"utf-8\")\n try:\n data = json_encoder.loads(body)\n return response.headers.get(\"Etag\"), data, max_age\n except json.JSONDecodeError:\n logger.warning(\"Failed decoding APM Server response as JSON: %s\", body)\n return current_version, None, max_age\n\n def _process_queue(self):\n if not self.client.server_version:\n self.fetch_server_info()\n super()._process_queue()\n\n def fetch_server_info(self):\n headers = self._headers.copy() if self._headers else {}\n headers.update(self.auth_headers)\n headers[b\"accept\"] = b\"text/plain\"\n try:\n response = self.http.urlopen(\"GET\", self._server_info_url, headers=headers, timeout=self._timeout)\n body = response.data\n data = json_encoder.loads(body.decode(\"utf8\"))\n version = data[\"version\"]\n logger.debug(\"Fetched APM Server version %s\", version)\n self.client.server_version = version_string_to_tuple(version)\n except (urllib3.exceptions.RequestError, urllib3.exceptions.HTTPError) as e:\n logger.warning(\"HTTP error while fetching server information: %s\", str(e))\n except json.JSONDecodeError as e:\n logger.warning(\"JSON decoding error while fetching server information: %s\", str(e))\n except (KeyError, TypeError):\n logger.warning(\"No version key found in server response: %s\", response.data)\n\n @property\n def cert_fingerprint(self):\n if self._server_cert:\n with open(self._server_cert, \"rb\") as f:\n cert_data = read_pem_file(f)\n digest = hashlib.sha256()\n digest.update(cert_data)\n return digest.hexdigest()\n return None\n\n @property\n def 
auth_headers(self):\n headers = super(Transport, self).auth_headers\n return {k.encode(\"ascii\"): v.encode(\"ascii\") for k, v in headers.items()}\n\n @property\n def ca_certs(self):\n \"\"\"\n Return location of certificate store. If it is available and not disabled via setting,\n this will return the location of the certifi certificate store.\n \"\"\"\n return certifi.where() if (certifi and self.client.config.use_certifi) else None\n\n\ndef version_string_to_tuple(version):\n if version:\n version_parts = re.split(r\"[.\\-]\", version)\n return tuple(int(p) if p.isdigit() else p for p in version_parts)\n return ()\n\n\n# left for backwards compatibility\nAsyncTransport = Transport\n", "path": "elasticapm/transport/http.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nimport hashlib\nimport json\nimport re\nimport ssl\nimport urllib.parse\nfrom urllib.request import getproxies_environment, proxy_bypass_environment\n\nimport urllib3\nfrom urllib3.exceptions import MaxRetryError, TimeoutError\n\nfrom elasticapm.transport.exceptions import TransportException\nfrom elasticapm.transport.http_base import HTTPTransportBase\nfrom elasticapm.utils import json_encoder, read_pem_file\nfrom elasticapm.utils.logging import get_logger\n\ntry:\n import certifi\nexcept ImportError:\n certifi = None\n\nlogger = get_logger(\"elasticapm.transport.http\")\n\n\nclass Transport(HTTPTransportBase):\n def __init__(self, url: str, *args, **kwargs) -> None:\n super(Transport, self).__init__(url, *args, **kwargs)\n pool_kwargs = {\"cert_reqs\": \"CERT_REQUIRED\", \"ca_certs\": self.ca_certs, \"block\": True}\n if url.startswith(\"https\"):\n if self._server_cert:\n pool_kwargs.update(\n {\"assert_fingerprint\": self.cert_fingerprint, \"assert_hostname\": False, \"cert_reqs\": ssl.CERT_NONE}\n )\n del pool_kwargs[\"ca_certs\"]\n elif not self._verify_server_cert:\n pool_kwargs[\"cert_reqs\"] = ssl.CERT_NONE\n pool_kwargs[\"assert_hostname\"] = False\n self._pool_kwargs = pool_kwargs\n self._http = None\n self._url = url\n\n def send(self, data, forced_flush=False):\n response = None\n\n headers = self._headers.copy() if self._headers else {}\n headers.update(self.auth_headers)\n headers.update(\n {\n b\"Content-Type\": b\"application/x-ndjson\",\n b\"Content-Encoding\": b\"gzip\",\n }\n )\n\n url = self._url\n if forced_flush:\n url = f\"{url}?flushed=true\"\n try:\n try:\n response = self.http.urlopen(\n \"POST\", url, body=data, headers=headers, timeout=self._timeout, preload_content=False\n )\n logger.debug(\"Sent request, url=%s size=%.2fkb status=%s\", url, len(data) / 1024.0, response.status)\n except Exception as e:\n print_trace = True\n if isinstance(e, MaxRetryError) and isinstance(e.reason, TimeoutError):\n message = \"Connection to APM Server timed out \" \"(url: %s, timeout: %s seconds)\" % (\n self._url,\n self._timeout,\n )\n print_trace = False\n else:\n message = \"Unable to reach APM Server: %s (url: %s)\" % (e, self._url)\n raise TransportException(message, data, print_trace=print_trace)\n body = response.read()\n if response.status >= 400:\n if response.status == 429: # rate-limited\n message = \"Temporarily rate limited: \"\n print_trace = False\n else:\n message = \"HTTP %s: \" % response.status\n print_trace = True\n message += body.decode(\"utf8\", errors=\"replace\")[:10000]\n raise TransportException(message, data, print_trace=print_trace)\n return response.getheader(\"Location\")\n finally:\n if response:\n response.close()\n\n @property\n def http(self) -> urllib3.PoolManager:\n if not self._http:\n url_parts = urllib.parse.urlparse(self._url)\n proxies = getproxies_environment()\n proxy_url = proxies.get(\"https\", proxies.get(\"http\", None))\n if proxy_url and not proxy_bypass_environment(url_parts.netloc):\n self._http = 
urllib3.ProxyManager(proxy_url, **self._pool_kwargs)\n else:\n self._http = urllib3.PoolManager(**self._pool_kwargs)\n return self._http\n\n def handle_fork(self) -> None:\n # reset http pool to avoid sharing connections with the parent process\n self._http = None\n\n def get_config(self, current_version=None, keys=None):\n \"\"\"\n Gets configuration from a remote APM Server\n\n :param current_version: version of the current configuration\n :param keys: a JSON-serializable dict to identify this instance, e.g.\n {\n \"service\": {\n \"name\": \"foo\",\n \"environment\": \"bar\"\n }\n }\n :return: a three-tuple of new version, config dictionary and validity in seconds.\n Any element of the tuple can be None.\n \"\"\"\n url = self._config_url\n data = json_encoder.dumps(keys).encode(\"utf-8\")\n headers = self._headers.copy()\n headers[b\"Content-Type\"] = \"application/json\"\n headers.update(self.auth_headers)\n max_age = 300\n if current_version:\n headers[\"If-None-Match\"] = current_version\n try:\n response = self.http.urlopen(\n \"POST\", url, body=data, headers=headers, timeout=self._timeout, preload_content=False\n )\n except (urllib3.exceptions.RequestError, urllib3.exceptions.HTTPError) as e:\n logger.debug(\"HTTP error while fetching remote config: %s\", str(e))\n return current_version, None, max_age\n body = response.read()\n\n max_age = self._get_cache_control_max_age(response.headers) or max_age\n\n if response.status == 304:\n # config is unchanged, return\n logger.debug(\"Configuration unchanged\")\n return current_version, None, max_age\n elif response.status >= 400:\n return None, None, max_age\n\n if not body:\n logger.debug(\"APM Server answered with empty body and status code %s\", response.status)\n return current_version, None, max_age\n body = body.decode(\"utf-8\")\n try:\n data = json_encoder.loads(body)\n return response.headers.get(\"Etag\"), data, max_age\n except json.JSONDecodeError:\n logger.warning(\"Failed decoding APM Server response as JSON: %s\", body)\n return current_version, None, max_age\n\n def _get_cache_control_max_age(self, response_headers):\n max_age = None\n if \"Cache-Control\" in response_headers:\n try:\n cc_max_age = int(next(re.finditer(r\"max-age=(\\d+)\", response_headers[\"Cache-Control\"])).groups()[0])\n if cc_max_age <= 0:\n # max_age remains at default value\n pass\n elif cc_max_age < 5:\n max_age = 5\n else:\n max_age = cc_max_age\n except StopIteration:\n logger.debug(\"Could not parse Cache-Control header: %s\", response_headers[\"Cache-Control\"])\n return max_age\n\n def _process_queue(self):\n if not self.client.server_version:\n self.fetch_server_info()\n super()._process_queue()\n\n def fetch_server_info(self):\n headers = self._headers.copy() if self._headers else {}\n headers.update(self.auth_headers)\n headers[b\"accept\"] = b\"text/plain\"\n try:\n response = self.http.urlopen(\"GET\", self._server_info_url, headers=headers, timeout=self._timeout)\n body = response.data\n data = json_encoder.loads(body.decode(\"utf8\"))\n version = data[\"version\"]\n logger.debug(\"Fetched APM Server version %s\", version)\n self.client.server_version = version_string_to_tuple(version)\n except (urllib3.exceptions.RequestError, urllib3.exceptions.HTTPError) as e:\n logger.warning(\"HTTP error while fetching server information: %s\", str(e))\n except json.JSONDecodeError as e:\n logger.warning(\"JSON decoding error while fetching server information: %s\", str(e))\n except (KeyError, TypeError):\n logger.warning(\"No version key found 
in server response: %s\", response.data)\n\n @property\n def cert_fingerprint(self):\n if self._server_cert:\n with open(self._server_cert, \"rb\") as f:\n cert_data = read_pem_file(f)\n digest = hashlib.sha256()\n digest.update(cert_data)\n return digest.hexdigest()\n return None\n\n @property\n def auth_headers(self):\n headers = super(Transport, self).auth_headers\n return {k.encode(\"ascii\"): v.encode(\"ascii\") for k, v in headers.items()}\n\n @property\n def ca_certs(self):\n \"\"\"\n Return location of certificate store. If it is available and not disabled via setting,\n this will return the location of the certifi certificate store.\n \"\"\"\n return certifi.where() if (certifi and self.client.config.use_certifi) else None\n\n\ndef version_string_to_tuple(version):\n if version:\n version_parts = re.split(r\"[.\\-]\", version)\n return tuple(int(p) if p.isdigit() else p for p in version_parts)\n return ()\n\n\n# left for backwards compatibility\nAsyncTransport = Transport\n", "path": "elasticapm/transport/http.py"}]} | 3,130 | 430 |
gh_patches_debug_16643 | rasdani/github-patches | git_diff | sktime__sktime-5330 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] `temporal_train_test_split` does not work on panel datatypes with unequal length series.
**Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
Also relates to #4968
`temporal_train_test_split` wrongly splits panel datatypes (splitting per unique unequal time series). It could be that the split function does not support this type yet; if so, it should throw an error message stating that these datatypes are not currently supported.
**To Reproduce**
<!--
Add a Minimal, Complete, and Verifiable example (for more details, see e.g. https://stackoverflow.com/help/mcve
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com
-->
```python
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.utils._testing.panel import _make_panel
y = _make_panel(n_instances=2, n_timepoints=6)
# make first instance series shorter than the second
y.iloc[4:6] =None
y.dropna(inplace=True)
train_size, test_size = temporal_train_test_split(y, test_size=2)
# show shapes
print(
f"""
{y.shape=}
{train_size.shape=}, {test_size.shape=} #train size should be (6,1) 2+4=6
"""
)
# has also the same issue as #4968 below is the minimal example
train_fh, test_fh = temporal_train_test_split(y, fh=[1,2])
# show shapes
print(
f"""
{y.shape=}
{train_fh.shape=}, {test_fh.shape=} #train size should be (6,1) and test (4,1)
"""
)
```
output
```
y.shape=(10, 1)
train_size.shape=(4, 1), test_size.shape=(4, 1) #train size should be 6 (2+4)
y.shape=(10, 1)
train_fh.shape=(8, 1), test_fh.shape=(2, 1) #train size should be (6,1) and test (4,1)
```
**Versions**
<details>
<!--
Please run the following code snippet and paste the output here:
from sktime import show_versions; show_versions()
-->
main at [3cf69ed](https://github.com/sktime/sktime/commit/3cf69eddba315d6130b661ca5fe8e132e236aa47)
</details>
<!-- Thanks for contributing! -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sktime/split/__init__.py`
Content:
```
1 """Module for splitters."""
2
3 __all__ = [
4 "CutoffSplitter",
5 "ExpandingGreedySplitter",
6 "ExpandingWindowSplitter",
7 "SameLocSplitter",
8 "SingleWindowSplitter",
9 "SlidingWindowSplitter",
10 "TestPlusTrainSplitter",
11 "temporal_train_test_split",
12 ]
13
14 from sktime.split.cutoff import CutoffSplitter
15 from sktime.split.expandinggreedy import ExpandingGreedySplitter
16 from sktime.split.expandingwindow import ExpandingWindowSplitter
17 from sktime.split.sameloc import SameLocSplitter
18 from sktime.split.singlewindow import SingleWindowSplitter
19 from sktime.split.slidingwindow import SlidingWindowSplitter
20 from sktime.split.temporal_train_test_split import temporal_train_test_split
21 from sktime.split.testplustrain import TestPlusTrainSplitter
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sktime/split/__init__.py b/sktime/split/__init__.py
--- a/sktime/split/__init__.py
+++ b/sktime/split/__init__.py
@@ -7,6 +7,7 @@
"SameLocSplitter",
"SingleWindowSplitter",
"SlidingWindowSplitter",
+ "TemporalTrainTestSplitter",
"TestPlusTrainSplitter",
"temporal_train_test_split",
]
@@ -17,5 +18,8 @@
from sktime.split.sameloc import SameLocSplitter
from sktime.split.singlewindow import SingleWindowSplitter
from sktime.split.slidingwindow import SlidingWindowSplitter
-from sktime.split.temporal_train_test_split import temporal_train_test_split
+from sktime.split.temporal_train_test_split import (
+ TemporalTrainTestSplitter,
+ temporal_train_test_split,
+)
from sktime.split.testplustrain import TestPlusTrainSplitter
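
For illustration only, and not part of the dataset record above: a sketch of how the newly exported class could sit alongside the functional API used in the issue's reproduction code. The constructor argument and the `split` iteration are assumptions based on the usual sktime splitter interface, not something confirmed by the record.

```python
# Illustrative sketch only; the class-based usage below is assumed, not taken from the record.
from sktime.split import TemporalTrainTestSplitter, temporal_train_test_split
from sktime.utils._testing.panel import _make_panel

y = _make_panel(n_instances=2, n_timepoints=6)

# functional form, as used in the issue's reproduction code
y_train, y_test = temporal_train_test_split(y, test_size=2)
print(y_train.shape, y_test.shape)

# class-based form that the patch re-exports from sktime.split
# (assumed to accept the same test_size argument as the functional form)
splitter = TemporalTrainTestSplitter(test_size=2)
for train_idx, test_idx in splitter.split(y):
    print(len(train_idx), len(test_idx))
```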
| {"golden_diff": "diff --git a/sktime/split/__init__.py b/sktime/split/__init__.py\n--- a/sktime/split/__init__.py\n+++ b/sktime/split/__init__.py\n@@ -7,6 +7,7 @@\n \"SameLocSplitter\",\n \"SingleWindowSplitter\",\n \"SlidingWindowSplitter\",\n+ \"TemporalTrainTestSplitter\",\n \"TestPlusTrainSplitter\",\n \"temporal_train_test_split\",\n ]\n@@ -17,5 +18,8 @@\n from sktime.split.sameloc import SameLocSplitter\n from sktime.split.singlewindow import SingleWindowSplitter\n from sktime.split.slidingwindow import SlidingWindowSplitter\n-from sktime.split.temporal_train_test_split import temporal_train_test_split\n+from sktime.split.temporal_train_test_split import (\n+ TemporalTrainTestSplitter,\n+ temporal_train_test_split,\n+)\n from sktime.split.testplustrain import TestPlusTrainSplitter\n", "issue": "[BUG] `temporal_train_test_split` does not work on panel datatypes with unequal length series.\n**Describe the bug**\r\n<!--\r\nA clear and concise description of what the bug is.\r\n-->\r\nAlso relates to #4968 \r\n\r\n`temporal_train_test_split` wrongly split panel datatypes (splitting per unique unequal time series). It could be that the split function does not support this type yet, If so, it should throw an error msg telling that it does not currently supported the datatypes.\r\n\r\n**To Reproduce**\r\n<!--\r\nAdd a Minimal, Complete, and Verifiable example (for more details, see e.g. https://stackoverflow.com/help/mcve\r\n\r\nIf the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com\r\n-->\r\n\r\n```python\r\nfrom sktime.forecasting.model_selection import temporal_train_test_split\r\nfrom sktime.utils._testing.panel import _make_panel\r\n\r\ny = _make_panel(n_instances=2, n_timepoints=6)\r\n# make first instance series shorter than the second\r\ny.iloc[4:6] =None\r\ny.dropna(inplace=True)\r\n\r\ntrain_size, test_size = temporal_train_test_split(y, test_size=2) \r\n# show shapes\r\nprint(\r\n f\"\"\"\r\n {y.shape=} \r\n {train_size.shape=}, {test_size.shape=} #train size should be (6,1) 2+4=6\r\n \"\"\"\r\n)\r\n\r\n# has also the same issue as #4968 below is the minimal example\r\ntrain_fh, test_fh = temporal_train_test_split(y, fh=[1,2]) \r\n# show shapes\r\nprint(\r\n f\"\"\"\r\n {y.shape=} \r\n {train_fh.shape=}, {test_fh.shape=} #train size should be (6,1) and test (4,1)\r\n \"\"\"\r\n)\r\n```\r\n\r\noutput\r\n```\r\ny.shape=(10, 1) \r\ntrain_size.shape=(4, 1), test_size.shape=(4, 1) #train size should be 6 (2+4)\r\n\r\ny.shape=(10, 1) \r\ntrain_fh.shape=(8, 1), test_fh.shape=(2, 1) #train size should be (6,1) and test (4,1)\r\n```\r\n\r\n**Versions**\r\n<details>\r\n\r\n<!--\r\nPlease run the following code snippet and paste the output here:\r\n\r\nfrom sktime import show_versions; show_versions()\r\n-->\r\n main at [3cf69ed](https://github.com/sktime/sktime/commit/3cf69eddba315d6130b661ca5fe8e132e236aa47)\r\n\r\n</details>\r\n\r\n<!-- Thanks for contributing! 
-->\r\n\n", "before_files": [{"content": "\"\"\"Module for splitters.\"\"\"\n\n__all__ = [\n \"CutoffSplitter\",\n \"ExpandingGreedySplitter\",\n \"ExpandingWindowSplitter\",\n \"SameLocSplitter\",\n \"SingleWindowSplitter\",\n \"SlidingWindowSplitter\",\n \"TestPlusTrainSplitter\",\n \"temporal_train_test_split\",\n]\n\nfrom sktime.split.cutoff import CutoffSplitter\nfrom sktime.split.expandinggreedy import ExpandingGreedySplitter\nfrom sktime.split.expandingwindow import ExpandingWindowSplitter\nfrom sktime.split.sameloc import SameLocSplitter\nfrom sktime.split.singlewindow import SingleWindowSplitter\nfrom sktime.split.slidingwindow import SlidingWindowSplitter\nfrom sktime.split.temporal_train_test_split import temporal_train_test_split\nfrom sktime.split.testplustrain import TestPlusTrainSplitter\n", "path": "sktime/split/__init__.py"}], "after_files": [{"content": "\"\"\"Module for splitters.\"\"\"\n\n__all__ = [\n \"CutoffSplitter\",\n \"ExpandingGreedySplitter\",\n \"ExpandingWindowSplitter\",\n \"SameLocSplitter\",\n \"SingleWindowSplitter\",\n \"SlidingWindowSplitter\",\n \"TemporalTrainTestSplitter\",\n \"TestPlusTrainSplitter\",\n \"temporal_train_test_split\",\n]\n\nfrom sktime.split.cutoff import CutoffSplitter\nfrom sktime.split.expandinggreedy import ExpandingGreedySplitter\nfrom sktime.split.expandingwindow import ExpandingWindowSplitter\nfrom sktime.split.sameloc import SameLocSplitter\nfrom sktime.split.singlewindow import SingleWindowSplitter\nfrom sktime.split.slidingwindow import SlidingWindowSplitter\nfrom sktime.split.temporal_train_test_split import (\n TemporalTrainTestSplitter,\n temporal_train_test_split,\n)\nfrom sktime.split.testplustrain import TestPlusTrainSplitter\n", "path": "sktime/split/__init__.py"}]} | 1,068 | 217 |
gh_patches_debug_28195 | rasdani/github-patches | git_diff | pretix__pretix-2399 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Regression: File question required when editing ticket
This is the same issue as #1408, I'm not quite sure whether this should be a new bug because it's a regression or if the old bug should be reopened?
Long story short: We noticed in Pretix 4.3 that customers editing their order will be faced with file questions being `required` even though they have uploaded a file for that question already. The requirement comes from the `input` being marked as `required` and the browsers refusing to submit the form without something in there.
This was fixed in #1443, but it seems this doesn't work anymore. I haven't had time to look into this more, so I don't have a clue where it might fail.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pretix/base/forms/widgets.py`
Content:
```
1 #
2 # This file is part of pretix (Community Edition).
3 #
4 # Copyright (C) 2014-2020 Raphael Michel and contributors
5 # Copyright (C) 2020-2021 rami.io GmbH and contributors
6 #
7 # This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General
8 # Public License as published by the Free Software Foundation in version 3 of the License.
9 #
10 # ADDITIONAL TERMS APPLY: Pursuant to Section 7 of the GNU Affero General Public License, additional terms are
11 # applicable granting you additional permissions and placing additional restrictions on your usage of this software.
12 # Please refer to the pretix LICENSE file to obtain the full terms applicable to this work. If you did not receive
13 # this file, see <https://pretix.eu/about/en/license>.
14 #
15 # This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
16 # warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
17 # details.
18 #
19 # You should have received a copy of the GNU Affero General Public License along with this program. If not, see
20 # <https://www.gnu.org/licenses/>.
21 #
22
23 # This file is based on an earlier version of pretix which was released under the Apache License 2.0. The full text of
24 # the Apache License 2.0 can be obtained at <http://www.apache.org/licenses/LICENSE-2.0>.
25 #
26 # This file may have since been changed and any changes are released under the terms of AGPLv3 as described above. A
27 # full history of changes and contributors is available at <https://github.com/pretix/pretix>.
28 #
29 # This file contains Apache-licensed contributions copyrighted by: Felix Schäfer
30 #
31 # Unless required by applicable law or agreed to in writing, software distributed under the Apache License 2.0 is
32 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
33 # License for the specific language governing permissions and limitations under the License.
34
35 import os
36 from datetime import date
37
38 from django import forms
39 from django.utils.formats import get_format
40 from django.utils.functional import lazy
41 from django.utils.timezone import get_current_timezone, now
42 from django.utils.translation import gettext_lazy as _
43
44
45 class DatePickerWidget(forms.DateInput):
46 def __init__(self, attrs=None, date_format=None):
47 attrs = attrs or {}
48 if 'placeholder' in attrs:
49 del attrs['placeholder']
50 date_attrs = dict(attrs)
51 date_attrs.setdefault('class', 'form-control')
52 date_attrs['class'] += ' datepickerfield'
53 date_attrs['autocomplete'] = 'off'
54
55 def placeholder():
56 df = date_format or get_format('DATE_INPUT_FORMATS')[0]
57 return now().replace(
58 year=2000, month=12, day=31, hour=18, minute=0, second=0, microsecond=0
59 ).strftime(df)
60
61 date_attrs['placeholder'] = lazy(placeholder, str)
62
63 forms.DateInput.__init__(self, date_attrs, date_format)
64
65
66 class TimePickerWidget(forms.TimeInput):
67 def __init__(self, attrs=None, time_format=None):
68 attrs = attrs or {}
69 if 'placeholder' in attrs:
70 del attrs['placeholder']
71 time_attrs = dict(attrs)
72 time_attrs.setdefault('class', 'form-control')
73 time_attrs['class'] += ' timepickerfield'
74 time_attrs['autocomplete'] = 'off'
75
76 def placeholder():
77 tf = time_format or get_format('TIME_INPUT_FORMATS')[0]
78 return now().replace(
79 year=2000, month=1, day=1, hour=0, minute=0, second=0, microsecond=0
80 ).strftime(tf)
81
82 time_attrs['placeholder'] = lazy(placeholder, str)
83
84 forms.TimeInput.__init__(self, time_attrs, time_format)
85
86
87 class UploadedFileWidget(forms.ClearableFileInput):
88 def __init__(self, *args, **kwargs):
89 # Browsers can't recognize that the server already has a file uploaded
90 # Don't mark this input as being required if we already have an answer
91 # (this needs to be done via the attrs, otherwise we wouldn't get the "required" star on the field label)
92 attrs = kwargs.get('attrs', {})
93 if kwargs.get('required') and kwargs.get('initial'):
94 attrs.update({'required': None})
95 kwargs.update({'attrs': attrs})
96
97 self.position = kwargs.pop('position')
98 self.event = kwargs.pop('event')
99 self.answer = kwargs.pop('answer')
100 super().__init__(*args, **kwargs)
101
102 class FakeFile:
103 def __init__(self, file, position, event, answer):
104 self.file = file
105 self.position = position
106 self.event = event
107 self.answer = answer
108
109 def __str__(self):
110 return os.path.basename(self.file.name).split('.', 1)[-1]
111
112 @property
113 def url(self):
114 from pretix.base.models import OrderPosition
115 from pretix.multidomain.urlreverse import eventreverse
116
117 if isinstance(self.position, OrderPosition):
118 return eventreverse(self.event, 'presale:event.order.download.answer', kwargs={
119 'order': self.position.order.code,
120 'secret': self.position.order.secret,
121 'answer': self.answer.pk,
122 })
123 else:
124 return eventreverse(self.event, 'presale:event.cart.download.answer', kwargs={
125 'answer': self.answer.pk,
126 })
127
128 def format_value(self, value):
129 if self.is_initial(value):
130 return self.FakeFile(value, self.position, self.event, self.answer)
131
132
133 class SplitDateTimePickerWidget(forms.SplitDateTimeWidget):
134 template_name = 'pretixbase/forms/widgets/splitdatetime.html'
135
136 def __init__(self, attrs=None, date_format=None, time_format=None, min_date=None, max_date=None):
137 attrs = attrs or {}
138 if 'placeholder' in attrs:
139 del attrs['placeholder']
140 date_attrs = dict(attrs)
141 time_attrs = dict(attrs)
142 date_attrs.setdefault('class', 'form-control splitdatetimepart')
143 time_attrs.setdefault('class', 'form-control splitdatetimepart')
144 date_attrs.setdefault('autocomplete', 'off')
145 time_attrs.setdefault('autocomplete', 'off')
146 date_attrs['class'] += ' datepickerfield'
147 time_attrs['class'] += ' timepickerfield'
148 date_attrs['autocomplete'] = 'off'
149 time_attrs['autocomplete'] = 'off'
150 if min_date:
151 date_attrs['data-min'] = (
152 min_date if isinstance(min_date, date) else min_date.astimezone(get_current_timezone()).date()
153 ).isoformat()
154 if max_date:
155 date_attrs['data-max'] = (
156 max_date if isinstance(max_date, date) else max_date.astimezone(get_current_timezone()).date()
157 ).isoformat()
158
159 def date_placeholder():
160 df = date_format or get_format('DATE_INPUT_FORMATS')[0]
161 return now().replace(
162 year=2000, month=12, day=31, hour=18, minute=0, second=0, microsecond=0
163 ).strftime(df)
164
165 def time_placeholder():
166 tf = time_format or get_format('TIME_INPUT_FORMATS')[0]
167 return now().replace(
168 year=2000, month=1, day=1, hour=0, minute=0, second=0, microsecond=0
169 ).strftime(tf)
170
171 date_attrs['placeholder'] = lazy(date_placeholder, str)
172 time_attrs['placeholder'] = lazy(time_placeholder, str)
173
174 widgets = (
175 forms.DateInput(attrs=date_attrs, format=date_format),
176 forms.TimeInput(attrs=time_attrs, format=time_format),
177 )
178 # Skip one hierarchy level
179 forms.MultiWidget.__init__(self, widgets, attrs)
180
181
182 class BusinessBooleanRadio(forms.RadioSelect):
183 def __init__(self, require_business=False, attrs=None):
184 self.require_business = require_business
185 if self.require_business:
186 choices = (
187 ('business', _('Business or institutional customer')),
188 )
189 else:
190 choices = (
191 ('individual', _('Individual customer')),
192 ('business', _('Business or institutional customer')),
193 )
194 super().__init__(attrs, choices)
195
196 def format_value(self, value):
197 if self.require_business:
198 return 'business'
199 try:
200 return {True: 'business', False: 'individual'}[value]
201 except KeyError:
202 return 'individual'
203
204 def value_from_datadict(self, data, files, name):
205 value = data.get(name)
206 if self.require_business:
207 return True
208 return {
209 'business': True,
210 True: True,
211 'True': True,
212 'individual': False,
213 'False': False,
214 False: False,
215 }.get(value)
216
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/pretix/base/forms/widgets.py b/src/pretix/base/forms/widgets.py
--- a/src/pretix/base/forms/widgets.py
+++ b/src/pretix/base/forms/widgets.py
@@ -86,14 +86,6 @@
class UploadedFileWidget(forms.ClearableFileInput):
def __init__(self, *args, **kwargs):
- # Browsers can't recognize that the server already has a file uploaded
- # Don't mark this input as being required if we already have an answer
- # (this needs to be done via the attrs, otherwise we wouldn't get the "required" star on the field label)
- attrs = kwargs.get('attrs', {})
- if kwargs.get('required') and kwargs.get('initial'):
- attrs.update({'required': None})
- kwargs.update({'attrs': attrs})
-
self.position = kwargs.pop('position')
self.event = kwargs.pop('event')
self.answer = kwargs.pop('answer')
@@ -125,6 +117,15 @@
'answer': self.answer.pk,
})
+ def get_context(self, name, value, attrs):
+ # Browsers can't recognize that the server already has a file uploaded
+ # Don't mark this input as being required if we already have an answer
+ # (this needs to be done via the attrs, otherwise we wouldn't get the "required" star on the field label)
+ ctx = super().get_context(name, value, attrs)
+ if ctx['widget']['is_initial']:
+ ctx['widget']['attrs']['required'] = False
+ return ctx
+
def format_value(self, value):
if self.is_initial(value):
return self.FakeFile(value, self.position, self.event, self.answer)
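
For illustration only, and not part of the dataset record above: the same fix pattern reduced to a minimal, generic Django widget. The class and field names here are invented for the sketch; only the `get_context` override mirrors what the patch does.

```python
# Illustrative sketch of the pattern; names are ours, not pretix's.
from django import forms

class OptionalOnEditFileInput(forms.ClearableFileInput):
    """File input that stops demanding a new upload once a file is on record."""

    def get_context(self, name, value, attrs):
        ctx = super().get_context(name, value, attrs)
        if ctx["widget"]["is_initial"]:
            # The browser cannot know the server already has the file,
            # so drop the client-side required flag when rendering.
            ctx["widget"]["attrs"]["required"] = False
        return ctx

class TicketQuestionForm(forms.Form):
    # The field stays required for validation; only the rendered attribute changes.
    proof = forms.FileField(required=True, widget=OptionalOnEditFileInput)
```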
| {"golden_diff": "diff --git a/src/pretix/base/forms/widgets.py b/src/pretix/base/forms/widgets.py\n--- a/src/pretix/base/forms/widgets.py\n+++ b/src/pretix/base/forms/widgets.py\n@@ -86,14 +86,6 @@\n \n class UploadedFileWidget(forms.ClearableFileInput):\n def __init__(self, *args, **kwargs):\n- # Browsers can't recognize that the server already has a file uploaded\n- # Don't mark this input as being required if we already have an answer\n- # (this needs to be done via the attrs, otherwise we wouldn't get the \"required\" star on the field label)\n- attrs = kwargs.get('attrs', {})\n- if kwargs.get('required') and kwargs.get('initial'):\n- attrs.update({'required': None})\n- kwargs.update({'attrs': attrs})\n-\n self.position = kwargs.pop('position')\n self.event = kwargs.pop('event')\n self.answer = kwargs.pop('answer')\n@@ -125,6 +117,15 @@\n 'answer': self.answer.pk,\n })\n \n+ def get_context(self, name, value, attrs):\n+ # Browsers can't recognize that the server already has a file uploaded\n+ # Don't mark this input as being required if we already have an answer\n+ # (this needs to be done via the attrs, otherwise we wouldn't get the \"required\" star on the field label)\n+ ctx = super().get_context(name, value, attrs)\n+ if ctx['widget']['is_initial']:\n+ ctx['widget']['attrs']['required'] = False\n+ return ctx\n+\n def format_value(self, value):\n if self.is_initial(value):\n return self.FakeFile(value, self.position, self.event, self.answer)\n", "issue": "Regression: File question required when editing ticket\nThis is the same issue as #1408, I'm not quite sure whether this should be a new bug because it's a regression or if the old bug should be reopened?\r\n\r\nLong story short: We noticed in Pretix 4.3 that customers editing their order will be faced with file questions being `required` even though they have uploaded a file for that question already. The requirement comes from the `input` being marked as `required` and the browsers refusing to submit the form without something in there.\r\n\r\nThis was fixed in #1443, but it seems this doesn't work anymore. I haven't had time to look into this more, so I don't have a clue where it might fail.\nRegression: File question required when editing ticket\nThis is the same issue as #1408, I'm not quite sure whether this should be a new bug because it's a regression or if the old bug should be reopened?\r\n\r\nLong story short: We noticed in Pretix 4.3 that customers editing their order will be faced with file questions being `required` even though they have uploaded a file for that question already. The requirement comes from the `input` being marked as `required` and the browsers refusing to submit the form without something in there.\r\n\r\nThis was fixed in #1443, but it seems this doesn't work anymore. 
I haven't had time to look into this more, so I don't have a clue where it might fail.\n", "before_files": [{"content": "#\n# This file is part of pretix (Community Edition).\n#\n# Copyright (C) 2014-2020 Raphael Michel and contributors\n# Copyright (C) 2020-2021 rami.io GmbH and contributors\n#\n# This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General\n# Public License as published by the Free Software Foundation in version 3 of the License.\n#\n# ADDITIONAL TERMS APPLY: Pursuant to Section 7 of the GNU Affero General Public License, additional terms are\n# applicable granting you additional permissions and placing additional restrictions on your usage of this software.\n# Please refer to the pretix LICENSE file to obtain the full terms applicable to this work. If you did not receive\n# this file, see <https://pretix.eu/about/en/license>.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied\n# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more\n# details.\n#\n# You should have received a copy of the GNU Affero General Public License along with this program. If not, see\n# <https://www.gnu.org/licenses/>.\n#\n\n# This file is based on an earlier version of pretix which was released under the Apache License 2.0. The full text of\n# the Apache License 2.0 can be obtained at <http://www.apache.org/licenses/LICENSE-2.0>.\n#\n# This file may have since been changed and any changes are released under the terms of AGPLv3 as described above. A\n# full history of changes and contributors is available at <https://github.com/pretix/pretix>.\n#\n# This file contains Apache-licensed contributions copyrighted by: Felix Sch\u00e4fer\n#\n# Unless required by applicable law or agreed to in writing, software distributed under the Apache License 2.0 is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the\n# License for the specific language governing permissions and limitations under the License.\n\nimport os\nfrom datetime import date\n\nfrom django import forms\nfrom django.utils.formats import get_format\nfrom django.utils.functional import lazy\nfrom django.utils.timezone import get_current_timezone, now\nfrom django.utils.translation import gettext_lazy as _\n\n\nclass DatePickerWidget(forms.DateInput):\n def __init__(self, attrs=None, date_format=None):\n attrs = attrs or {}\n if 'placeholder' in attrs:\n del attrs['placeholder']\n date_attrs = dict(attrs)\n date_attrs.setdefault('class', 'form-control')\n date_attrs['class'] += ' datepickerfield'\n date_attrs['autocomplete'] = 'off'\n\n def placeholder():\n df = date_format or get_format('DATE_INPUT_FORMATS')[0]\n return now().replace(\n year=2000, month=12, day=31, hour=18, minute=0, second=0, microsecond=0\n ).strftime(df)\n\n date_attrs['placeholder'] = lazy(placeholder, str)\n\n forms.DateInput.__init__(self, date_attrs, date_format)\n\n\nclass TimePickerWidget(forms.TimeInput):\n def __init__(self, attrs=None, time_format=None):\n attrs = attrs or {}\n if 'placeholder' in attrs:\n del attrs['placeholder']\n time_attrs = dict(attrs)\n time_attrs.setdefault('class', 'form-control')\n time_attrs['class'] += ' timepickerfield'\n time_attrs['autocomplete'] = 'off'\n\n def placeholder():\n tf = time_format or get_format('TIME_INPUT_FORMATS')[0]\n return now().replace(\n year=2000, month=1, day=1, hour=0, minute=0, second=0, microsecond=0\n ).strftime(tf)\n\n time_attrs['placeholder'] = lazy(placeholder, str)\n\n forms.TimeInput.__init__(self, time_attrs, time_format)\n\n\nclass UploadedFileWidget(forms.ClearableFileInput):\n def __init__(self, *args, **kwargs):\n # Browsers can't recognize that the server already has a file uploaded\n # Don't mark this input as being required if we already have an answer\n # (this needs to be done via the attrs, otherwise we wouldn't get the \"required\" star on the field label)\n attrs = kwargs.get('attrs', {})\n if kwargs.get('required') and kwargs.get('initial'):\n attrs.update({'required': None})\n kwargs.update({'attrs': attrs})\n\n self.position = kwargs.pop('position')\n self.event = kwargs.pop('event')\n self.answer = kwargs.pop('answer')\n super().__init__(*args, **kwargs)\n\n class FakeFile:\n def __init__(self, file, position, event, answer):\n self.file = file\n self.position = position\n self.event = event\n self.answer = answer\n\n def __str__(self):\n return os.path.basename(self.file.name).split('.', 1)[-1]\n\n @property\n def url(self):\n from pretix.base.models import OrderPosition\n from pretix.multidomain.urlreverse import eventreverse\n\n if isinstance(self.position, OrderPosition):\n return eventreverse(self.event, 'presale:event.order.download.answer', kwargs={\n 'order': self.position.order.code,\n 'secret': self.position.order.secret,\n 'answer': self.answer.pk,\n })\n else:\n return eventreverse(self.event, 'presale:event.cart.download.answer', kwargs={\n 'answer': self.answer.pk,\n })\n\n def format_value(self, value):\n if self.is_initial(value):\n return self.FakeFile(value, self.position, self.event, self.answer)\n\n\nclass SplitDateTimePickerWidget(forms.SplitDateTimeWidget):\n template_name = 'pretixbase/forms/widgets/splitdatetime.html'\n\n def __init__(self, attrs=None, date_format=None, time_format=None, min_date=None, max_date=None):\n attrs = attrs or {}\n if 'placeholder' in attrs:\n del attrs['placeholder']\n date_attrs = dict(attrs)\n time_attrs = 
dict(attrs)\n date_attrs.setdefault('class', 'form-control splitdatetimepart')\n time_attrs.setdefault('class', 'form-control splitdatetimepart')\n date_attrs.setdefault('autocomplete', 'off')\n time_attrs.setdefault('autocomplete', 'off')\n date_attrs['class'] += ' datepickerfield'\n time_attrs['class'] += ' timepickerfield'\n date_attrs['autocomplete'] = 'off'\n time_attrs['autocomplete'] = 'off'\n if min_date:\n date_attrs['data-min'] = (\n min_date if isinstance(min_date, date) else min_date.astimezone(get_current_timezone()).date()\n ).isoformat()\n if max_date:\n date_attrs['data-max'] = (\n max_date if isinstance(max_date, date) else max_date.astimezone(get_current_timezone()).date()\n ).isoformat()\n\n def date_placeholder():\n df = date_format or get_format('DATE_INPUT_FORMATS')[0]\n return now().replace(\n year=2000, month=12, day=31, hour=18, minute=0, second=0, microsecond=0\n ).strftime(df)\n\n def time_placeholder():\n tf = time_format or get_format('TIME_INPUT_FORMATS')[0]\n return now().replace(\n year=2000, month=1, day=1, hour=0, minute=0, second=0, microsecond=0\n ).strftime(tf)\n\n date_attrs['placeholder'] = lazy(date_placeholder, str)\n time_attrs['placeholder'] = lazy(time_placeholder, str)\n\n widgets = (\n forms.DateInput(attrs=date_attrs, format=date_format),\n forms.TimeInput(attrs=time_attrs, format=time_format),\n )\n # Skip one hierarchy level\n forms.MultiWidget.__init__(self, widgets, attrs)\n\n\nclass BusinessBooleanRadio(forms.RadioSelect):\n def __init__(self, require_business=False, attrs=None):\n self.require_business = require_business\n if self.require_business:\n choices = (\n ('business', _('Business or institutional customer')),\n )\n else:\n choices = (\n ('individual', _('Individual customer')),\n ('business', _('Business or institutional customer')),\n )\n super().__init__(attrs, choices)\n\n def format_value(self, value):\n if self.require_business:\n return 'business'\n try:\n return {True: 'business', False: 'individual'}[value]\n except KeyError:\n return 'individual'\n\n def value_from_datadict(self, data, files, name):\n value = data.get(name)\n if self.require_business:\n return True\n return {\n 'business': True,\n True: True,\n 'True': True,\n 'individual': False,\n 'False': False,\n False: False,\n }.get(value)\n", "path": "src/pretix/base/forms/widgets.py"}], "after_files": [{"content": "#\n# This file is part of pretix (Community Edition).\n#\n# Copyright (C) 2014-2020 Raphael Michel and contributors\n# Copyright (C) 2020-2021 rami.io GmbH and contributors\n#\n# This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General\n# Public License as published by the Free Software Foundation in version 3 of the License.\n#\n# ADDITIONAL TERMS APPLY: Pursuant to Section 7 of the GNU Affero General Public License, additional terms are\n# applicable granting you additional permissions and placing additional restrictions on your usage of this software.\n# Please refer to the pretix LICENSE file to obtain the full terms applicable to this work. If you did not receive\n# this file, see <https://pretix.eu/about/en/license>.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied\n# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more\n# details.\n#\n# You should have received a copy of the GNU Affero General Public License along with this program. 
If not, see\n# <https://www.gnu.org/licenses/>.\n#\n\n# This file is based on an earlier version of pretix which was released under the Apache License 2.0. The full text of\n# the Apache License 2.0 can be obtained at <http://www.apache.org/licenses/LICENSE-2.0>.\n#\n# This file may have since been changed and any changes are released under the terms of AGPLv3 as described above. A\n# full history of changes and contributors is available at <https://github.com/pretix/pretix>.\n#\n# This file contains Apache-licensed contributions copyrighted by: Felix Sch\u00e4fer\n#\n# Unless required by applicable law or agreed to in writing, software distributed under the Apache License 2.0 is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations under the License.\n\nimport os\nfrom datetime import date\n\nfrom django import forms\nfrom django.utils.formats import get_format\nfrom django.utils.functional import lazy\nfrom django.utils.timezone import get_current_timezone, now\nfrom django.utils.translation import gettext_lazy as _\n\n\nclass DatePickerWidget(forms.DateInput):\n def __init__(self, attrs=None, date_format=None):\n attrs = attrs or {}\n if 'placeholder' in attrs:\n del attrs['placeholder']\n date_attrs = dict(attrs)\n date_attrs.setdefault('class', 'form-control')\n date_attrs['class'] += ' datepickerfield'\n date_attrs['autocomplete'] = 'off'\n\n def placeholder():\n df = date_format or get_format('DATE_INPUT_FORMATS')[0]\n return now().replace(\n year=2000, month=12, day=31, hour=18, minute=0, second=0, microsecond=0\n ).strftime(df)\n\n date_attrs['placeholder'] = lazy(placeholder, str)\n\n forms.DateInput.__init__(self, date_attrs, date_format)\n\n\nclass TimePickerWidget(forms.TimeInput):\n def __init__(self, attrs=None, time_format=None):\n attrs = attrs or {}\n if 'placeholder' in attrs:\n del attrs['placeholder']\n time_attrs = dict(attrs)\n time_attrs.setdefault('class', 'form-control')\n time_attrs['class'] += ' timepickerfield'\n time_attrs['autocomplete'] = 'off'\n\n def placeholder():\n tf = time_format or get_format('TIME_INPUT_FORMATS')[0]\n return now().replace(\n year=2000, month=1, day=1, hour=0, minute=0, second=0, microsecond=0\n ).strftime(tf)\n\n time_attrs['placeholder'] = lazy(placeholder, str)\n\n forms.TimeInput.__init__(self, time_attrs, time_format)\n\n\nclass UploadedFileWidget(forms.ClearableFileInput):\n def __init__(self, *args, **kwargs):\n self.position = kwargs.pop('position')\n self.event = kwargs.pop('event')\n self.answer = kwargs.pop('answer')\n super().__init__(*args, **kwargs)\n\n class FakeFile:\n def __init__(self, file, position, event, answer):\n self.file = file\n self.position = position\n self.event = event\n self.answer = answer\n\n def __str__(self):\n return os.path.basename(self.file.name).split('.', 1)[-1]\n\n @property\n def url(self):\n from pretix.base.models import OrderPosition\n from pretix.multidomain.urlreverse import eventreverse\n\n if isinstance(self.position, OrderPosition):\n return eventreverse(self.event, 'presale:event.order.download.answer', kwargs={\n 'order': self.position.order.code,\n 'secret': self.position.order.secret,\n 'answer': self.answer.pk,\n })\n else:\n return eventreverse(self.event, 'presale:event.cart.download.answer', kwargs={\n 'answer': self.answer.pk,\n })\n\n def get_context(self, name, value, attrs):\n # Browsers can't recognize that the server already has a 
file uploaded\n # Don't mark this input as being required if we already have an answer\n # (this needs to be done via the attrs, otherwise we wouldn't get the \"required\" star on the field label)\n ctx = super().get_context(name, value, attrs)\n if ctx['widget']['is_initial']:\n ctx['widget']['attrs']['required'] = False\n return ctx\n\n def format_value(self, value):\n if self.is_initial(value):\n return self.FakeFile(value, self.position, self.event, self.answer)\n\n\nclass SplitDateTimePickerWidget(forms.SplitDateTimeWidget):\n template_name = 'pretixbase/forms/widgets/splitdatetime.html'\n\n def __init__(self, attrs=None, date_format=None, time_format=None, min_date=None, max_date=None):\n attrs = attrs or {}\n if 'placeholder' in attrs:\n del attrs['placeholder']\n date_attrs = dict(attrs)\n time_attrs = dict(attrs)\n date_attrs.setdefault('class', 'form-control splitdatetimepart')\n time_attrs.setdefault('class', 'form-control splitdatetimepart')\n date_attrs.setdefault('autocomplete', 'off')\n time_attrs.setdefault('autocomplete', 'off')\n date_attrs['class'] += ' datepickerfield'\n time_attrs['class'] += ' timepickerfield'\n date_attrs['autocomplete'] = 'off'\n time_attrs['autocomplete'] = 'off'\n if min_date:\n date_attrs['data-min'] = (\n min_date if isinstance(min_date, date) else min_date.astimezone(get_current_timezone()).date()\n ).isoformat()\n if max_date:\n date_attrs['data-max'] = (\n max_date if isinstance(max_date, date) else max_date.astimezone(get_current_timezone()).date()\n ).isoformat()\n\n def date_placeholder():\n df = date_format or get_format('DATE_INPUT_FORMATS')[0]\n return now().replace(\n year=2000, month=12, day=31, hour=18, minute=0, second=0, microsecond=0\n ).strftime(df)\n\n def time_placeholder():\n tf = time_format or get_format('TIME_INPUT_FORMATS')[0]\n return now().replace(\n year=2000, month=1, day=1, hour=0, minute=0, second=0, microsecond=0\n ).strftime(tf)\n\n date_attrs['placeholder'] = lazy(date_placeholder, str)\n time_attrs['placeholder'] = lazy(time_placeholder, str)\n\n widgets = (\n forms.DateInput(attrs=date_attrs, format=date_format),\n forms.TimeInput(attrs=time_attrs, format=time_format),\n )\n # Skip one hierarchy level\n forms.MultiWidget.__init__(self, widgets, attrs)\n\n\nclass BusinessBooleanRadio(forms.RadioSelect):\n def __init__(self, require_business=False, attrs=None):\n self.require_business = require_business\n if self.require_business:\n choices = (\n ('business', _('Business or institutional customer')),\n )\n else:\n choices = (\n ('individual', _('Individual customer')),\n ('business', _('Business or institutional customer')),\n )\n super().__init__(attrs, choices)\n\n def format_value(self, value):\n if self.require_business:\n return 'business'\n try:\n return {True: 'business', False: 'individual'}[value]\n except KeyError:\n return 'individual'\n\n def value_from_datadict(self, data, files, name):\n value = data.get(name)\n if self.require_business:\n return True\n return {\n 'business': True,\n True: True,\n 'True': True,\n 'individual': False,\n 'False': False,\n False: False,\n }.get(value)\n", "path": "src/pretix/base/forms/widgets.py"}]} | 3,051 | 391 |
gh_patches_debug_15488 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-4193 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
aioredis raises CancelledError in _finish_span
### Which version of dd-trace-py are you using?
~~0.53.0~~ 0.58.0
### Which version of pip are you using?
21.3.1
### Which version of the libraries are you using?
django==3.2.11
django-redis==5.0.0
channels==3.0.4
daphne==3.0.2
### How can we reproduce your problem?
I am using code similar to the following:
asgi.py
```
import django
from channels.routing import get_default_application
from ddtrace.contrib.asgi import TraceMiddleware
django.setup()
application = TraceMiddleware(get_default_application())
```
routing.py
```
from django.urls import re_path
from my_app import consumers
websocket_urlpatterns = [
re_path(r"^ws/test/$", consumers.TestConsumer.as_asgi()),
]
```
my_app/consumers.py
```
from channels.generic.websocket import WebsocketConsumer
class TestConsumer(WebsocketConsumer):
groups = ["broadcast"]
def connect(self):
self.accept()
def receive(self, text_data=None, bytes_data=None):
raise Exception("An test exception")
```
I am running the application with: `ddtrace-run daphne asgi:application --bind 0.0.0.0 --port 8001`
### What is the result that you get?
I don't get any traces at all, and my logs show this:
```
handle: <Handle traced_13_execute_command.<locals>._finish_span(<Future cancelled>) at /usr/local/lib/python3.10/site-packages/ddtrace/contrib/aioredis/patch.py:140>
Traceback (most recent call last):
File "/usr/local/lib/python3.10/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/usr/local/lib/python3.10/site-packages/ddtrace/contrib/aioredis/patch.py", line 146, in _finish_span
future.result()
asyncio.exceptions.CancelledError
```
### What is the result that you expected?
No errors
--- END ISSUE ---
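For context, the failure described above can be reproduced outside ddtrace with a few lines of asyncio. This is a minimal sketch (not part of the original report), assuming Python 3.8+, where `asyncio.CancelledError` derives from `BaseException` rather than `Exception`, so an `except Exception` clause in a done-callback never catches it:

```python
import asyncio
import sys

# On Python 3.8+ this prints False: CancelledError no longer subclasses Exception.
print(sys.version_info[:2], issubclass(asyncio.CancelledError, Exception))

def _finish(future):
    try:
        future.result()  # re-raises CancelledError for a cancelled future
    except Exception:
        print("handled by `except Exception` (pre-3.8 behaviour)")
    except BaseException:
        print("on 3.8+ only `except BaseException` sees the cancellation")

async def main():
    task = asyncio.ensure_future(asyncio.sleep(3600))
    task.add_done_callback(_finish)
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    await asyncio.sleep(0)  # give the done-callback a chance to run

asyncio.run(main())
```

Because `_finish_span` in `ddtrace/contrib/aioredis/patch.py` only catches `Exception`, the cancellation escapes the callback and the event loop logs it, which is exactly the traceback shown above.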
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/contrib/aioredis/patch.py`
Content:
```
1 import asyncio
2 import sys
3
4 import aioredis
5
6 from ddtrace import config
7 from ddtrace.internal.utils.wrappers import unwrap as _u
8 from ddtrace.pin import Pin
9 from ddtrace.vendor.wrapt import wrap_function_wrapper as _w
10
11 from .. import trace_utils
12 from ...constants import ANALYTICS_SAMPLE_RATE_KEY
13 from ...constants import SPAN_MEASURED_KEY
14 from ...ext import SpanTypes
15 from ...ext import net
16 from ...ext import redis as redisx
17 from ...internal.utils.formats import stringify_cache_args
18 from ..redis.util import _trace_redis_cmd
19 from ..redis.util import _trace_redis_execute_pipeline
20
21
22 try:
23 from aioredis.commands.transaction import _RedisBuffer
24 except ImportError:
25 _RedisBuffer = None
26
27 config._add("aioredis", dict(_default_service="redis"))
28
29 aioredis_version_str = getattr(aioredis, "__version__", "0.0.0")
30 aioredis_version = tuple([int(i) for i in aioredis_version_str.split(".")])
31
32
33 def patch():
34 if getattr(aioredis, "_datadog_patch", False):
35 return
36 setattr(aioredis, "_datadog_patch", True)
37 pin = Pin()
38 if aioredis_version >= (2, 0):
39 _w("aioredis.client", "Redis.execute_command", traced_execute_command)
40 _w("aioredis.client", "Redis.pipeline", traced_pipeline)
41 _w("aioredis.client", "Pipeline.execute", traced_execute_pipeline)
42 pin.onto(aioredis.client.Redis)
43 else:
44 _w("aioredis", "Redis.execute", traced_13_execute_command)
45 _w("aioredis", "Redis.pipeline", traced_13_pipeline)
46 _w("aioredis.commands.transaction", "Pipeline.execute", traced_13_execute_pipeline)
47 pin.onto(aioredis.Redis)
48
49
50 def unpatch():
51 if not getattr(aioredis, "_datadog_patch", False):
52 return
53
54 setattr(aioredis, "_datadog_patch", False)
55 if aioredis_version >= (2, 0):
56 _u(aioredis.client.Redis, "execute_command")
57 _u(aioredis.client.Redis, "pipeline")
58 _u(aioredis.client.Pipeline, "execute")
59 else:
60 _u(aioredis.Redis, "execute")
61 _u(aioredis.Redis, "pipeline")
62 _u(aioredis.commands.transaction.Pipeline, "execute")
63
64
65 async def traced_execute_command(func, instance, args, kwargs):
66 pin = Pin.get_from(instance)
67 if not pin or not pin.enabled():
68 return await func(*args, **kwargs)
69
70 with _trace_redis_cmd(pin, config.aioredis, instance, args):
71 return await func(*args, **kwargs)
72
73
74 def traced_pipeline(func, instance, args, kwargs):
75 pipeline = func(*args, **kwargs)
76 pin = Pin.get_from(instance)
77 if pin:
78 pin.onto(pipeline)
79 return pipeline
80
81
82 async def traced_execute_pipeline(func, instance, args, kwargs):
83 pin = Pin.get_from(instance)
84 if not pin or not pin.enabled():
85 return await func(*args, **kwargs)
86
87 cmds = [stringify_cache_args(c) for c, _ in instance.command_stack]
88 resource = "\n".join(cmds)
89 with _trace_redis_execute_pipeline(pin, config.aioredis, resource, instance):
90 return await func(*args, **kwargs)
91
92
93 def traced_13_pipeline(func, instance, args, kwargs):
94 pipeline = func(*args, **kwargs)
95 pin = Pin.get_from(instance)
96 if pin:
97 pin.onto(pipeline)
98 return pipeline
99
100
101 def traced_13_execute_command(func, instance, args, kwargs):
102 # If we have a _RedisBuffer then we are in a pipeline
103 if isinstance(instance.connection, _RedisBuffer):
104 return func(*args, **kwargs)
105
106 pin = Pin.get_from(instance)
107 if not pin or not pin.enabled():
108 return func(*args, **kwargs)
109
110 # Don't activate the span since this operation is performed as a future which concludes sometime later on in
111 # execution so subsequent operations in the stack are not necessarily semantically related
112 # (we don't want this span to be the parent of all other spans created before the future is resolved)
113 parent = pin.tracer.current_span()
114 span = pin.tracer.start_span(
115 redisx.CMD,
116 service=trace_utils.ext_service(pin, config.aioredis),
117 span_type=SpanTypes.REDIS,
118 activate=False,
119 child_of=parent,
120 )
121
122 span.set_tag(SPAN_MEASURED_KEY)
123 query = stringify_cache_args(args)
124 span.resource = query
125 span.set_tag(redisx.RAWCMD, query)
126 if pin.tags:
127 span.set_tags(pin.tags)
128
129 span.set_tags(
130 {
131 net.TARGET_HOST: instance.address[0],
132 net.TARGET_PORT: instance.address[1],
133 redisx.DB: instance.db or 0,
134 }
135 )
136 span.set_metric(redisx.ARGS_LEN, len(args))
137 # set analytics sample rate if enabled
138 span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, config.aioredis.get_analytics_sample_rate())
139
140 def _finish_span(future):
141 try:
142 # Accessing the result will raise an exception if:
143 # - The future was cancelled
144 # - There was an error executing the future (`future.exception()`)
145 # - The future is in an invalid state
146 future.result()
147 except Exception:
148 span.set_exc_info(*sys.exc_info())
149 finally:
150 span.finish()
151
152 task = func(*args, **kwargs)
153 # Execute command returns a coroutine when no free connections are available
154 # https://github.com/aio-libs/aioredis-py/blob/v1.3.1/aioredis/pool.py#L191
155 task = asyncio.ensure_future(task)
156 task.add_done_callback(_finish_span)
157 return task
158
159
160 async def traced_13_execute_pipeline(func, instance, args, kwargs):
161 pin = Pin.get_from(instance)
162 if not pin or not pin.enabled():
163 return await func(*args, **kwargs)
164
165 cmds = []
166 for _, cmd, cmd_args, _ in instance._pipeline:
167 parts = [cmd]
168 parts.extend(cmd_args)
169 cmds.append(stringify_cache_args(parts))
170 resource = "\n".join(cmds)
171 with pin.tracer.trace(
172 redisx.CMD,
173 resource=resource,
174 service=trace_utils.ext_service(pin, config.aioredis),
175 span_type=SpanTypes.REDIS,
176 ) as span:
177
178 span.set_tags(
179 {
180 net.TARGET_HOST: instance._pool_or_conn.address[0],
181 net.TARGET_PORT: instance._pool_or_conn.address[1],
182 redisx.DB: instance._pool_or_conn.db or 0,
183 }
184 )
185
186 span.set_tag(SPAN_MEASURED_KEY)
187 span.set_tag(redisx.RAWCMD, resource)
188 span.set_metric(redisx.PIPELINE_LEN, len(instance._pipeline))
189 # set analytics sample rate if enabled
190 span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, config.aioredis.get_analytics_sample_rate())
191
192 return await func(*args, **kwargs)
193
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ddtrace/contrib/aioredis/patch.py b/ddtrace/contrib/aioredis/patch.py
--- a/ddtrace/contrib/aioredis/patch.py
+++ b/ddtrace/contrib/aioredis/patch.py
@@ -140,11 +140,12 @@
def _finish_span(future):
try:
# Accessing the result will raise an exception if:
- # - The future was cancelled
+ # - The future was cancelled (CancelledError)
# - There was an error executing the future (`future.exception()`)
# - The future is in an invalid state
future.result()
- except Exception:
+ # CancelledError exceptions extend from BaseException as of Python 3.8, instead of usual Exception
+ except BaseException:
span.set_exc_info(*sys.exc_info())
finally:
span.finish()
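A quick way to sanity-check the patched callback is to drive it with a cancelled task and a stand-in span object. This is only a sketch — `FakeSpan` and the assertions are illustrative and not part of ddtrace's test suite — but it exercises the same `BaseException` path the patch introduces:

```python
import asyncio
import sys

class FakeSpan:
    """Stand-in for ddtrace's Span; only the two methods the callback needs."""
    def __init__(self):
        self.finished = False
        self.exc_type = None

    def set_exc_info(self, exc_type, exc_value, exc_tb):
        self.exc_type = exc_type

    def finish(self):
        self.finished = True

def make_finish_span(span):
    # Mirrors the patched _finish_span: BaseException also covers CancelledError.
    def _finish_span(future):
        try:
            future.result()
        except BaseException:
            span.set_exc_info(*sys.exc_info())
        finally:
            span.finish()
    return _finish_span

async def main():
    span = FakeSpan()
    task = asyncio.ensure_future(asyncio.sleep(3600))
    task.add_done_callback(make_finish_span(span))
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    await asyncio.sleep(0)  # let the done-callback run
    assert span.finished
    assert span.exc_type is asyncio.CancelledError

asyncio.run(main())
```

With the pre-patch `except Exception`, the same scenario leaves the error unhandled inside the callback and the loop reports it, matching the log in the issue.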
| {"golden_diff": "diff --git a/ddtrace/contrib/aioredis/patch.py b/ddtrace/contrib/aioredis/patch.py\n--- a/ddtrace/contrib/aioredis/patch.py\n+++ b/ddtrace/contrib/aioredis/patch.py\n@@ -140,11 +140,12 @@\n def _finish_span(future):\n try:\n # Accessing the result will raise an exception if:\n- # - The future was cancelled\n+ # - The future was cancelled (CancelledError)\n # - There was an error executing the future (`future.exception()`)\n # - The future is in an invalid state\n future.result()\n- except Exception:\n+ # CancelledError exceptions extend from BaseException as of Python 3.8, instead of usual Exception\n+ except BaseException:\n span.set_exc_info(*sys.exc_info())\n finally:\n span.finish()\n", "issue": "aioredis raises CancelledError in _finish_span \n### Which version of dd-trace-py are you using?\r\n\r\n~~0.53.0~~ 0.58.0\r\n\r\n### Which version of pip are you using?\r\n\r\n21.3.1\r\n\r\n### Which version of the libraries are you using?\r\n\r\ndjango==3.2.11\r\ndjango-redis==5.0.0\r\nchannels==3.0.4\r\ndaphne==3.0.2\r\n\r\n### How can we reproduce your problem?\r\n\r\nI am using code similar to the following:\r\n\r\nasgi.py\r\n\r\n```\r\nimport django\r\nfrom channels.routing import get_default_application\r\nfrom ddtrace.contrib.asgi import TraceMiddleware\r\n\r\ndjango.setup()\r\napplication = TraceMiddleware(get_default_application())\r\n```\r\n\r\nrouting.py\r\n\r\n```\r\nfrom django.urls import re_path\r\nimport my_app.consumers\r\n\r\nwebsocket_urlpatterns = [\r\n re_path(r\"^ws/test/$\", consumers.TestConsumer.as_asgi()),\r\n]\r\n```\r\n\r\nmy_app/consumers.py\r\n\r\n```\r\nfrom channels.generic.websocket import WebsocketConsumer\r\n\r\nclass TestConsumer(WebsocketConsumer):\r\n groups = [\"broadcast\"]\r\n\r\n def connect(self):\r\n self.accept()\r\n\r\n def receive(self, text_data=None, bytes_data=None):\r\n raise Exception(\"An test exception\")\r\n```\r\n\r\nI am running the application with: `ddtrace-run daphne asgi:application --bind 0.0.0.0 --port 8001`\r\n\r\n### What is the result that you get?\r\n\r\nI don't get any traces at all, and my logs show this:\r\n\r\n```\r\nhandle: <Handle traced_13_execute_command.<locals>._finish_span(<Future cancelled>) at /usr/local/lib/python3.10/site-packages/ddtrace/contrib/aioredis/patch.py:140>\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.10/asyncio/events.py\", line 80, in _run\r\n self._context.run(self._callback, *self._args)\r\n File \"/usr/local/lib/python3.10/site-packages/ddtrace/contrib/aioredis/patch.py\", line 146, in _finish_span\r\n future.result()\r\nasyncio.exceptions.CancelledError\r\n```\r\n\r\n\r\n### What is the result that you expected?\r\n\r\nNo errors\r\n\n", "before_files": [{"content": "import asyncio\nimport sys\n\nimport aioredis\n\nfrom ddtrace import config\nfrom ddtrace.internal.utils.wrappers import unwrap as _u\nfrom ddtrace.pin import Pin\nfrom ddtrace.vendor.wrapt import wrap_function_wrapper as _w\n\nfrom .. 
import trace_utils\nfrom ...constants import ANALYTICS_SAMPLE_RATE_KEY\nfrom ...constants import SPAN_MEASURED_KEY\nfrom ...ext import SpanTypes\nfrom ...ext import net\nfrom ...ext import redis as redisx\nfrom ...internal.utils.formats import stringify_cache_args\nfrom ..redis.util import _trace_redis_cmd\nfrom ..redis.util import _trace_redis_execute_pipeline\n\n\ntry:\n from aioredis.commands.transaction import _RedisBuffer\nexcept ImportError:\n _RedisBuffer = None\n\nconfig._add(\"aioredis\", dict(_default_service=\"redis\"))\n\naioredis_version_str = getattr(aioredis, \"__version__\", \"0.0.0\")\naioredis_version = tuple([int(i) for i in aioredis_version_str.split(\".\")])\n\n\ndef patch():\n if getattr(aioredis, \"_datadog_patch\", False):\n return\n setattr(aioredis, \"_datadog_patch\", True)\n pin = Pin()\n if aioredis_version >= (2, 0):\n _w(\"aioredis.client\", \"Redis.execute_command\", traced_execute_command)\n _w(\"aioredis.client\", \"Redis.pipeline\", traced_pipeline)\n _w(\"aioredis.client\", \"Pipeline.execute\", traced_execute_pipeline)\n pin.onto(aioredis.client.Redis)\n else:\n _w(\"aioredis\", \"Redis.execute\", traced_13_execute_command)\n _w(\"aioredis\", \"Redis.pipeline\", traced_13_pipeline)\n _w(\"aioredis.commands.transaction\", \"Pipeline.execute\", traced_13_execute_pipeline)\n pin.onto(aioredis.Redis)\n\n\ndef unpatch():\n if not getattr(aioredis, \"_datadog_patch\", False):\n return\n\n setattr(aioredis, \"_datadog_patch\", False)\n if aioredis_version >= (2, 0):\n _u(aioredis.client.Redis, \"execute_command\")\n _u(aioredis.client.Redis, \"pipeline\")\n _u(aioredis.client.Pipeline, \"execute\")\n else:\n _u(aioredis.Redis, \"execute\")\n _u(aioredis.Redis, \"pipeline\")\n _u(aioredis.commands.transaction.Pipeline, \"execute\")\n\n\nasync def traced_execute_command(func, instance, args, kwargs):\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n return await func(*args, **kwargs)\n\n with _trace_redis_cmd(pin, config.aioredis, instance, args):\n return await func(*args, **kwargs)\n\n\ndef traced_pipeline(func, instance, args, kwargs):\n pipeline = func(*args, **kwargs)\n pin = Pin.get_from(instance)\n if pin:\n pin.onto(pipeline)\n return pipeline\n\n\nasync def traced_execute_pipeline(func, instance, args, kwargs):\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n return await func(*args, **kwargs)\n\n cmds = [stringify_cache_args(c) for c, _ in instance.command_stack]\n resource = \"\\n\".join(cmds)\n with _trace_redis_execute_pipeline(pin, config.aioredis, resource, instance):\n return await func(*args, **kwargs)\n\n\ndef traced_13_pipeline(func, instance, args, kwargs):\n pipeline = func(*args, **kwargs)\n pin = Pin.get_from(instance)\n if pin:\n pin.onto(pipeline)\n return pipeline\n\n\ndef traced_13_execute_command(func, instance, args, kwargs):\n # If we have a _RedisBuffer then we are in a pipeline\n if isinstance(instance.connection, _RedisBuffer):\n return func(*args, **kwargs)\n\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n return func(*args, **kwargs)\n\n # Don't activate the span since this operation is performed as a future which concludes sometime later on in\n # execution so subsequent operations in the stack are not necessarily semantically related\n # (we don't want this span to be the parent of all other spans created before the future is resolved)\n parent = pin.tracer.current_span()\n span = pin.tracer.start_span(\n redisx.CMD,\n service=trace_utils.ext_service(pin, 
config.aioredis),\n span_type=SpanTypes.REDIS,\n activate=False,\n child_of=parent,\n )\n\n span.set_tag(SPAN_MEASURED_KEY)\n query = stringify_cache_args(args)\n span.resource = query\n span.set_tag(redisx.RAWCMD, query)\n if pin.tags:\n span.set_tags(pin.tags)\n\n span.set_tags(\n {\n net.TARGET_HOST: instance.address[0],\n net.TARGET_PORT: instance.address[1],\n redisx.DB: instance.db or 0,\n }\n )\n span.set_metric(redisx.ARGS_LEN, len(args))\n # set analytics sample rate if enabled\n span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, config.aioredis.get_analytics_sample_rate())\n\n def _finish_span(future):\n try:\n # Accessing the result will raise an exception if:\n # - The future was cancelled\n # - There was an error executing the future (`future.exception()`)\n # - The future is in an invalid state\n future.result()\n except Exception:\n span.set_exc_info(*sys.exc_info())\n finally:\n span.finish()\n\n task = func(*args, **kwargs)\n # Execute command returns a coroutine when no free connections are available\n # https://github.com/aio-libs/aioredis-py/blob/v1.3.1/aioredis/pool.py#L191\n task = asyncio.ensure_future(task)\n task.add_done_callback(_finish_span)\n return task\n\n\nasync def traced_13_execute_pipeline(func, instance, args, kwargs):\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n return await func(*args, **kwargs)\n\n cmds = []\n for _, cmd, cmd_args, _ in instance._pipeline:\n parts = [cmd]\n parts.extend(cmd_args)\n cmds.append(stringify_cache_args(parts))\n resource = \"\\n\".join(cmds)\n with pin.tracer.trace(\n redisx.CMD,\n resource=resource,\n service=trace_utils.ext_service(pin, config.aioredis),\n span_type=SpanTypes.REDIS,\n ) as span:\n\n span.set_tags(\n {\n net.TARGET_HOST: instance._pool_or_conn.address[0],\n net.TARGET_PORT: instance._pool_or_conn.address[1],\n redisx.DB: instance._pool_or_conn.db or 0,\n }\n )\n\n span.set_tag(SPAN_MEASURED_KEY)\n span.set_tag(redisx.RAWCMD, resource)\n span.set_metric(redisx.PIPELINE_LEN, len(instance._pipeline))\n # set analytics sample rate if enabled\n span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, config.aioredis.get_analytics_sample_rate())\n\n return await func(*args, **kwargs)\n", "path": "ddtrace/contrib/aioredis/patch.py"}], "after_files": [{"content": "import asyncio\nimport sys\n\nimport aioredis\n\nfrom ddtrace import config\nfrom ddtrace.internal.utils.wrappers import unwrap as _u\nfrom ddtrace.pin import Pin\nfrom ddtrace.vendor.wrapt import wrap_function_wrapper as _w\n\nfrom .. 
import trace_utils\nfrom ...constants import ANALYTICS_SAMPLE_RATE_KEY\nfrom ...constants import SPAN_MEASURED_KEY\nfrom ...ext import SpanTypes\nfrom ...ext import net\nfrom ...ext import redis as redisx\nfrom ...internal.utils.formats import stringify_cache_args\nfrom ..redis.util import _trace_redis_cmd\nfrom ..redis.util import _trace_redis_execute_pipeline\n\n\ntry:\n from aioredis.commands.transaction import _RedisBuffer\nexcept ImportError:\n _RedisBuffer = None\n\nconfig._add(\"aioredis\", dict(_default_service=\"redis\"))\n\naioredis_version_str = getattr(aioredis, \"__version__\", \"0.0.0\")\naioredis_version = tuple([int(i) for i in aioredis_version_str.split(\".\")])\n\n\ndef patch():\n if getattr(aioredis, \"_datadog_patch\", False):\n return\n setattr(aioredis, \"_datadog_patch\", True)\n pin = Pin()\n if aioredis_version >= (2, 0):\n _w(\"aioredis.client\", \"Redis.execute_command\", traced_execute_command)\n _w(\"aioredis.client\", \"Redis.pipeline\", traced_pipeline)\n _w(\"aioredis.client\", \"Pipeline.execute\", traced_execute_pipeline)\n pin.onto(aioredis.client.Redis)\n else:\n _w(\"aioredis\", \"Redis.execute\", traced_13_execute_command)\n _w(\"aioredis\", \"Redis.pipeline\", traced_13_pipeline)\n _w(\"aioredis.commands.transaction\", \"Pipeline.execute\", traced_13_execute_pipeline)\n pin.onto(aioredis.Redis)\n\n\ndef unpatch():\n if not getattr(aioredis, \"_datadog_patch\", False):\n return\n\n setattr(aioredis, \"_datadog_patch\", False)\n if aioredis_version >= (2, 0):\n _u(aioredis.client.Redis, \"execute_command\")\n _u(aioredis.client.Redis, \"pipeline\")\n _u(aioredis.client.Pipeline, \"execute\")\n else:\n _u(aioredis.Redis, \"execute\")\n _u(aioredis.Redis, \"pipeline\")\n _u(aioredis.commands.transaction.Pipeline, \"execute\")\n\n\nasync def traced_execute_command(func, instance, args, kwargs):\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n return await func(*args, **kwargs)\n\n with _trace_redis_cmd(pin, config.aioredis, instance, args):\n return await func(*args, **kwargs)\n\n\ndef traced_pipeline(func, instance, args, kwargs):\n pipeline = func(*args, **kwargs)\n pin = Pin.get_from(instance)\n if pin:\n pin.onto(pipeline)\n return pipeline\n\n\nasync def traced_execute_pipeline(func, instance, args, kwargs):\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n return await func(*args, **kwargs)\n\n cmds = [stringify_cache_args(c) for c, _ in instance.command_stack]\n resource = \"\\n\".join(cmds)\n with _trace_redis_execute_pipeline(pin, config.aioredis, resource, instance):\n return await func(*args, **kwargs)\n\n\ndef traced_13_pipeline(func, instance, args, kwargs):\n pipeline = func(*args, **kwargs)\n pin = Pin.get_from(instance)\n if pin:\n pin.onto(pipeline)\n return pipeline\n\n\ndef traced_13_execute_command(func, instance, args, kwargs):\n # If we have a _RedisBuffer then we are in a pipeline\n if isinstance(instance.connection, _RedisBuffer):\n return func(*args, **kwargs)\n\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n return func(*args, **kwargs)\n\n # Don't activate the span since this operation is performed as a future which concludes sometime later on in\n # execution so subsequent operations in the stack are not necessarily semantically related\n # (we don't want this span to be the parent of all other spans created before the future is resolved)\n parent = pin.tracer.current_span()\n span = pin.tracer.start_span(\n redisx.CMD,\n service=trace_utils.ext_service(pin, 
config.aioredis),\n span_type=SpanTypes.REDIS,\n activate=False,\n child_of=parent,\n )\n\n span.set_tag(SPAN_MEASURED_KEY)\n query = stringify_cache_args(args)\n span.resource = query\n span.set_tag(redisx.RAWCMD, query)\n if pin.tags:\n span.set_tags(pin.tags)\n\n span.set_tags(\n {\n net.TARGET_HOST: instance.address[0],\n net.TARGET_PORT: instance.address[1],\n redisx.DB: instance.db or 0,\n }\n )\n span.set_metric(redisx.ARGS_LEN, len(args))\n # set analytics sample rate if enabled\n span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, config.aioredis.get_analytics_sample_rate())\n\n def _finish_span(future):\n try:\n # Accessing the result will raise an exception if:\n # - The future was cancelled (CancelledError)\n # - There was an error executing the future (`future.exception()`)\n # - The future is in an invalid state\n future.result()\n # CancelledError exceptions extend from BaseException as of Python 3.8, instead of usual Exception\n except BaseException:\n span.set_exc_info(*sys.exc_info())\n finally:\n span.finish()\n\n task = func(*args, **kwargs)\n # Execute command returns a coroutine when no free connections are available\n # https://github.com/aio-libs/aioredis-py/blob/v1.3.1/aioredis/pool.py#L191\n task = asyncio.ensure_future(task)\n task.add_done_callback(_finish_span)\n return task\n\n\nasync def traced_13_execute_pipeline(func, instance, args, kwargs):\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n return await func(*args, **kwargs)\n\n cmds = []\n for _, cmd, cmd_args, _ in instance._pipeline:\n parts = [cmd]\n parts.extend(cmd_args)\n cmds.append(stringify_cache_args(parts))\n resource = \"\\n\".join(cmds)\n with pin.tracer.trace(\n redisx.CMD,\n resource=resource,\n service=trace_utils.ext_service(pin, config.aioredis),\n span_type=SpanTypes.REDIS,\n ) as span:\n\n span.set_tags(\n {\n net.TARGET_HOST: instance._pool_or_conn.address[0],\n net.TARGET_PORT: instance._pool_or_conn.address[1],\n redisx.DB: instance._pool_or_conn.db or 0,\n }\n )\n\n span.set_tag(SPAN_MEASURED_KEY)\n span.set_tag(redisx.RAWCMD, resource)\n span.set_metric(redisx.PIPELINE_LEN, len(instance._pipeline))\n # set analytics sample rate if enabled\n span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, config.aioredis.get_analytics_sample_rate())\n\n return await func(*args, **kwargs)\n", "path": "ddtrace/contrib/aioredis/patch.py"}]} | 2,824 | 202 |
gh_patches_debug_43783 | rasdani/github-patches | git_diff | feast-dev__feast-3425 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`Feature.proto` should have a description field
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
I would like to attach a human-readable description to each field in a feature view. Since data scientists shop for individual features rather than whole feature views, it's useful to have documentation associated with each feature individually (rather than for the entire feature view). Persisting these descriptions here will allow us to surface them in various UIs.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
Similar to `Entity.proto`, we should add a description field to features and store them in the registry.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
For now, we're just using a `description` tag on each feature. This works, but it's clunky. It also means that in our UI, we have to remove certain tags before showing the raw tags to the users.
**Additional context**
Add any other context or screenshots about the feature request here.
--- END ISSUE ---
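To make the request concrete, the intended end state is a per-field `description` that lives next to `name` and `dtype` in the schema. The sketch below is hypothetical at the time of the report — the `description=` keyword does not exist yet — and the entity/source arguments simply follow the shapes used in the repository's example definitions; the parquet path and description strings are placeholders:

```python
from datetime import timedelta

from feast import Entity, FeatureView, Field, FileSource
from feast.types import Float32, Int64

driver = Entity(name="driver", join_keys=["driver_id"])

driver_stats_source = FileSource(
    name="driver_hourly_stats_source",
    path="data/driver_stats.parquet",  # placeholder path
    timestamp_field="event_timestamp",
)

driver_stats_fv = FeatureView(
    name="driver_hourly_stats",
    entities=[driver],
    ttl=timedelta(days=1),
    schema=[
        # Proposed: documentation carried by the field itself instead of a tag.
        Field(name="conv_rate", dtype=Float32,
              description="Share of ride requests the driver accepted"),
        Field(name="avg_daily_trips", dtype=Int64,
              description="Average number of trips per day"),
    ],
    source=driver_stats_source,
)
```

This keeps the raw `tags` dictionary free for genuinely user-defined metadata, which addresses the UI concern described above.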
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/feast/field.py`
Content:
```
1 # Copyright 2022 The Feast Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Dict, Optional
16
17 from typeguard import typechecked
18
19 from feast.feature import Feature
20 from feast.protos.feast.core.Feature_pb2 import FeatureSpecV2 as FieldProto
21 from feast.types import FeastType, from_value_type
22 from feast.value_type import ValueType
23
24
25 @typechecked
26 class Field:
27 """
28 A Field represents a set of values with the same structure.
29
30 Attributes:
31 name: The name of the field.
32 dtype: The type of the field, such as string or float.
33 tags: User-defined metadata in dictionary form.
34 """
35
36 name: str
37 dtype: FeastType
38 tags: Dict[str, str]
39
40 def __init__(
41 self,
42 *,
43 name: str,
44 dtype: FeastType,
45 tags: Optional[Dict[str, str]] = None,
46 ):
47 """
48 Creates a Field object.
49
50 Args:
51 name: The name of the field.
52 dtype: The type of the field, such as string or float.
53 tags (optional): User-defined metadata in dictionary form.
54 """
55 self.name = name
56 self.dtype = dtype
57 self.tags = tags or {}
58
59 def __eq__(self, other):
60 if type(self) != type(other):
61 return False
62
63 if (
64 self.name != other.name
65 or self.dtype != other.dtype
66 or self.tags != other.tags
67 ):
68 return False
69 return True
70
71 def __hash__(self):
72 return hash((self.name, hash(self.dtype)))
73
74 def __lt__(self, other):
75 return self.name < other.name
76
77 def __repr__(self):
78 return f"{self.name}-{self.dtype}"
79
80 def __str__(self):
81 return f"Field(name={self.name}, dtype={self.dtype}, tags={self.tags})"
82
83 def to_proto(self) -> FieldProto:
84 """Converts a Field object to its protobuf representation."""
85 value_type = self.dtype.to_value_type()
86 return FieldProto(name=self.name, value_type=value_type.value, tags=self.tags)
87
88 @classmethod
89 def from_proto(cls, field_proto: FieldProto):
90 """
91 Creates a Field object from a protobuf representation.
92
93 Args:
94 field_proto: FieldProto protobuf object
95 """
96 value_type = ValueType(field_proto.value_type)
97 return cls(
98 name=field_proto.name,
99 dtype=from_value_type(value_type=value_type),
100 tags=dict(field_proto.tags),
101 )
102
103 @classmethod
104 def from_feature(cls, feature: Feature):
105 """
106 Creates a Field object from a Feature object.
107
108 Args:
109 feature: Feature object to convert.
110 """
111 return cls(
112 name=feature.name, dtype=from_value_type(feature.dtype), tags=feature.labels
113 )
114
```
Path: `sdk/python/feast/templates/local/feature_repo/example_repo.py`
Content:
```
1 # This is an example feature definition file
2
3 from datetime import timedelta
4
5 import pandas as pd
6
7 from feast import (
8 Entity,
9 FeatureService,
10 FeatureView,
11 Field,
12 FileSource,
13 PushSource,
14 RequestSource,
15 )
16 from feast.on_demand_feature_view import on_demand_feature_view
17 from feast.types import Float32, Float64, Int64
18
19 # Define an entity for the driver. You can think of an entity as a primary key used to
20 # fetch features.
21 driver = Entity(name="driver", join_keys=["driver_id"])
22
23 # Read data from parquet files. Parquet is convenient for local development mode. For
24 # production, you can use your favorite DWH, such as BigQuery. See Feast documentation
25 # for more info.
26 driver_stats_source = FileSource(
27 name="driver_hourly_stats_source",
28 path="%PARQUET_PATH%",
29 timestamp_field="event_timestamp",
30 created_timestamp_column="created",
31 )
32
33 # Our parquet files contain sample data that includes a driver_id column, timestamps and
34 # three feature column. Here we define a Feature View that will allow us to serve this
35 # data to our model online.
36 driver_stats_fv = FeatureView(
37 # The unique name of this feature view. Two feature views in a single
38 # project cannot have the same name
39 name="driver_hourly_stats",
40 entities=[driver],
41 ttl=timedelta(days=1),
42 # The list of features defined below act as a schema to both define features
43 # for both materialization of features into a store, and are used as references
44 # during retrieval for building a training dataset or serving features
45 schema=[
46 Field(name="conv_rate", dtype=Float32),
47 Field(name="acc_rate", dtype=Float32),
48 Field(name="avg_daily_trips", dtype=Int64),
49 ],
50 online=True,
51 source=driver_stats_source,
52 # Tags are user defined key/value pairs that are attached to each
53 # feature view
54 tags={"team": "driver_performance"},
55 )
56
57 # Define a request data source which encodes features / information only
58 # available at request time (e.g. part of the user initiated HTTP request)
59 input_request = RequestSource(
60 name="vals_to_add",
61 schema=[
62 Field(name="val_to_add", dtype=Int64),
63 Field(name="val_to_add_2", dtype=Int64),
64 ],
65 )
66
67
68 # Define an on demand feature view which can generate new features based on
69 # existing feature views and RequestSource features
70 @on_demand_feature_view(
71 sources=[driver_stats_fv, input_request],
72 schema=[
73 Field(name="conv_rate_plus_val1", dtype=Float64),
74 Field(name="conv_rate_plus_val2", dtype=Float64),
75 ],
76 )
77 def transformed_conv_rate(inputs: pd.DataFrame) -> pd.DataFrame:
78 df = pd.DataFrame()
79 df["conv_rate_plus_val1"] = inputs["conv_rate"] + inputs["val_to_add"]
80 df["conv_rate_plus_val2"] = inputs["conv_rate"] + inputs["val_to_add_2"]
81 return df
82
83
84 # This groups features into a model version
85 driver_activity_v1 = FeatureService(
86 name="driver_activity_v1",
87 features=[
88 driver_stats_fv[["conv_rate"]], # Sub-selects a feature from a feature view
89 transformed_conv_rate, # Selects all features from the feature view
90 ],
91 )
92 driver_activity_v2 = FeatureService(
93 name="driver_activity_v2", features=[driver_stats_fv, transformed_conv_rate]
94 )
95
96 # Defines a way to push data (to be available offline, online or both) into Feast.
97 driver_stats_push_source = PushSource(
98 name="driver_stats_push_source",
99 batch_source=driver_stats_source,
100 )
101
102 # Defines a slightly modified version of the feature view from above, where the source
103 # has been changed to the push source. This allows fresh features to be directly pushed
104 # to the online store for this feature view.
105 driver_stats_fresh_fv = FeatureView(
106 name="driver_hourly_stats_fresh",
107 entities=[driver],
108 ttl=timedelta(days=1),
109 schema=[
110 Field(name="conv_rate", dtype=Float32),
111 Field(name="acc_rate", dtype=Float32),
112 Field(name="avg_daily_trips", dtype=Int64),
113 ],
114 online=True,
115 source=driver_stats_push_source, # Changed from above
116 tags={"team": "driver_performance"},
117 )
118
119
120 # Define an on demand feature view which can generate new features based on
121 # existing feature views and RequestSource features
122 @on_demand_feature_view(
123 sources=[driver_stats_fresh_fv, input_request], # relies on fresh version of FV
124 schema=[
125 Field(name="conv_rate_plus_val1", dtype=Float64),
126 Field(name="conv_rate_plus_val2", dtype=Float64),
127 ],
128 )
129 def transformed_conv_rate_fresh(inputs: pd.DataFrame) -> pd.DataFrame:
130 df = pd.DataFrame()
131 df["conv_rate_plus_val1"] = inputs["conv_rate"] + inputs["val_to_add"]
132 df["conv_rate_plus_val2"] = inputs["conv_rate"] + inputs["val_to_add_2"]
133 return df
134
135
136 driver_activity_v3 = FeatureService(
137 name="driver_activity_v3",
138 features=[driver_stats_fresh_fv, transformed_conv_rate_fresh],
139 )
140
```
Path: `sdk/python/feast/feature.py`
Content:
```
1 # Copyright 2020 The Feast Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Dict, Optional
16
17 from feast.protos.feast.core.Feature_pb2 import FeatureSpecV2 as FeatureSpecProto
18 from feast.protos.feast.types.Value_pb2 import ValueType as ValueTypeProto
19 from feast.value_type import ValueType
20
21
22 class Feature:
23 """
24 A Feature represents a class of serveable feature.
25
26 Args:
27 name: Name of the feature.
28 dtype: The type of the feature, such as string or float.
29 labels (optional): User-defined metadata in dictionary form.
30 """
31
32 def __init__(
33 self,
34 name: str,
35 dtype: ValueType,
36 labels: Optional[Dict[str, str]] = None,
37 ):
38 """Creates a Feature object."""
39 self._name = name
40 if not isinstance(dtype, ValueType):
41 raise ValueError("dtype is not a valid ValueType")
42 if dtype is ValueType.UNKNOWN:
43 raise ValueError(f"dtype cannot be {dtype}")
44 self._dtype = dtype
45 if labels is None:
46 self._labels = dict()
47 else:
48 self._labels = labels
49
50 def __eq__(self, other):
51 if self.name != other.name or self.dtype != other.dtype:
52 return False
53 return True
54
55 def __lt__(self, other):
56 return self.name < other.name
57
58 def __repr__(self):
59 # return string representation of the reference
60 return f"{self.name}-{self.dtype}"
61
62 def __str__(self):
63 # readable string of the reference
64 return f"Feature<{self.__repr__()}>"
65
66 @property
67 def name(self):
68 """
69 Gets the name of this feature.
70 """
71 return self._name
72
73 @property
74 def dtype(self) -> ValueType:
75 """
76 Gets the data type of this feature.
77 """
78 return self._dtype
79
80 @property
81 def labels(self) -> Dict[str, str]:
82 """
83 Gets the labels of this feature.
84 """
85 return self._labels
86
87 def to_proto(self) -> FeatureSpecProto:
88 """
89 Converts Feature object to its Protocol Buffer representation.
90
91 Returns:
92 A FeatureSpecProto protobuf.
93 """
94 value_type = ValueTypeProto.Enum.Value(self.dtype.name)
95
96 return FeatureSpecProto(
97 name=self.name,
98 value_type=value_type,
99 tags=self.labels,
100 )
101
102 @classmethod
103 def from_proto(cls, feature_proto: FeatureSpecProto):
104 """
105 Args:
106 feature_proto: FeatureSpecV2 protobuf object
107
108 Returns:
109 Feature object
110 """
111 feature = cls(
112 name=feature_proto.name,
113 dtype=ValueType(feature_proto.value_type),
114 labels=dict(feature_proto.tags),
115 )
116
117 return feature
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sdk/python/feast/feature.py b/sdk/python/feast/feature.py
--- a/sdk/python/feast/feature.py
+++ b/sdk/python/feast/feature.py
@@ -33,6 +33,7 @@
self,
name: str,
dtype: ValueType,
+ description: str = "",
labels: Optional[Dict[str, str]] = None,
):
"""Creates a Feature object."""
@@ -42,6 +43,7 @@
if dtype is ValueType.UNKNOWN:
raise ValueError(f"dtype cannot be {dtype}")
self._dtype = dtype
+ self._description = description
if labels is None:
self._labels = dict()
else:
@@ -77,6 +79,13 @@
"""
return self._dtype
+ @property
+ def description(self) -> str:
+ """
+ Gets the description of the feature
+ """
+ return self._description
+
@property
def labels(self) -> Dict[str, str]:
"""
@@ -96,6 +105,7 @@
return FeatureSpecProto(
name=self.name,
value_type=value_type,
+ description=self.description,
tags=self.labels,
)
@@ -111,6 +121,7 @@
feature = cls(
name=feature_proto.name,
dtype=ValueType(feature_proto.value_type),
+ description=feature_proto.description,
labels=dict(feature_proto.tags),
)
diff --git a/sdk/python/feast/field.py b/sdk/python/feast/field.py
--- a/sdk/python/feast/field.py
+++ b/sdk/python/feast/field.py
@@ -30,7 +30,7 @@
Attributes:
name: The name of the field.
dtype: The type of the field, such as string or float.
- tags: User-defined metadata in dictionary form.
+ tags (optional): User-defined metadata in dictionary form.
"""
name: str
@@ -42,6 +42,7 @@
*,
name: str,
dtype: FeastType,
+ description: str = "",
tags: Optional[Dict[str, str]] = None,
):
"""
@@ -54,6 +55,7 @@
"""
self.name = name
self.dtype = dtype
+ self.description = description
self.tags = tags or {}
def __eq__(self, other):
@@ -83,7 +85,12 @@
def to_proto(self) -> FieldProto:
"""Converts a Field object to its protobuf representation."""
value_type = self.dtype.to_value_type()
- return FieldProto(name=self.name, value_type=value_type.value, tags=self.tags)
+ return FieldProto(
+ name=self.name,
+ value_type=value_type.value,
+ description=self.description,
+ tags=self.tags,
+ )
@classmethod
def from_proto(cls, field_proto: FieldProto):
@@ -109,5 +116,8 @@
feature: Feature object to convert.
"""
return cls(
- name=feature.name, dtype=from_value_type(feature.dtype), tags=feature.labels
+ name=feature.name,
+ dtype=from_value_type(feature.dtype),
+ description=feature.description,
+ tags=feature.labels,
)
diff --git a/sdk/python/feast/templates/local/feature_repo/example_repo.py b/sdk/python/feast/templates/local/feature_repo/example_repo.py
--- a/sdk/python/feast/templates/local/feature_repo/example_repo.py
+++ b/sdk/python/feast/templates/local/feature_repo/example_repo.py
@@ -45,7 +45,7 @@
schema=[
Field(name="conv_rate", dtype=Float32),
Field(name="acc_rate", dtype=Float32),
- Field(name="avg_daily_trips", dtype=Int64),
+ Field(name="avg_daily_trips", dtype=Int64, description="Average daily trips"),
],
online=True,
source=driver_stats_source,
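With the patch applied, the description travels with the `Field`/`Feature` objects and is written into their protobuf form. A small usage sketch (it assumes `FeatureSpecV2` in `Feature.proto` has been regenerated with the new `description` field, which this diff relies on; the description text and tags are illustrative):

```python
from feast import Field
from feast.types import Float32

f = Field(
    name="conv_rate",
    dtype=Float32,
    description="Share of ride requests the driver accepted",
    tags={"team": "driver_performance"},
)

proto = f.to_proto()      # FeatureSpecV2 message
print(proto.description)  # -> "Share of ride requests the driver accepted"

# Note: the hunks above thread the description through Field.to_proto and
# Feature.to_proto/from_proto, but Field.from_proto is left unchanged, so a
# Field rebuilt from a proto falls back to the default empty description.
roundtrip = Field.from_proto(proto)
print(repr(roundtrip.description))  # -> ''
```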
| {"golden_diff": "diff --git a/sdk/python/feast/feature.py b/sdk/python/feast/feature.py\n--- a/sdk/python/feast/feature.py\n+++ b/sdk/python/feast/feature.py\n@@ -33,6 +33,7 @@\n self,\n name: str,\n dtype: ValueType,\n+ description: str = \"\",\n labels: Optional[Dict[str, str]] = None,\n ):\n \"\"\"Creates a Feature object.\"\"\"\n@@ -42,6 +43,7 @@\n if dtype is ValueType.UNKNOWN:\n raise ValueError(f\"dtype cannot be {dtype}\")\n self._dtype = dtype\n+ self._description = description\n if labels is None:\n self._labels = dict()\n else:\n@@ -77,6 +79,13 @@\n \"\"\"\n return self._dtype\n \n+ @property\n+ def description(self) -> str:\n+ \"\"\"\n+ Gets the description of the feature\n+ \"\"\"\n+ return self._description\n+\n @property\n def labels(self) -> Dict[str, str]:\n \"\"\"\n@@ -96,6 +105,7 @@\n return FeatureSpecProto(\n name=self.name,\n value_type=value_type,\n+ description=self.description,\n tags=self.labels,\n )\n \n@@ -111,6 +121,7 @@\n feature = cls(\n name=feature_proto.name,\n dtype=ValueType(feature_proto.value_type),\n+ description=feature_proto.description,\n labels=dict(feature_proto.tags),\n )\n \ndiff --git a/sdk/python/feast/field.py b/sdk/python/feast/field.py\n--- a/sdk/python/feast/field.py\n+++ b/sdk/python/feast/field.py\n@@ -30,7 +30,7 @@\n Attributes:\n name: The name of the field.\n dtype: The type of the field, such as string or float.\n- tags: User-defined metadata in dictionary form.\n+ tags (optional): User-defined metadata in dictionary form.\n \"\"\"\n \n name: str\n@@ -42,6 +42,7 @@\n *,\n name: str,\n dtype: FeastType,\n+ description: str = \"\",\n tags: Optional[Dict[str, str]] = None,\n ):\n \"\"\"\n@@ -54,6 +55,7 @@\n \"\"\"\n self.name = name\n self.dtype = dtype\n+ self.description = description\n self.tags = tags or {}\n \n def __eq__(self, other):\n@@ -83,7 +85,12 @@\n def to_proto(self) -> FieldProto:\n \"\"\"Converts a Field object to its protobuf representation.\"\"\"\n value_type = self.dtype.to_value_type()\n- return FieldProto(name=self.name, value_type=value_type.value, tags=self.tags)\n+ return FieldProto(\n+ name=self.name,\n+ value_type=value_type.value,\n+ description=self.description,\n+ tags=self.tags,\n+ )\n \n @classmethod\n def from_proto(cls, field_proto: FieldProto):\n@@ -109,5 +116,8 @@\n feature: Feature object to convert.\n \"\"\"\n return cls(\n- name=feature.name, dtype=from_value_type(feature.dtype), tags=feature.labels\n+ name=feature.name,\n+ dtype=from_value_type(feature.dtype),\n+ description=feature.description,\n+ tags=feature.labels,\n )\ndiff --git a/sdk/python/feast/templates/local/feature_repo/example_repo.py b/sdk/python/feast/templates/local/feature_repo/example_repo.py\n--- a/sdk/python/feast/templates/local/feature_repo/example_repo.py\n+++ b/sdk/python/feast/templates/local/feature_repo/example_repo.py\n@@ -45,7 +45,7 @@\n schema=[\n Field(name=\"conv_rate\", dtype=Float32),\n Field(name=\"acc_rate\", dtype=Float32),\n- Field(name=\"avg_daily_trips\", dtype=Int64),\n+ Field(name=\"avg_daily_trips\", dtype=Int64, description=\"Average daily trips\"),\n ],\n online=True,\n source=driver_stats_source,\n", "issue": "`Feature.proto` should have a description field\n**Is your feature request related to a problem? Please describe.**\r\nA clear and concise description of what the problem is. Ex. I'm always frustrated when [...]\r\n\r\nI would like to attach a human-readable description to each field in a feature view. 
Since data scientists shop for individual features and not feature-views, it's useful to have documentation associated with each feature individually (rather than for the entire feature view). Persisting it here will allow us to surface them in various UIs.\r\n\r\n**Describe the solution you'd like**\r\nA clear and concise description of what you want to happen.\r\n\r\nSimilar to `Entity.proto`, we should add a description field to features and store them in the registry.\r\n\r\n**Describe alternatives you've considered**\r\nA clear and concise description of any alternative solutions or features you've considered.\r\n\r\nFor now, we're just using a `description` tag on each feature. This works, but it's clunky. It also means that in our UI, we have to remove certain tags before showing the raw tags to the users.\r\n\r\n**Additional context**\r\nAdd any other context or screenshots about the feature request here.\r\n\n", "before_files": [{"content": "# Copyright 2022 The Feast Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Dict, Optional\n\nfrom typeguard import typechecked\n\nfrom feast.feature import Feature\nfrom feast.protos.feast.core.Feature_pb2 import FeatureSpecV2 as FieldProto\nfrom feast.types import FeastType, from_value_type\nfrom feast.value_type import ValueType\n\n\n@typechecked\nclass Field:\n \"\"\"\n A Field represents a set of values with the same structure.\n\n Attributes:\n name: The name of the field.\n dtype: The type of the field, such as string or float.\n tags: User-defined metadata in dictionary form.\n \"\"\"\n\n name: str\n dtype: FeastType\n tags: Dict[str, str]\n\n def __init__(\n self,\n *,\n name: str,\n dtype: FeastType,\n tags: Optional[Dict[str, str]] = None,\n ):\n \"\"\"\n Creates a Field object.\n\n Args:\n name: The name of the field.\n dtype: The type of the field, such as string or float.\n tags (optional): User-defined metadata in dictionary form.\n \"\"\"\n self.name = name\n self.dtype = dtype\n self.tags = tags or {}\n\n def __eq__(self, other):\n if type(self) != type(other):\n return False\n\n if (\n self.name != other.name\n or self.dtype != other.dtype\n or self.tags != other.tags\n ):\n return False\n return True\n\n def __hash__(self):\n return hash((self.name, hash(self.dtype)))\n\n def __lt__(self, other):\n return self.name < other.name\n\n def __repr__(self):\n return f\"{self.name}-{self.dtype}\"\n\n def __str__(self):\n return f\"Field(name={self.name}, dtype={self.dtype}, tags={self.tags})\"\n\n def to_proto(self) -> FieldProto:\n \"\"\"Converts a Field object to its protobuf representation.\"\"\"\n value_type = self.dtype.to_value_type()\n return FieldProto(name=self.name, value_type=value_type.value, tags=self.tags)\n\n @classmethod\n def from_proto(cls, field_proto: FieldProto):\n \"\"\"\n Creates a Field object from a protobuf representation.\n\n Args:\n field_proto: FieldProto protobuf object\n \"\"\"\n value_type = ValueType(field_proto.value_type)\n return cls(\n name=field_proto.name,\n 
dtype=from_value_type(value_type=value_type),\n tags=dict(field_proto.tags),\n )\n\n @classmethod\n def from_feature(cls, feature: Feature):\n \"\"\"\n Creates a Field object from a Feature object.\n\n Args:\n feature: Feature object to convert.\n \"\"\"\n return cls(\n name=feature.name, dtype=from_value_type(feature.dtype), tags=feature.labels\n )\n", "path": "sdk/python/feast/field.py"}, {"content": "# This is an example feature definition file\n\nfrom datetime import timedelta\n\nimport pandas as pd\n\nfrom feast import (\n Entity,\n FeatureService,\n FeatureView,\n Field,\n FileSource,\n PushSource,\n RequestSource,\n)\nfrom feast.on_demand_feature_view import on_demand_feature_view\nfrom feast.types import Float32, Float64, Int64\n\n# Define an entity for the driver. You can think of an entity as a primary key used to\n# fetch features.\ndriver = Entity(name=\"driver\", join_keys=[\"driver_id\"])\n\n# Read data from parquet files. Parquet is convenient for local development mode. For\n# production, you can use your favorite DWH, such as BigQuery. See Feast documentation\n# for more info.\ndriver_stats_source = FileSource(\n name=\"driver_hourly_stats_source\",\n path=\"%PARQUET_PATH%\",\n timestamp_field=\"event_timestamp\",\n created_timestamp_column=\"created\",\n)\n\n# Our parquet files contain sample data that includes a driver_id column, timestamps and\n# three feature column. Here we define a Feature View that will allow us to serve this\n# data to our model online.\ndriver_stats_fv = FeatureView(\n # The unique name of this feature view. Two feature views in a single\n # project cannot have the same name\n name=\"driver_hourly_stats\",\n entities=[driver],\n ttl=timedelta(days=1),\n # The list of features defined below act as a schema to both define features\n # for both materialization of features into a store, and are used as references\n # during retrieval for building a training dataset or serving features\n schema=[\n Field(name=\"conv_rate\", dtype=Float32),\n Field(name=\"acc_rate\", dtype=Float32),\n Field(name=\"avg_daily_trips\", dtype=Int64),\n ],\n online=True,\n source=driver_stats_source,\n # Tags are user defined key/value pairs that are attached to each\n # feature view\n tags={\"team\": \"driver_performance\"},\n)\n\n# Define a request data source which encodes features / information only\n# available at request time (e.g. 
part of the user initiated HTTP request)\ninput_request = RequestSource(\n name=\"vals_to_add\",\n schema=[\n Field(name=\"val_to_add\", dtype=Int64),\n Field(name=\"val_to_add_2\", dtype=Int64),\n ],\n)\n\n\n# Define an on demand feature view which can generate new features based on\n# existing feature views and RequestSource features\n@on_demand_feature_view(\n sources=[driver_stats_fv, input_request],\n schema=[\n Field(name=\"conv_rate_plus_val1\", dtype=Float64),\n Field(name=\"conv_rate_plus_val2\", dtype=Float64),\n ],\n)\ndef transformed_conv_rate(inputs: pd.DataFrame) -> pd.DataFrame:\n df = pd.DataFrame()\n df[\"conv_rate_plus_val1\"] = inputs[\"conv_rate\"] + inputs[\"val_to_add\"]\n df[\"conv_rate_plus_val2\"] = inputs[\"conv_rate\"] + inputs[\"val_to_add_2\"]\n return df\n\n\n# This groups features into a model version\ndriver_activity_v1 = FeatureService(\n name=\"driver_activity_v1\",\n features=[\n driver_stats_fv[[\"conv_rate\"]], # Sub-selects a feature from a feature view\n transformed_conv_rate, # Selects all features from the feature view\n ],\n)\ndriver_activity_v2 = FeatureService(\n name=\"driver_activity_v2\", features=[driver_stats_fv, transformed_conv_rate]\n)\n\n# Defines a way to push data (to be available offline, online or both) into Feast.\ndriver_stats_push_source = PushSource(\n name=\"driver_stats_push_source\",\n batch_source=driver_stats_source,\n)\n\n# Defines a slightly modified version of the feature view from above, where the source\n# has been changed to the push source. This allows fresh features to be directly pushed\n# to the online store for this feature view.\ndriver_stats_fresh_fv = FeatureView(\n name=\"driver_hourly_stats_fresh\",\n entities=[driver],\n ttl=timedelta(days=1),\n schema=[\n Field(name=\"conv_rate\", dtype=Float32),\n Field(name=\"acc_rate\", dtype=Float32),\n Field(name=\"avg_daily_trips\", dtype=Int64),\n ],\n online=True,\n source=driver_stats_push_source, # Changed from above\n tags={\"team\": \"driver_performance\"},\n)\n\n\n# Define an on demand feature view which can generate new features based on\n# existing feature views and RequestSource features\n@on_demand_feature_view(\n sources=[driver_stats_fresh_fv, input_request], # relies on fresh version of FV\n schema=[\n Field(name=\"conv_rate_plus_val1\", dtype=Float64),\n Field(name=\"conv_rate_plus_val2\", dtype=Float64),\n ],\n)\ndef transformed_conv_rate_fresh(inputs: pd.DataFrame) -> pd.DataFrame:\n df = pd.DataFrame()\n df[\"conv_rate_plus_val1\"] = inputs[\"conv_rate\"] + inputs[\"val_to_add\"]\n df[\"conv_rate_plus_val2\"] = inputs[\"conv_rate\"] + inputs[\"val_to_add_2\"]\n return df\n\n\ndriver_activity_v3 = FeatureService(\n name=\"driver_activity_v3\",\n features=[driver_stats_fresh_fv, transformed_conv_rate_fresh],\n)\n", "path": "sdk/python/feast/templates/local/feature_repo/example_repo.py"}, {"content": "# Copyright 2020 The Feast Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Dict, Optional\n\nfrom 
feast.protos.feast.core.Feature_pb2 import FeatureSpecV2 as FeatureSpecProto\nfrom feast.protos.feast.types.Value_pb2 import ValueType as ValueTypeProto\nfrom feast.value_type import ValueType\n\n\nclass Feature:\n \"\"\"\n A Feature represents a class of serveable feature.\n\n Args:\n name: Name of the feature.\n dtype: The type of the feature, such as string or float.\n labels (optional): User-defined metadata in dictionary form.\n \"\"\"\n\n def __init__(\n self,\n name: str,\n dtype: ValueType,\n labels: Optional[Dict[str, str]] = None,\n ):\n \"\"\"Creates a Feature object.\"\"\"\n self._name = name\n if not isinstance(dtype, ValueType):\n raise ValueError(\"dtype is not a valid ValueType\")\n if dtype is ValueType.UNKNOWN:\n raise ValueError(f\"dtype cannot be {dtype}\")\n self._dtype = dtype\n if labels is None:\n self._labels = dict()\n else:\n self._labels = labels\n\n def __eq__(self, other):\n if self.name != other.name or self.dtype != other.dtype:\n return False\n return True\n\n def __lt__(self, other):\n return self.name < other.name\n\n def __repr__(self):\n # return string representation of the reference\n return f\"{self.name}-{self.dtype}\"\n\n def __str__(self):\n # readable string of the reference\n return f\"Feature<{self.__repr__()}>\"\n\n @property\n def name(self):\n \"\"\"\n Gets the name of this feature.\n \"\"\"\n return self._name\n\n @property\n def dtype(self) -> ValueType:\n \"\"\"\n Gets the data type of this feature.\n \"\"\"\n return self._dtype\n\n @property\n def labels(self) -> Dict[str, str]:\n \"\"\"\n Gets the labels of this feature.\n \"\"\"\n return self._labels\n\n def to_proto(self) -> FeatureSpecProto:\n \"\"\"\n Converts Feature object to its Protocol Buffer representation.\n\n Returns:\n A FeatureSpecProto protobuf.\n \"\"\"\n value_type = ValueTypeProto.Enum.Value(self.dtype.name)\n\n return FeatureSpecProto(\n name=self.name,\n value_type=value_type,\n tags=self.labels,\n )\n\n @classmethod\n def from_proto(cls, feature_proto: FeatureSpecProto):\n \"\"\"\n Args:\n feature_proto: FeatureSpecV2 protobuf object\n\n Returns:\n Feature object\n \"\"\"\n feature = cls(\n name=feature_proto.name,\n dtype=ValueType(feature_proto.value_type),\n labels=dict(feature_proto.tags),\n )\n\n return feature\n", "path": "sdk/python/feast/feature.py"}], "after_files": [{"content": "# Copyright 2022 The Feast Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Dict, Optional\n\nfrom typeguard import typechecked\n\nfrom feast.feature import Feature\nfrom feast.protos.feast.core.Feature_pb2 import FeatureSpecV2 as FieldProto\nfrom feast.types import FeastType, from_value_type\nfrom feast.value_type import ValueType\n\n\n@typechecked\nclass Field:\n \"\"\"\n A Field represents a set of values with the same structure.\n\n Attributes:\n name: The name of the field.\n dtype: The type of the field, such as string or float.\n tags (optional): User-defined metadata in dictionary form.\n \"\"\"\n\n name: str\n dtype: FeastType\n tags: 
Dict[str, str]\n\n def __init__(\n self,\n *,\n name: str,\n dtype: FeastType,\n description: str = \"\",\n tags: Optional[Dict[str, str]] = None,\n ):\n \"\"\"\n Creates a Field object.\n\n Args:\n name: The name of the field.\n dtype: The type of the field, such as string or float.\n tags (optional): User-defined metadata in dictionary form.\n \"\"\"\n self.name = name\n self.dtype = dtype\n self.description = description\n self.tags = tags or {}\n\n def __eq__(self, other):\n if type(self) != type(other):\n return False\n\n if (\n self.name != other.name\n or self.dtype != other.dtype\n or self.tags != other.tags\n ):\n return False\n return True\n\n def __hash__(self):\n return hash((self.name, hash(self.dtype)))\n\n def __lt__(self, other):\n return self.name < other.name\n\n def __repr__(self):\n return f\"{self.name}-{self.dtype}\"\n\n def __str__(self):\n return f\"Field(name={self.name}, dtype={self.dtype}, tags={self.tags})\"\n\n def to_proto(self) -> FieldProto:\n \"\"\"Converts a Field object to its protobuf representation.\"\"\"\n value_type = self.dtype.to_value_type()\n return FieldProto(\n name=self.name,\n value_type=value_type.value,\n description=self.description,\n tags=self.tags,\n )\n\n @classmethod\n def from_proto(cls, field_proto: FieldProto):\n \"\"\"\n Creates a Field object from a protobuf representation.\n\n Args:\n field_proto: FieldProto protobuf object\n \"\"\"\n value_type = ValueType(field_proto.value_type)\n return cls(\n name=field_proto.name,\n dtype=from_value_type(value_type=value_type),\n tags=dict(field_proto.tags),\n )\n\n @classmethod\n def from_feature(cls, feature: Feature):\n \"\"\"\n Creates a Field object from a Feature object.\n\n Args:\n feature: Feature object to convert.\n \"\"\"\n return cls(\n name=feature.name,\n dtype=from_value_type(feature.dtype),\n description=feature.description,\n tags=feature.labels,\n )\n", "path": "sdk/python/feast/field.py"}, {"content": "# This is an example feature definition file\n\nfrom datetime import timedelta\n\nimport pandas as pd\n\nfrom feast import (\n Entity,\n FeatureService,\n FeatureView,\n Field,\n FileSource,\n PushSource,\n RequestSource,\n)\nfrom feast.on_demand_feature_view import on_demand_feature_view\nfrom feast.types import Float32, Float64, Int64\n\n# Define an entity for the driver. You can think of an entity as a primary key used to\n# fetch features.\ndriver = Entity(name=\"driver\", join_keys=[\"driver_id\"])\n\n# Read data from parquet files. Parquet is convenient for local development mode. For\n# production, you can use your favorite DWH, such as BigQuery. See Feast documentation\n# for more info.\ndriver_stats_source = FileSource(\n name=\"driver_hourly_stats_source\",\n path=\"%PARQUET_PATH%\",\n timestamp_field=\"event_timestamp\",\n created_timestamp_column=\"created\",\n)\n\n# Our parquet files contain sample data that includes a driver_id column, timestamps and\n# three feature column. Here we define a Feature View that will allow us to serve this\n# data to our model online.\ndriver_stats_fv = FeatureView(\n # The unique name of this feature view. 
Two feature views in a single\n # project cannot have the same name\n name=\"driver_hourly_stats\",\n entities=[driver],\n ttl=timedelta(days=1),\n # The list of features defined below act as a schema to both define features\n # for both materialization of features into a store, and are used as references\n # during retrieval for building a training dataset or serving features\n schema=[\n Field(name=\"conv_rate\", dtype=Float32),\n Field(name=\"acc_rate\", dtype=Float32),\n Field(name=\"avg_daily_trips\", dtype=Int64, description=\"Average daily trips\"),\n ],\n online=True,\n source=driver_stats_source,\n # Tags are user defined key/value pairs that are attached to each\n # feature view\n tags={\"team\": \"driver_performance\"},\n)\n\n# Define a request data source which encodes features / information only\n# available at request time (e.g. part of the user initiated HTTP request)\ninput_request = RequestSource(\n name=\"vals_to_add\",\n schema=[\n Field(name=\"val_to_add\", dtype=Int64),\n Field(name=\"val_to_add_2\", dtype=Int64),\n ],\n)\n\n\n# Define an on demand feature view which can generate new features based on\n# existing feature views and RequestSource features\n@on_demand_feature_view(\n sources=[driver_stats_fv, input_request],\n schema=[\n Field(name=\"conv_rate_plus_val1\", dtype=Float64),\n Field(name=\"conv_rate_plus_val2\", dtype=Float64),\n ],\n)\ndef transformed_conv_rate(inputs: pd.DataFrame) -> pd.DataFrame:\n df = pd.DataFrame()\n df[\"conv_rate_plus_val1\"] = inputs[\"conv_rate\"] + inputs[\"val_to_add\"]\n df[\"conv_rate_plus_val2\"] = inputs[\"conv_rate\"] + inputs[\"val_to_add_2\"]\n return df\n\n\n# This groups features into a model version\ndriver_activity_v1 = FeatureService(\n name=\"driver_activity_v1\",\n features=[\n driver_stats_fv[[\"conv_rate\"]], # Sub-selects a feature from a feature view\n transformed_conv_rate, # Selects all features from the feature view\n ],\n)\ndriver_activity_v2 = FeatureService(\n name=\"driver_activity_v2\", features=[driver_stats_fv, transformed_conv_rate]\n)\n\n# Defines a way to push data (to be available offline, online or both) into Feast.\ndriver_stats_push_source = PushSource(\n name=\"driver_stats_push_source\",\n batch_source=driver_stats_source,\n)\n\n# Defines a slightly modified version of the feature view from above, where the source\n# has been changed to the push source. 
This allows fresh features to be directly pushed\n# to the online store for this feature view.\ndriver_stats_fresh_fv = FeatureView(\n name=\"driver_hourly_stats_fresh\",\n entities=[driver],\n ttl=timedelta(days=1),\n schema=[\n Field(name=\"conv_rate\", dtype=Float32),\n Field(name=\"acc_rate\", dtype=Float32),\n Field(name=\"avg_daily_trips\", dtype=Int64),\n ],\n online=True,\n source=driver_stats_push_source, # Changed from above\n tags={\"team\": \"driver_performance\"},\n)\n\n\n# Define an on demand feature view which can generate new features based on\n# existing feature views and RequestSource features\n@on_demand_feature_view(\n sources=[driver_stats_fresh_fv, input_request], # relies on fresh version of FV\n schema=[\n Field(name=\"conv_rate_plus_val1\", dtype=Float64),\n Field(name=\"conv_rate_plus_val2\", dtype=Float64),\n ],\n)\ndef transformed_conv_rate_fresh(inputs: pd.DataFrame) -> pd.DataFrame:\n df = pd.DataFrame()\n df[\"conv_rate_plus_val1\"] = inputs[\"conv_rate\"] + inputs[\"val_to_add\"]\n df[\"conv_rate_plus_val2\"] = inputs[\"conv_rate\"] + inputs[\"val_to_add_2\"]\n return df\n\n\ndriver_activity_v3 = FeatureService(\n name=\"driver_activity_v3\",\n features=[driver_stats_fresh_fv, transformed_conv_rate_fresh],\n)\n", "path": "sdk/python/feast/templates/local/feature_repo/example_repo.py"}, {"content": "# Copyright 2020 The Feast Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Dict, Optional\n\nfrom feast.protos.feast.core.Feature_pb2 import FeatureSpecV2 as FeatureSpecProto\nfrom feast.protos.feast.types.Value_pb2 import ValueType as ValueTypeProto\nfrom feast.value_type import ValueType\n\n\nclass Feature:\n \"\"\"\n A Feature represents a class of serveable feature.\n\n Args:\n name: Name of the feature.\n dtype: The type of the feature, such as string or float.\n labels (optional): User-defined metadata in dictionary form.\n \"\"\"\n\n def __init__(\n self,\n name: str,\n dtype: ValueType,\n description: str = \"\",\n labels: Optional[Dict[str, str]] = None,\n ):\n \"\"\"Creates a Feature object.\"\"\"\n self._name = name\n if not isinstance(dtype, ValueType):\n raise ValueError(\"dtype is not a valid ValueType\")\n if dtype is ValueType.UNKNOWN:\n raise ValueError(f\"dtype cannot be {dtype}\")\n self._dtype = dtype\n self._description = description\n if labels is None:\n self._labels = dict()\n else:\n self._labels = labels\n\n def __eq__(self, other):\n if self.name != other.name or self.dtype != other.dtype:\n return False\n return True\n\n def __lt__(self, other):\n return self.name < other.name\n\n def __repr__(self):\n # return string representation of the reference\n return f\"{self.name}-{self.dtype}\"\n\n def __str__(self):\n # readable string of the reference\n return f\"Feature<{self.__repr__()}>\"\n\n @property\n def name(self):\n \"\"\"\n Gets the name of this feature.\n \"\"\"\n return self._name\n\n @property\n def dtype(self) -> ValueType:\n \"\"\"\n Gets the data type of this feature.\n \"\"\"\n return 
self._dtype\n\n @property\n def description(self) -> str:\n \"\"\"\n Gets the description of the feature\n \"\"\"\n return self._description\n\n @property\n def labels(self) -> Dict[str, str]:\n \"\"\"\n Gets the labels of this feature.\n \"\"\"\n return self._labels\n\n def to_proto(self) -> FeatureSpecProto:\n \"\"\"\n Converts Feature object to its Protocol Buffer representation.\n\n Returns:\n A FeatureSpecProto protobuf.\n \"\"\"\n value_type = ValueTypeProto.Enum.Value(self.dtype.name)\n\n return FeatureSpecProto(\n name=self.name,\n value_type=value_type,\n description=self.description,\n tags=self.labels,\n )\n\n @classmethod\n def from_proto(cls, feature_proto: FeatureSpecProto):\n \"\"\"\n Args:\n feature_proto: FeatureSpecV2 protobuf object\n\n Returns:\n Feature object\n \"\"\"\n feature = cls(\n name=feature_proto.name,\n dtype=ValueType(feature_proto.value_type),\n description=feature_proto.description,\n labels=dict(feature_proto.tags),\n )\n\n return feature\n", "path": "sdk/python/feast/feature.py"}]} | 3,940 | 898 |
gh_patches_debug_47400 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-2132 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for Py 3.9
With Py 3.9 out, we should add it to the tests at some point. Maybe that can wait until a 3.9.x (x>0) release is out, though.
Need to check if all the job thingies work out, as APS doesn't support py3.9 yet and there has been a [report](https://t.me/pythontelegrambotgroup/382731) that it doesn't work (with PTB).
On a related note: APS seems to be [preparing for v4.0](https://github.com/agronholm/apscheduler/issues/465), which will break some stuff, but also supports py3.9 and even uses the new ZoneInfo (also backporting to py3.6+), lifting the restriction to use `pytz` timezones. I already subscribed to releases. I guess updating APS in PTB should be done only when a 4.x (x>0) release is out and we're doing breaking things anyway.
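As a rough smoke test of the APScheduler-backed job logic on 3.9, something along these lines could be run once the interpreter is available (a minimal sketch against the v13-style API; the token and timings are placeholders):

```python
from telegram.ext import CallbackContext, Updater

def heartbeat(context: CallbackContext) -> None:
    # Fires via the APScheduler-backed JobQueue; if scheduling breaks on 3.9 this never prints.
    print("job queue is alive")

updater = Updater("TOKEN")  # placeholder bot token
updater.job_queue.run_repeating(heartbeat, interval=5, first=1)
updater.start_polling()
updater.idle()
```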
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 """The setup and build script for the python-telegram-bot library."""
3
4 import codecs
5 import os
6 import sys
7
8 from setuptools import setup, find_packages
9
10
11 def requirements():
12 """Build the requirements list for this project"""
13 requirements_list = []
14
15 with open('requirements.txt') as requirements:
16 for install in requirements:
17 requirements_list.append(install.strip())
18
19 return requirements_list
20
21
22 packages = find_packages(exclude=['tests*'])
23 requirements = requirements()
24
25 # Allow for a package install to not use the vendored urllib3
26 UPSTREAM_URLLIB3_FLAG = '--with-upstream-urllib3'
27 if UPSTREAM_URLLIB3_FLAG in sys.argv:
28 sys.argv.remove(UPSTREAM_URLLIB3_FLAG)
29 requirements.append('urllib3 >= 1.19.1')
30 packages = [x for x in packages if not x.startswith('telegram.vendor.ptb_urllib3')]
31
32 with codecs.open('README.rst', 'r', 'utf-8') as fd:
33 fn = os.path.join('telegram', 'version.py')
34 with open(fn) as fh:
35 code = compile(fh.read(), fn, 'exec')
36 exec(code)
37
38 setup(name='python-telegram-bot',
39 version=__version__,
40 author='Leandro Toledo',
41 author_email='[email protected]',
42 license='LGPLv3',
43 url='https://python-telegram-bot.org/',
44 keywords='python telegram bot api wrapper',
45 description="We have made you a wrapper you can't refuse",
46 long_description=fd.read(),
47 packages=packages,
48 install_requires=requirements,
49 extras_require={
50 'json': 'ujson',
51 'socks': 'PySocks'
52 },
53 include_package_data=True,
54 classifiers=[
55 'Development Status :: 5 - Production/Stable',
56 'Intended Audience :: Developers',
57 'License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)',
58 'Operating System :: OS Independent',
59 'Topic :: Software Development :: Libraries :: Python Modules',
60 'Topic :: Communications :: Chat',
61 'Topic :: Internet',
62 'Programming Language :: Python',
63 'Programming Language :: Python :: 3',
64 'Programming Language :: Python :: 3.6',
65 'Programming Language :: Python :: 3.7',
66 'Programming Language :: Python :: 3.8',
67 ],)
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -64,4 +64,5 @@
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
+ 'Programming Language :: Python :: 3.9',
],)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -64,4 +64,5 @@\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n+ 'Programming Language :: Python :: 3.9',\n ],)\n", "issue": "Add support for Py 3.9\nWith Py 3.9 out, we should add it to the tests at some point. Maybe that can wait, until 3.9.x, x>0 is out, though\r\n\r\nNeed to check, if all the job thingies work out, as APS doesn't support py3.9 yet and there has been a [report](https://t.me/pythontelegrambotgroup/382731) that it doesn't work (with PTB).\r\n\r\nOn a related note: APS seems to be [preparing for v4.0](https://github.com/agronholm/apscheduler/issues/465), which will break some stuff, but also supports py3.9 and even uses the new ZoneInfo (also backporting to py3.6+), lifting the restriction to use `pytz` timezones. I already subscribed to releases. I guess updating APS in PTB should be done only when 4.x, x>0 is out and we're doing breaking things anyway \u2026\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"The setup and build script for the python-telegram-bot library.\"\"\"\n\nimport codecs\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\n\n\ndef requirements():\n \"\"\"Build the requirements list for this project\"\"\"\n requirements_list = []\n\n with open('requirements.txt') as requirements:\n for install in requirements:\n requirements_list.append(install.strip())\n\n return requirements_list\n\n\npackages = find_packages(exclude=['tests*'])\nrequirements = requirements()\n\n# Allow for a package install to not use the vendored urllib3\nUPSTREAM_URLLIB3_FLAG = '--with-upstream-urllib3'\nif UPSTREAM_URLLIB3_FLAG in sys.argv:\n sys.argv.remove(UPSTREAM_URLLIB3_FLAG)\n requirements.append('urllib3 >= 1.19.1')\n packages = [x for x in packages if not x.startswith('telegram.vendor.ptb_urllib3')]\n\nwith codecs.open('README.rst', 'r', 'utf-8') as fd:\n fn = os.path.join('telegram', 'version.py')\n with open(fn) as fh:\n code = compile(fh.read(), fn, 'exec')\n exec(code)\n\n setup(name='python-telegram-bot',\n version=__version__,\n author='Leandro Toledo',\n author_email='[email protected]',\n license='LGPLv3',\n url='https://python-telegram-bot.org/',\n keywords='python telegram bot api wrapper',\n description=\"We have made you a wrapper you can't refuse\",\n long_description=fd.read(),\n packages=packages,\n install_requires=requirements,\n extras_require={\n 'json': 'ujson',\n 'socks': 'PySocks'\n },\n include_package_data=True,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)',\n 'Operating System :: OS Independent',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Communications :: Chat',\n 'Topic :: Internet',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\"\"\"The setup and build script for the python-telegram-bot library.\"\"\"\n\nimport codecs\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\n\n\ndef requirements():\n \"\"\"Build the requirements list for this project\"\"\"\n requirements_list = []\n\n with open('requirements.txt') as 
requirements:\n for install in requirements:\n requirements_list.append(install.strip())\n\n return requirements_list\n\n\npackages = find_packages(exclude=['tests*'])\nrequirements = requirements()\n\n# Allow for a package install to not use the vendored urllib3\nUPSTREAM_URLLIB3_FLAG = '--with-upstream-urllib3'\nif UPSTREAM_URLLIB3_FLAG in sys.argv:\n sys.argv.remove(UPSTREAM_URLLIB3_FLAG)\n requirements.append('urllib3 >= 1.19.1')\n packages = [x for x in packages if not x.startswith('telegram.vendor.ptb_urllib3')]\n\nwith codecs.open('README.rst', 'r', 'utf-8') as fd:\n fn = os.path.join('telegram', 'version.py')\n with open(fn) as fh:\n code = compile(fh.read(), fn, 'exec')\n exec(code)\n\n setup(name='python-telegram-bot',\n version=__version__,\n author='Leandro Toledo',\n author_email='[email protected]',\n license='LGPLv3',\n url='https://python-telegram-bot.org/',\n keywords='python telegram bot api wrapper',\n description=\"We have made you a wrapper you can't refuse\",\n long_description=fd.read(),\n packages=packages,\n install_requires=requirements,\n extras_require={\n 'json': 'ujson',\n 'socks': 'PySocks'\n },\n include_package_data=True,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)',\n 'Operating System :: OS Independent',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Communications :: Chat',\n 'Topic :: Internet',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],)\n", "path": "setup.py"}]} | 1,120 | 84 |
gh_patches_debug_15216 | rasdani/github-patches | git_diff | translate__translate-4027 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix properties roundtrip
After #3607, some changes need to be introduced in order to fix the properties roundtrip. More details in https://github.com/translate/translate/pull/3607#issuecomment-291440437
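Judging from the accompanying patch (the change to the `"del"` field in `convertline`), the regression appears to involve template lines that are a bare key with no delimiter whose translation is empty; the key name below is made up purely for illustration:

```
# template .properties line (no delimiter, no value)
pref.some.disabled.key

# what po2prop currently writes back on a roundtrip (a delimiter is appended)
pref.some.disabled.key = 

# what a faithful roundtrip should preserve
pref.some.disabled.key
```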
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `translate/convert/po2prop.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright 2002-2006 Zuza Software Foundation
4 #
5 # This file is part of translate.
6 #
7 # translate is free software; you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation; either version 2 of the License, or
10 # (at your option) any later version.
11 #
12 # translate is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with this program; if not, see <http://www.gnu.org/licenses/>.
19
20 """Convert Gettext PO localization files to Java/Mozilla .properties files.
21
22 See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/commands/prop2po.html
23 for examples and usage instructions.
24 """
25
26 import warnings
27
28 from translate.convert import accesskey, convert
29 from translate.misc import quote
30 from translate.storage import po, properties
31
32
33 eol = u"\n"
34
35
36 def applytranslation(key, propunit, inunit, mixedkeys):
37 """applies the translation for key in the po unit to the prop unit"""
38 # this converts the po-style string to a prop-style string
39 value = inunit.target
40 # handle mixed keys
41 for labelsuffix in properties.labelsuffixes:
42 if key.endswith(labelsuffix):
43 if key in mixedkeys:
44 value, akey = accesskey.extract(value)
45 break
46 else:
47 for akeysuffix in properties.accesskeysuffixes:
48 if key.endswith(akeysuffix):
49 if key in mixedkeys:
50 label, value = accesskey.extract(value)
51 if not value:
52 warnings.warn("Could not find accesskey for %s" % key)
53 # Use the source language accesskey
54 label, value = accesskey.extract(inunit.source)
55 else:
56 original = propunit.source
57 # For the sake of diffs we keep the case of the
58 # accesskey the same if we know the translation didn't
59 # change. Casing matters in XUL.
60 if value == propunit.source and original.lower() == value.lower():
61 if original.isupper():
62 value = value.upper()
63 elif original.islower():
64 value = value.lower()
65 return value
66
67
68 class reprop:
69
70 def __init__(self, templatefile, inputstore, personality, encoding=None,
71 remove_untranslated=False):
72 self.templatefile = templatefile
73 self.inputstore = inputstore
74 self.personality = properties.get_dialect(personality)
75 self.encoding = encoding
76 if self.encoding is None:
77 self.encoding = self.personality.default_encoding
78 self.remove_untranslated = remove_untranslated
79 self.mixer = accesskey.UnitMixer(properties.labelsuffixes,
80 properties.accesskeysuffixes)
81
82 def convertstore(self, includefuzzy=False):
83 self.includefuzzy = includefuzzy
84 self.inmultilinemsgid = False
85 self.inecho = False
86 self.inputstore.makeindex()
87 if self.personality.name == "gaia":
88 self._explode_gaia_plurals()
89 outputlines = []
90 # Readlines doesn't work for UTF-16, we read() and splitlines(keepends) instead
91 content = self.templatefile.read().decode(self.encoding)
92 for line in content.splitlines(True):
93 outputstr = self.convertline(line)
94 outputlines.append(outputstr)
95 return u"".join(outputlines).encode(self.encoding)
96
97 def _handle_accesskeys(self, inunit, currkey):
98 value = inunit.target
99 if self.personality.name == "mozilla":
100 keys = inunit.getlocations()
101 mixedkeys = self.mixer.match_entities(keys)
102 for key in keys:
103 if key == currkey and key in self.inputstore.locationindex:
104 propunit = self.inputstore.locationindex[key] # find the prop
105 value = applytranslation(key, propunit, inunit, mixedkeys)
106 break
107
108 return value
109
110 def _explode_gaia_plurals(self):
111 """Explode the gaia plurals."""
112 from translate.lang import data
113 for unit in self.inputstore.units:
114 if not unit.hasplural():
115 continue
116 if unit.isfuzzy() and not self.includefuzzy or not unit.istranslated():
117 continue
118
119 names = data.cldr_plural_categories
120 location = unit.getlocations()[0]
121 for category, text in zip(names, unit.target.strings):
122 # TODO: for now we assume all forms are present. We need to
123 # fill in the rest after mapping things to the proper CLDR names.
124 if category == 'zero':
125 # [zero] cases are translated as separate units
126 continue
127 new_unit = self.inputstore.addsourceunit(u"fish") # not used
128 new_location = '%s[%s]' % (location, category)
129 new_unit.addlocation(new_location)
130 new_unit.target = text
131 self.inputstore.locationindex[new_location] = new_unit
132
133 # We don't want the plural marker to be translated:
134 del self.inputstore.locationindex[location]
135
136 def convertline(self, line):
137 returnline = u""
138 # handle multiline msgid if we're in one
139 if self.inmultilinemsgid:
140 msgid = quote.rstripeol(line).strip()
141 # see if there's more
142 self.inmultilinemsgid = (msgid[-1:] == '\\')
143 # if we're echoing...
144 if self.inecho:
145 returnline = line
146 # otherwise, this could be a comment
147 elif line.strip()[:1] == '#':
148 returnline = quote.rstripeol(line) + eol
149 else:
150 line = quote.rstripeol(line)
151 delimiter_char, delimiter_pos = self.personality.find_delimiter(line)
152 if quote.rstripeol(line)[-1:] == '\\':
153 self.inmultilinemsgid = True
154 if delimiter_pos == -1:
155 key = self.personality.key_strip(line)
156 delimiter = " %s " % self.personality.delimiters[0]
157 else:
158 key = self.personality.key_strip(line[:delimiter_pos])
159 # Calculate space around the equal sign
160 prespace = line[line.find(' ', len(key)):delimiter_pos]
161 postspacestart = len(line[delimiter_pos+1:])
162 postspaceend = len(line[delimiter_pos+1:].lstrip())
163 postspace = line[delimiter_pos+1:delimiter_pos+(postspacestart-postspaceend)+1]
164 delimiter = prespace + delimiter_char + postspace
165 if key in self.inputstore.locationindex:
166 unit = self.inputstore.locationindex[key]
167 if unit is None or not unit.istranslated() and bool(unit.source) and self.remove_untranslated:
168 returnline = u""
169 self.inecho = False
170 else:
171 if unit.isfuzzy() and not self.includefuzzy or len(unit.target) == 0:
172 value = unit.source
173 else:
174 value = self._handle_accesskeys(unit, key)
175 self.inecho = False
176 assert isinstance(value, str)
177 returnline = "%(key)s%(del)s%(value)s%(term)s%(eol)s" % {
178 "key": "%s%s%s" % (self.personality.key_wrap_char,
179 key,
180 self.personality.key_wrap_char),
181 "del": delimiter,
182 "value": "%s%s%s" % (self.personality.value_wrap_char,
183 self.personality.encode(value),
184 self.personality.value_wrap_char),
185 "term": self.personality.pair_terminator,
186 "eol": eol,
187 }
188 else:
189 self.inecho = True
190 returnline = line + eol
191 assert isinstance(returnline, str)
192 return returnline
193
194
195 def convertstrings(inputfile, outputfile, templatefile, personality="strings",
196 includefuzzy=False, encoding=None, outputthreshold=None,
197 remove_untranslated=False):
198 """.strings specific convertor function"""
199 return convertprop(inputfile, outputfile, templatefile,
200 personality="strings", includefuzzy=includefuzzy,
201 encoding=encoding, outputthreshold=outputthreshold,
202 remove_untranslated=remove_untranslated)
203
204
205 def convertmozillaprop(inputfile, outputfile, templatefile,
206 includefuzzy=False, remove_untranslated=False,
207 outputthreshold=None):
208 """Mozilla specific convertor function"""
209 return convertprop(inputfile, outputfile, templatefile,
210 personality="mozilla", includefuzzy=includefuzzy,
211 remove_untranslated=remove_untranslated,
212 outputthreshold=outputthreshold)
213
214
215 def convertprop(inputfile, outputfile, templatefile, personality="java",
216 includefuzzy=False, encoding=None, remove_untranslated=False,
217 outputthreshold=None):
218 inputstore = po.pofile(inputfile)
219
220 if not convert.should_output_store(inputstore, outputthreshold):
221 return False
222
223 if templatefile is None:
224 raise ValueError("must have template file for properties files")
225 # convertor = po2prop()
226 else:
227 convertor = reprop(templatefile, inputstore, personality, encoding,
228 remove_untranslated)
229 outputprop = convertor.convertstore(includefuzzy)
230 outputfile.write(outputprop)
231 return True
232
233
234 formats = {
235 ("po", "properties"): ("properties", convertprop),
236 ("po", "lang"): ("lang", convertprop),
237 ("po", "strings"): ("strings", convertstrings),
238 }
239
240
241 def main(argv=None):
242 # handle command line options
243 parser = convert.ConvertOptionParser(formats, usetemplates=True,
244 description=__doc__)
245 parser.add_option(
246 "", "--personality", dest="personality",
247 default=properties.default_dialect, type="choice",
248 choices=list(properties.dialects),
249 help="override the input file format: %s (for .properties files, default: %s)" % (
250 ", ".join(properties.dialects), properties.default_dialect),
251 metavar="TYPE")
252 parser.add_option(
253 "", "--encoding", dest="encoding", default=None,
254 help="override the encoding set by the personality",
255 metavar="ENCODING")
256 parser.add_threshold_option()
257 parser.add_fuzzy_option()
258 parser.add_remove_untranslated_option()
259 parser.passthrough.append("personality")
260 parser.passthrough.append("encoding")
261 parser.run(argv)
262
263
264 if __name__ == '__main__':
265 main()
266
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/translate/convert/po2prop.py b/translate/convert/po2prop.py
--- a/translate/convert/po2prop.py
+++ b/translate/convert/po2prop.py
@@ -178,7 +178,7 @@
"key": "%s%s%s" % (self.personality.key_wrap_char,
key,
self.personality.key_wrap_char),
- "del": delimiter,
+ "del": delimiter if delimiter_pos != -1 or value else "",
"value": "%s%s%s" % (self.personality.value_wrap_char,
self.personality.encode(value),
self.personality.value_wrap_char),
| {"golden_diff": "diff --git a/translate/convert/po2prop.py b/translate/convert/po2prop.py\n--- a/translate/convert/po2prop.py\n+++ b/translate/convert/po2prop.py\n@@ -178,7 +178,7 @@\n \"key\": \"%s%s%s\" % (self.personality.key_wrap_char,\n key,\n self.personality.key_wrap_char),\n- \"del\": delimiter,\n+ \"del\": delimiter if delimiter_pos != -1 or value else \"\",\n \"value\": \"%s%s%s\" % (self.personality.value_wrap_char,\n self.personality.encode(value),\n self.personality.value_wrap_char),\n", "issue": "Fix properties roundtrip\nAfter #3607 some changes need to be introduced in order to fix properties roundtrip. More details in https://github.com/translate/translate/pull/3607#issuecomment-291440437 \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2002-2006 Zuza Software Foundation\n#\n# This file is part of translate.\n#\n# translate is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# translate is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Convert Gettext PO localization files to Java/Mozilla .properties files.\n\nSee: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/commands/prop2po.html\nfor examples and usage instructions.\n\"\"\"\n\nimport warnings\n\nfrom translate.convert import accesskey, convert\nfrom translate.misc import quote\nfrom translate.storage import po, properties\n\n\neol = u\"\\n\"\n\n\ndef applytranslation(key, propunit, inunit, mixedkeys):\n \"\"\"applies the translation for key in the po unit to the prop unit\"\"\"\n # this converts the po-style string to a prop-style string\n value = inunit.target\n # handle mixed keys\n for labelsuffix in properties.labelsuffixes:\n if key.endswith(labelsuffix):\n if key in mixedkeys:\n value, akey = accesskey.extract(value)\n break\n else:\n for akeysuffix in properties.accesskeysuffixes:\n if key.endswith(akeysuffix):\n if key in mixedkeys:\n label, value = accesskey.extract(value)\n if not value:\n warnings.warn(\"Could not find accesskey for %s\" % key)\n # Use the source language accesskey\n label, value = accesskey.extract(inunit.source)\n else:\n original = propunit.source\n # For the sake of diffs we keep the case of the\n # accesskey the same if we know the translation didn't\n # change. 
Casing matters in XUL.\n if value == propunit.source and original.lower() == value.lower():\n if original.isupper():\n value = value.upper()\n elif original.islower():\n value = value.lower()\n return value\n\n\nclass reprop:\n\n def __init__(self, templatefile, inputstore, personality, encoding=None,\n remove_untranslated=False):\n self.templatefile = templatefile\n self.inputstore = inputstore\n self.personality = properties.get_dialect(personality)\n self.encoding = encoding\n if self.encoding is None:\n self.encoding = self.personality.default_encoding\n self.remove_untranslated = remove_untranslated\n self.mixer = accesskey.UnitMixer(properties.labelsuffixes,\n properties.accesskeysuffixes)\n\n def convertstore(self, includefuzzy=False):\n self.includefuzzy = includefuzzy\n self.inmultilinemsgid = False\n self.inecho = False\n self.inputstore.makeindex()\n if self.personality.name == \"gaia\":\n self._explode_gaia_plurals()\n outputlines = []\n # Readlines doesn't work for UTF-16, we read() and splitlines(keepends) instead\n content = self.templatefile.read().decode(self.encoding)\n for line in content.splitlines(True):\n outputstr = self.convertline(line)\n outputlines.append(outputstr)\n return u\"\".join(outputlines).encode(self.encoding)\n\n def _handle_accesskeys(self, inunit, currkey):\n value = inunit.target\n if self.personality.name == \"mozilla\":\n keys = inunit.getlocations()\n mixedkeys = self.mixer.match_entities(keys)\n for key in keys:\n if key == currkey and key in self.inputstore.locationindex:\n propunit = self.inputstore.locationindex[key] # find the prop\n value = applytranslation(key, propunit, inunit, mixedkeys)\n break\n\n return value\n\n def _explode_gaia_plurals(self):\n \"\"\"Explode the gaia plurals.\"\"\"\n from translate.lang import data\n for unit in self.inputstore.units:\n if not unit.hasplural():\n continue\n if unit.isfuzzy() and not self.includefuzzy or not unit.istranslated():\n continue\n\n names = data.cldr_plural_categories\n location = unit.getlocations()[0]\n for category, text in zip(names, unit.target.strings):\n # TODO: for now we assume all forms are present. 
We need to\n # fill in the rest after mapping things to the proper CLDR names.\n if category == 'zero':\n # [zero] cases are translated as separate units\n continue\n new_unit = self.inputstore.addsourceunit(u\"fish\") # not used\n new_location = '%s[%s]' % (location, category)\n new_unit.addlocation(new_location)\n new_unit.target = text\n self.inputstore.locationindex[new_location] = new_unit\n\n # We don't want the plural marker to be translated:\n del self.inputstore.locationindex[location]\n\n def convertline(self, line):\n returnline = u\"\"\n # handle multiline msgid if we're in one\n if self.inmultilinemsgid:\n msgid = quote.rstripeol(line).strip()\n # see if there's more\n self.inmultilinemsgid = (msgid[-1:] == '\\\\')\n # if we're echoing...\n if self.inecho:\n returnline = line\n # otherwise, this could be a comment\n elif line.strip()[:1] == '#':\n returnline = quote.rstripeol(line) + eol\n else:\n line = quote.rstripeol(line)\n delimiter_char, delimiter_pos = self.personality.find_delimiter(line)\n if quote.rstripeol(line)[-1:] == '\\\\':\n self.inmultilinemsgid = True\n if delimiter_pos == -1:\n key = self.personality.key_strip(line)\n delimiter = \" %s \" % self.personality.delimiters[0]\n else:\n key = self.personality.key_strip(line[:delimiter_pos])\n # Calculate space around the equal sign\n prespace = line[line.find(' ', len(key)):delimiter_pos]\n postspacestart = len(line[delimiter_pos+1:])\n postspaceend = len(line[delimiter_pos+1:].lstrip())\n postspace = line[delimiter_pos+1:delimiter_pos+(postspacestart-postspaceend)+1]\n delimiter = prespace + delimiter_char + postspace\n if key in self.inputstore.locationindex:\n unit = self.inputstore.locationindex[key]\n if unit is None or not unit.istranslated() and bool(unit.source) and self.remove_untranslated:\n returnline = u\"\"\n self.inecho = False\n else:\n if unit.isfuzzy() and not self.includefuzzy or len(unit.target) == 0:\n value = unit.source\n else:\n value = self._handle_accesskeys(unit, key)\n self.inecho = False\n assert isinstance(value, str)\n returnline = \"%(key)s%(del)s%(value)s%(term)s%(eol)s\" % {\n \"key\": \"%s%s%s\" % (self.personality.key_wrap_char,\n key,\n self.personality.key_wrap_char),\n \"del\": delimiter,\n \"value\": \"%s%s%s\" % (self.personality.value_wrap_char,\n self.personality.encode(value),\n self.personality.value_wrap_char),\n \"term\": self.personality.pair_terminator,\n \"eol\": eol,\n }\n else:\n self.inecho = True\n returnline = line + eol\n assert isinstance(returnline, str)\n return returnline\n\n\ndef convertstrings(inputfile, outputfile, templatefile, personality=\"strings\",\n includefuzzy=False, encoding=None, outputthreshold=None,\n remove_untranslated=False):\n \"\"\".strings specific convertor function\"\"\"\n return convertprop(inputfile, outputfile, templatefile,\n personality=\"strings\", includefuzzy=includefuzzy,\n encoding=encoding, outputthreshold=outputthreshold,\n remove_untranslated=remove_untranslated)\n\n\ndef convertmozillaprop(inputfile, outputfile, templatefile,\n includefuzzy=False, remove_untranslated=False,\n outputthreshold=None):\n \"\"\"Mozilla specific convertor function\"\"\"\n return convertprop(inputfile, outputfile, templatefile,\n personality=\"mozilla\", includefuzzy=includefuzzy,\n remove_untranslated=remove_untranslated,\n outputthreshold=outputthreshold)\n\n\ndef convertprop(inputfile, outputfile, templatefile, personality=\"java\",\n includefuzzy=False, encoding=None, remove_untranslated=False,\n outputthreshold=None):\n inputstore = 
po.pofile(inputfile)\n\n if not convert.should_output_store(inputstore, outputthreshold):\n return False\n\n if templatefile is None:\n raise ValueError(\"must have template file for properties files\")\n # convertor = po2prop()\n else:\n convertor = reprop(templatefile, inputstore, personality, encoding,\n remove_untranslated)\n outputprop = convertor.convertstore(includefuzzy)\n outputfile.write(outputprop)\n return True\n\n\nformats = {\n (\"po\", \"properties\"): (\"properties\", convertprop),\n (\"po\", \"lang\"): (\"lang\", convertprop),\n (\"po\", \"strings\"): (\"strings\", convertstrings),\n}\n\n\ndef main(argv=None):\n # handle command line options\n parser = convert.ConvertOptionParser(formats, usetemplates=True,\n description=__doc__)\n parser.add_option(\n \"\", \"--personality\", dest=\"personality\",\n default=properties.default_dialect, type=\"choice\",\n choices=list(properties.dialects),\n help=\"override the input file format: %s (for .properties files, default: %s)\" % (\n \", \".join(properties.dialects), properties.default_dialect),\n metavar=\"TYPE\")\n parser.add_option(\n \"\", \"--encoding\", dest=\"encoding\", default=None,\n help=\"override the encoding set by the personality\",\n metavar=\"ENCODING\")\n parser.add_threshold_option()\n parser.add_fuzzy_option()\n parser.add_remove_untranslated_option()\n parser.passthrough.append(\"personality\")\n parser.passthrough.append(\"encoding\")\n parser.run(argv)\n\n\nif __name__ == '__main__':\n main()\n", "path": "translate/convert/po2prop.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2002-2006 Zuza Software Foundation\n#\n# This file is part of translate.\n#\n# translate is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# translate is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Convert Gettext PO localization files to Java/Mozilla .properties files.\n\nSee: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/commands/prop2po.html\nfor examples and usage instructions.\n\"\"\"\n\nimport warnings\n\nfrom translate.convert import accesskey, convert\nfrom translate.misc import quote\nfrom translate.storage import po, properties\n\n\neol = u\"\\n\"\n\n\ndef applytranslation(key, propunit, inunit, mixedkeys):\n \"\"\"applies the translation for key in the po unit to the prop unit\"\"\"\n # this converts the po-style string to a prop-style string\n value = inunit.target\n # handle mixed keys\n for labelsuffix in properties.labelsuffixes:\n if key.endswith(labelsuffix):\n if key in mixedkeys:\n value, akey = accesskey.extract(value)\n break\n else:\n for akeysuffix in properties.accesskeysuffixes:\n if key.endswith(akeysuffix):\n if key in mixedkeys:\n label, value = accesskey.extract(value)\n if not value:\n warnings.warn(\"Could not find accesskey for %s\" % key)\n # Use the source language accesskey\n label, value = accesskey.extract(inunit.source)\n else:\n original = propunit.source\n # For the sake of diffs we keep the case of the\n # accesskey the same if we know the translation didn't\n # change. Casing matters in XUL.\n if value == propunit.source and original.lower() == value.lower():\n if original.isupper():\n value = value.upper()\n elif original.islower():\n value = value.lower()\n return value\n\n\nclass reprop:\n\n def __init__(self, templatefile, inputstore, personality, encoding=None,\n remove_untranslated=False):\n self.templatefile = templatefile\n self.inputstore = inputstore\n self.personality = properties.get_dialect(personality)\n self.encoding = encoding\n if self.encoding is None:\n self.encoding = self.personality.default_encoding\n self.remove_untranslated = remove_untranslated\n self.mixer = accesskey.UnitMixer(properties.labelsuffixes,\n properties.accesskeysuffixes)\n\n def convertstore(self, includefuzzy=False):\n self.includefuzzy = includefuzzy\n self.inmultilinemsgid = False\n self.inecho = False\n self.inputstore.makeindex()\n if self.personality.name == \"gaia\":\n self._explode_gaia_plurals()\n outputlines = []\n # Readlines doesn't work for UTF-16, we read() and splitlines(keepends) instead\n content = self.templatefile.read().decode(self.encoding)\n for line in content.splitlines(True):\n outputstr = self.convertline(line)\n outputlines.append(outputstr)\n return u\"\".join(outputlines).encode(self.encoding)\n\n def _handle_accesskeys(self, inunit, currkey):\n value = inunit.target\n if self.personality.name == \"mozilla\":\n keys = inunit.getlocations()\n mixedkeys = self.mixer.match_entities(keys)\n for key in keys:\n if key == currkey and key in self.inputstore.locationindex:\n propunit = self.inputstore.locationindex[key] # find the prop\n value = applytranslation(key, propunit, inunit, mixedkeys)\n break\n\n return value\n\n def _explode_gaia_plurals(self):\n \"\"\"Explode the gaia plurals.\"\"\"\n from translate.lang import data\n for unit in self.inputstore.units:\n if not unit.hasplural():\n continue\n if unit.isfuzzy() and not self.includefuzzy or not unit.istranslated():\n continue\n\n names = data.cldr_plural_categories\n location = unit.getlocations()[0]\n for category, text in zip(names, 
unit.target.strings):\n # TODO: for now we assume all forms are present. We need to\n # fill in the rest after mapping things to the proper CLDR names.\n if category == 'zero':\n # [zero] cases are translated as separate units\n continue\n new_unit = self.inputstore.addsourceunit(u\"fish\") # not used\n new_location = '%s[%s]' % (location, category)\n new_unit.addlocation(new_location)\n new_unit.target = text\n self.inputstore.locationindex[new_location] = new_unit\n\n # We don't want the plural marker to be translated:\n del self.inputstore.locationindex[location]\n\n def convertline(self, line):\n returnline = u\"\"\n # handle multiline msgid if we're in one\n if self.inmultilinemsgid:\n msgid = quote.rstripeol(line).strip()\n # see if there's more\n self.inmultilinemsgid = (msgid[-1:] == '\\\\')\n # if we're echoing...\n if self.inecho:\n returnline = line\n # otherwise, this could be a comment\n elif line.strip()[:1] == '#':\n returnline = quote.rstripeol(line) + eol\n else:\n line = quote.rstripeol(line)\n delimiter_char, delimiter_pos = self.personality.find_delimiter(line)\n if quote.rstripeol(line)[-1:] == '\\\\':\n self.inmultilinemsgid = True\n if delimiter_pos == -1:\n key = self.personality.key_strip(line)\n delimiter = \" %s \" % self.personality.delimiters[0]\n else:\n key = self.personality.key_strip(line[:delimiter_pos])\n # Calculate space around the equal sign\n prespace = line[line.find(' ', len(key)):delimiter_pos]\n postspacestart = len(line[delimiter_pos+1:])\n postspaceend = len(line[delimiter_pos+1:].lstrip())\n postspace = line[delimiter_pos+1:delimiter_pos+(postspacestart-postspaceend)+1]\n delimiter = prespace + delimiter_char + postspace\n if key in self.inputstore.locationindex:\n unit = self.inputstore.locationindex[key]\n if unit is None or not unit.istranslated() and bool(unit.source) and self.remove_untranslated:\n returnline = u\"\"\n self.inecho = False\n else:\n if unit.isfuzzy() and not self.includefuzzy or len(unit.target) == 0:\n value = unit.source\n else:\n value = self._handle_accesskeys(unit, key)\n self.inecho = False\n assert isinstance(value, str)\n returnline = \"%(key)s%(del)s%(value)s%(term)s%(eol)s\" % {\n \"key\": \"%s%s%s\" % (self.personality.key_wrap_char,\n key,\n self.personality.key_wrap_char),\n \"del\": delimiter if delimiter_pos != -1 or value else \"\",\n \"value\": \"%s%s%s\" % (self.personality.value_wrap_char,\n self.personality.encode(value),\n self.personality.value_wrap_char),\n \"term\": self.personality.pair_terminator,\n \"eol\": eol,\n }\n else:\n self.inecho = True\n returnline = line + eol\n assert isinstance(returnline, str)\n return returnline\n\n\ndef convertstrings(inputfile, outputfile, templatefile, personality=\"strings\",\n includefuzzy=False, encoding=None, outputthreshold=None,\n remove_untranslated=False):\n \"\"\".strings specific convertor function\"\"\"\n return convertprop(inputfile, outputfile, templatefile,\n personality=\"strings\", includefuzzy=includefuzzy,\n encoding=encoding, outputthreshold=outputthreshold,\n remove_untranslated=remove_untranslated)\n\n\ndef convertmozillaprop(inputfile, outputfile, templatefile,\n includefuzzy=False, remove_untranslated=False,\n outputthreshold=None):\n \"\"\"Mozilla specific convertor function\"\"\"\n return convertprop(inputfile, outputfile, templatefile,\n personality=\"mozilla\", includefuzzy=includefuzzy,\n remove_untranslated=remove_untranslated,\n outputthreshold=outputthreshold)\n\n\ndef convertprop(inputfile, outputfile, templatefile, 
personality=\"java\",\n includefuzzy=False, encoding=None, remove_untranslated=False,\n outputthreshold=None):\n inputstore = po.pofile(inputfile)\n\n if not convert.should_output_store(inputstore, outputthreshold):\n return False\n\n if templatefile is None:\n raise ValueError(\"must have template file for properties files\")\n # convertor = po2prop()\n else:\n convertor = reprop(templatefile, inputstore, personality, encoding,\n remove_untranslated)\n outputprop = convertor.convertstore(includefuzzy)\n outputfile.write(outputprop)\n return True\n\n\nformats = {\n (\"po\", \"properties\"): (\"properties\", convertprop),\n (\"po\", \"lang\"): (\"lang\", convertprop),\n (\"po\", \"strings\"): (\"strings\", convertstrings),\n}\n\n\ndef main(argv=None):\n # handle command line options\n parser = convert.ConvertOptionParser(formats, usetemplates=True,\n description=__doc__)\n parser.add_option(\n \"\", \"--personality\", dest=\"personality\",\n default=properties.default_dialect, type=\"choice\",\n choices=list(properties.dialects),\n help=\"override the input file format: %s (for .properties files, default: %s)\" % (\n \", \".join(properties.dialects), properties.default_dialect),\n metavar=\"TYPE\")\n parser.add_option(\n \"\", \"--encoding\", dest=\"encoding\", default=None,\n help=\"override the encoding set by the personality\",\n metavar=\"ENCODING\")\n parser.add_threshold_option()\n parser.add_fuzzy_option()\n parser.add_remove_untranslated_option()\n parser.passthrough.append(\"personality\")\n parser.passthrough.append(\"encoding\")\n parser.run(argv)\n\n\nif __name__ == '__main__':\n main()\n", "path": "translate/convert/po2prop.py"}]} | 3,324 | 147 |
gh_patches_debug_39694 | rasdani/github-patches | git_diff | NVIDIA__TransformerEngine-313 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exporting transformer_engine.pytorch.LayerNorm to ONNX gives all-zero output
After exporting `transformer_engine.pytorch.LayerNorm` to ONNX, it always gives tensors with the correct shape but filled with zeros.
Example code:
```
import torch
import onnxruntime
from transformer_engine import pytorch as te
model = te.LayerNorm(1000).cuda().eval()
x_sample = torch.randn(3000, 1000)
with torch.inference_mode():
torch_out = model(x_sample.cuda()) # the result is correct
with torch.inference_mode():
with te.onnx_export(True):
torch.onnx.export(model, x_sample.cuda(), "layer_norm.onnx", dynamic_axes={"input": {0: "batch_size"}, "output": {0: "batch_size"}}, opset_version=17, input_names=["input"], output_names=["output"])
ort_session = onnxruntime.InferenceSession("layer_norm.onnx", providers=["CPUExecutionProvider"])
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(x_sample)}
ort_output = ort_session.run(None, ort_inputs)[0]
# ort_output is all zero
```
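Note: the snippet above calls a `to_numpy` helper that the report does not define. A minimal stand-in (an assumption, not part of the original report) would be:

```python
import torch
import numpy as np

def to_numpy(t: torch.Tensor) -> np.ndarray:
    # detach and move to CPU so ONNX Runtime can consume the array
    return t.detach().cpu().numpy()
```

With that helper in place, comparing `torch_out.cpu().numpy()` against `ort_output` (e.g. with `np.allclose`) makes the all-zero output easy to confirm.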
Other PyTorch modules like `transformer_engine.pytorch.LayerNormLinear` give correct results.
OS: RHEL 7
Python: 3.10.11
TransformerEngine: 0.9
Pytorch: 2.0.1+cu118
GPU: 4090
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `transformer_engine/pytorch/module/layernorm.py`
Content:
```
1 # Copyright (c) 2022-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2 #
3 # See LICENSE for license information.
4
5 """LayerNorm API"""
6 import os
7 from typing import Union, Tuple, Any, Mapping, Optional
8
9 import torch
10 from torch.nn.parameter import Parameter
11 from torch.nn import init
12
13 import transformer_engine_extensions as tex
14
15
16 __all__ = ["LayerNorm"]
17
18
19 class _LayerNorm(torch.autograd.Function):
20 """functional LayerNorm"""
21
22 @staticmethod
23 def forward(
24 ctx,
25 inp: torch.Tensor,
26 ln_weight: torch.Tensor,
27 ln_bias: torch.Tensor,
28 eps: float,
29 fwd_ln_sm_margin: int,
30 bwd_ln_sm_margin: int,
31 zero_centered_gamma: bool,
32 ) -> torch.Tensor:
33 # Make sure input dimensions are compatible
34 in_features = ln_weight.numel()
35 assert inp.is_cuda, "TransformerEngine needs CUDA."
36 assert inp.shape[-1] == in_features, "LayerNorm not possible"
37 inputmat = inp.view((-1, in_features))
38
39 ln_out, mu, rsigma = tex.layernorm_fwd(inputmat, ln_weight,
40 ln_bias, eps, fwd_ln_sm_margin,
41 zero_centered_gamma)
42 ctx.save_for_backward(inputmat, ln_weight, mu, rsigma)
43 ctx.inp_shape = inp.shape
44 ctx.bwd_ln_sm_margin = bwd_ln_sm_margin
45 ctx.zero_centered_gamma = zero_centered_gamma
46 return ln_out.view_as(inp)
47
48 @staticmethod
49 def backward(
50 ctx, grad_output: torch.Tensor
51 ) -> Tuple[Union[torch.Tensor, None], ...]:
52 inputmat, ln_weight, mu, rsigma = ctx.saved_tensors
53 grad_output = grad_output.contiguous()
54 d_ln_out = grad_output.view(inputmat.shape)
55 dxmat, dgamma, dbeta = tex.layernorm_bwd(
56 d_ln_out, inputmat, mu, rsigma, ln_weight,
57 ctx.bwd_ln_sm_margin, ctx.zero_centered_gamma
58 )
59 return dxmat.view(ctx.inp_shape), dgamma, dbeta, None, None, None, None
60
61
62 class LayerNorm(torch.nn.Module):
63 r"""
64 Applies Layer Normalization over a mini-batch of inputs as described in
65 the paper `Layer Normalization <https://arxiv.org/abs/1607.06450>`__
66
67 .. math::
68 y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \varepsilon}} * \gamma + \beta
69
70 :math:`\gamma` and :math:`\beta` are learnable affine transform parameters of
71 size :attr:`hidden_size`
72
73 Parameters
74 ----------
75 hidden_size : int
76 size of each input sample.
77 eps : float, default = 1e-5
78 a value added to the denominator of layer normalization for numerical stability.
79 sequence_parallel : bool, default = `False`
80 if set to `True`, uses sequence parallelism.
81 params_dtype : torch.dtype, default = `torch.get_default_dtype()`
82 it controls the type used to allocate the initial parameters. Useful when
83 the model is trained with lower precision and the original FP32 parameters
84 would not fit in GPU memory.
85 zero_centered_gamma : bool, default = 'False'
86 if set to 'True', gamma parameter in LayerNorm is initialized to 0 and
87 the LayerNorm formula changes to
88
89 .. math::
90 y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \varepsilon}} *
91 (1 + \gamma) + \beta
92 """
93
94 def __init__(
95 self,
96 hidden_size: int,
97 eps: float = 1e-5,
98 sequence_parallel: bool = False,
99 params_dtype: Optional[torch.dtype] = None,
100 zero_centered_gamma: bool = False,
101 ) -> None:
102 super().__init__()
103 params_dtype = torch.get_default_dtype() if params_dtype is None else params_dtype
104 self.eps = eps
105 self.zero_centered_gamma = zero_centered_gamma
106 self.weight = Parameter(
107 torch.empty(
108 hidden_size,
109 device=torch.cuda.current_device(),
110 dtype=params_dtype,
111 )
112 )
113 self.bias = Parameter(
114 torch.empty(
115 hidden_size,
116 device=torch.cuda.current_device(),
117 dtype=params_dtype,
118 )
119 )
120 setattr(self.weight, "sequence_parallel", sequence_parallel)
121 setattr(self.bias, "sequence_parallel", sequence_parallel)
122 self.reset_layer_norm_parameters()
123
124 # These many SMs are subtracted from the total SM count when calling forward
125 # and backward LayerNorm C APIs. These envvars can be used to prevent the LN
126 # kernels from using all SMs in the device. This is useful for cases such as
127 # communication overlap with LN.
128 self.fwd_ln_sm_margin = int(os.getenv("NVTE_FWD_LAYERNORM_SM_MARGIN", "0"))
129 self.bwd_ln_sm_margin = int(os.getenv("NVTE_BWD_LAYERNORM_SM_MARGIN", "0"))
130
131 def load_state_dict(
132 self,
133 state_dict: Mapping[str, Any],
134 strict: bool = True,
135 ) -> None:
136 """Override PyTorch loader to maintain backward compatibility
137 with previous version of LayerNorm parameter names.
138 """
139 if "layer_norm_weight" in state_dict:
140 state_dict["weight"] = state_dict["layer_norm_weight"]
141 del state_dict["layer_norm_weight"]
142 if "layer_norm_bias" in state_dict:
143 state_dict["bias"] = state_dict["layer_norm_bias"]
144 del state_dict["layer_norm_bias"]
145
146 super().load_state_dict(state_dict, strict)
147
148 def reset_layer_norm_parameters(self) -> None:
149 """Init LN params"""
150 if not self.zero_centered_gamma:
151 init.ones_(self.weight)
152 else:
153 init.zeros_(self.weight)
154 init.zeros_(self.bias)
155
156
157 def forward(self, inp: torch.Tensor) -> torch.Tensor:
158 """LayerNorm FWD"""
159 # Maintain backward compatibility.
160 if hasattr(self, "layer_norm_weight"):
161 setattr(self, "weight", self.layer_norm_weight)
162 if hasattr(self, "layer_norm_bias"):
163 setattr(self, "bias", self.layer_norm_bias)
164
165 return _LayerNorm.apply(
166 inp,
167 self.weight,
168 self.bias,
169 self.eps,
170 self.fwd_ln_sm_margin,
171 self.bwd_ln_sm_margin,
172 self.zero_centered_gamma
173 )
174
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/transformer_engine/pytorch/module/layernorm.py b/transformer_engine/pytorch/module/layernorm.py
--- a/transformer_engine/pytorch/module/layernorm.py
+++ b/transformer_engine/pytorch/module/layernorm.py
@@ -11,7 +11,9 @@
from torch.nn import init
import transformer_engine_extensions as tex
-
+from ..cpp_extensions import (
+ layernorm_fwd_inf,
+ )
__all__ = ["LayerNorm"]
@@ -29,6 +31,7 @@
fwd_ln_sm_margin: int,
bwd_ln_sm_margin: int,
zero_centered_gamma: bool,
+ is_grad_enabled: bool,
) -> torch.Tensor:
# Make sure input dimensions are compatible
in_features = ln_weight.numel()
@@ -36,13 +39,16 @@
assert inp.shape[-1] == in_features, "LayerNorm not possible"
inputmat = inp.view((-1, in_features))
- ln_out, mu, rsigma = tex.layernorm_fwd(inputmat, ln_weight,
- ln_bias, eps, fwd_ln_sm_margin,
- zero_centered_gamma)
- ctx.save_for_backward(inputmat, ln_weight, mu, rsigma)
- ctx.inp_shape = inp.shape
- ctx.bwd_ln_sm_margin = bwd_ln_sm_margin
- ctx.zero_centered_gamma = zero_centered_gamma
+ if is_grad_enabled:
+ ln_out, mu, rsigma = tex.layernorm_fwd(inputmat, ln_weight,
+ ln_bias, eps, fwd_ln_sm_margin, zero_centered_gamma)
+ ctx.save_for_backward(inputmat, ln_weight, mu, rsigma)
+ ctx.inp_shape = inp.shape
+ ctx.bwd_ln_sm_margin = bwd_ln_sm_margin
+ ctx.zero_centered_gamma = zero_centered_gamma
+ else:
+ ln_out, mu, rsigma = layernorm_fwd_inf(inputmat, ln_weight,
+ ln_bias, eps, zero_centered_gamma), None, None
return ln_out.view_as(inp)
@staticmethod
@@ -56,7 +62,7 @@
d_ln_out, inputmat, mu, rsigma, ln_weight,
ctx.bwd_ln_sm_margin, ctx.zero_centered_gamma
)
- return dxmat.view(ctx.inp_shape), dgamma, dbeta, None, None, None, None
+ return dxmat.view(ctx.inp_shape), dgamma, dbeta, None, None, None, None, None
class LayerNorm(torch.nn.Module):
@@ -162,12 +168,22 @@
if hasattr(self, "layer_norm_bias"):
setattr(self, "bias", self.layer_norm_bias)
- return _LayerNorm.apply(
+ if torch.is_grad_enabled():
+ fwd_fn = _LayerNorm.apply
+ args = []
+ else:
+ fwd_fn = _LayerNorm.forward
+ args = [None]
+
+ args += (
inp,
self.weight,
self.bias,
self.eps,
self.fwd_ln_sm_margin,
self.bwd_ln_sm_margin,
- self.zero_centered_gamma
+ self.zero_centered_gamma,
+ torch.is_grad_enabled()
)
+
+ return fwd_fn(*args)
| {"golden_diff": "diff --git a/transformer_engine/pytorch/module/layernorm.py b/transformer_engine/pytorch/module/layernorm.py\n--- a/transformer_engine/pytorch/module/layernorm.py\n+++ b/transformer_engine/pytorch/module/layernorm.py\n@@ -11,7 +11,9 @@\n from torch.nn import init\n \n import transformer_engine_extensions as tex\n-\n+from ..cpp_extensions import (\n+ layernorm_fwd_inf,\n+ )\n \n __all__ = [\"LayerNorm\"]\n \n@@ -29,6 +31,7 @@\n fwd_ln_sm_margin: int,\n bwd_ln_sm_margin: int,\n zero_centered_gamma: bool,\n+ is_grad_enabled: bool,\n ) -> torch.Tensor:\n # Make sure input dimensions are compatible\n in_features = ln_weight.numel()\n@@ -36,13 +39,16 @@\n assert inp.shape[-1] == in_features, \"LayerNorm not possible\"\n inputmat = inp.view((-1, in_features))\n \n- ln_out, mu, rsigma = tex.layernorm_fwd(inputmat, ln_weight,\n- ln_bias, eps, fwd_ln_sm_margin,\n- zero_centered_gamma)\n- ctx.save_for_backward(inputmat, ln_weight, mu, rsigma)\n- ctx.inp_shape = inp.shape\n- ctx.bwd_ln_sm_margin = bwd_ln_sm_margin\n- ctx.zero_centered_gamma = zero_centered_gamma\n+ if is_grad_enabled:\n+ ln_out, mu, rsigma = tex.layernorm_fwd(inputmat, ln_weight,\n+ ln_bias, eps, fwd_ln_sm_margin, zero_centered_gamma)\n+ ctx.save_for_backward(inputmat, ln_weight, mu, rsigma)\n+ ctx.inp_shape = inp.shape\n+ ctx.bwd_ln_sm_margin = bwd_ln_sm_margin\n+ ctx.zero_centered_gamma = zero_centered_gamma\n+ else:\n+ ln_out, mu, rsigma = layernorm_fwd_inf(inputmat, ln_weight,\n+ ln_bias, eps, zero_centered_gamma), None, None\n return ln_out.view_as(inp)\n \n @staticmethod\n@@ -56,7 +62,7 @@\n d_ln_out, inputmat, mu, rsigma, ln_weight,\n ctx.bwd_ln_sm_margin, ctx.zero_centered_gamma\n )\n- return dxmat.view(ctx.inp_shape), dgamma, dbeta, None, None, None, None\n+ return dxmat.view(ctx.inp_shape), dgamma, dbeta, None, None, None, None, None\n \n \n class LayerNorm(torch.nn.Module):\n@@ -162,12 +168,22 @@\n if hasattr(self, \"layer_norm_bias\"):\n setattr(self, \"bias\", self.layer_norm_bias)\n \n- return _LayerNorm.apply(\n+ if torch.is_grad_enabled():\n+ fwd_fn = _LayerNorm.apply\n+ args = []\n+ else:\n+ fwd_fn = _LayerNorm.forward\n+ args = [None]\n+\n+ args += (\n inp,\n self.weight,\n self.bias,\n self.eps,\n self.fwd_ln_sm_margin,\n self.bwd_ln_sm_margin,\n- self.zero_centered_gamma\n+ self.zero_centered_gamma,\n+ torch.is_grad_enabled()\n )\n+\n+ return fwd_fn(*args)\n", "issue": "Export transformer_engine.pytorch.LayerNorm to ONNX gives all zero output\nAfter exporting `transformer_engine.pytorch.LayerNorm` to ONNX, it always give tensors with correct shape but filled with 0.\r\n\r\nExample code:\r\n```\r\nimport torch\r\nimport onnxruntime\r\nfrom transformer_engine import pytorch as te\r\n\r\nmodel = te.LayerNorm(1000).cuda().eval()\r\nx_sample = torch.randn(3000, 1000)\r\nwith torch.inference_mode():\r\n torch_out = model(x_sample.cuda()) # the result is correct\r\n\r\nwith torch.inference_mode():\r\n with te.onnx_export(True):\r\n torch.onnx.export(model, x_sample.cuda(), \"layer_norm.onnx\", dynamic_axes={\"input\": {0: \"batch_size\"}, \"output\": {0: \"batch_size\"}}, opset_version=17, input_names=[\"input\"], output_names=[\"output\"])\r\n\r\nort_session = onnxruntime.InferenceSession(\"layer_norm.onnx\", providers=[\"CPUExecutionProvider\"])\r\nort_inputs = {ort_session.get_inputs()[0].name: to_numpy(x_sample)}\r\nort_output = ort_session.run(None, ort_inputs)[0]\r\n# ort_output is all zero\r\n```\r\n\r\nOther pytorch modules like `transformer_engine.pytorch.LayerNormLinear` gives correct 
results.\r\n\r\n\r\nOS: RHEL 7\r\nPython: 3.10.11\r\nTransformerEngine: 0.9\r\nPytorch: 2.0.1+cu118\r\nGPU: 4090\n", "before_files": [{"content": "# Copyright (c) 2022-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.\n#\n# See LICENSE for license information.\n\n\"\"\"LayerNorm API\"\"\"\nimport os\nfrom typing import Union, Tuple, Any, Mapping, Optional\n\nimport torch\nfrom torch.nn.parameter import Parameter\nfrom torch.nn import init\n\nimport transformer_engine_extensions as tex\n\n\n__all__ = [\"LayerNorm\"]\n\n\nclass _LayerNorm(torch.autograd.Function):\n \"\"\"functional LayerNorm\"\"\"\n\n @staticmethod\n def forward(\n ctx,\n inp: torch.Tensor,\n ln_weight: torch.Tensor,\n ln_bias: torch.Tensor,\n eps: float,\n fwd_ln_sm_margin: int,\n bwd_ln_sm_margin: int,\n zero_centered_gamma: bool,\n ) -> torch.Tensor:\n # Make sure input dimensions are compatible\n in_features = ln_weight.numel()\n assert inp.is_cuda, \"TransformerEngine needs CUDA.\"\n assert inp.shape[-1] == in_features, \"LayerNorm not possible\"\n inputmat = inp.view((-1, in_features))\n\n ln_out, mu, rsigma = tex.layernorm_fwd(inputmat, ln_weight,\n ln_bias, eps, fwd_ln_sm_margin,\n zero_centered_gamma)\n ctx.save_for_backward(inputmat, ln_weight, mu, rsigma)\n ctx.inp_shape = inp.shape\n ctx.bwd_ln_sm_margin = bwd_ln_sm_margin\n ctx.zero_centered_gamma = zero_centered_gamma\n return ln_out.view_as(inp)\n\n @staticmethod\n def backward(\n ctx, grad_output: torch.Tensor\n ) -> Tuple[Union[torch.Tensor, None], ...]:\n inputmat, ln_weight, mu, rsigma = ctx.saved_tensors\n grad_output = grad_output.contiguous()\n d_ln_out = grad_output.view(inputmat.shape)\n dxmat, dgamma, dbeta = tex.layernorm_bwd(\n d_ln_out, inputmat, mu, rsigma, ln_weight,\n ctx.bwd_ln_sm_margin, ctx.zero_centered_gamma\n )\n return dxmat.view(ctx.inp_shape), dgamma, dbeta, None, None, None, None\n\n\nclass LayerNorm(torch.nn.Module):\n r\"\"\"\n Applies Layer Normalization over a mini-batch of inputs as described in\n the paper `Layer Normalization <https://arxiv.org/abs/1607.06450>`__\n\n .. math::\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\varepsilon}} * \\gamma + \\beta\n\n :math:`\\gamma` and :math:`\\beta` are learnable affine transform parameters of\n size :attr:`hidden_size`\n\n Parameters\n ----------\n hidden_size : int\n size of each input sample.\n eps : float, default = 1e-5\n a value added to the denominator of layer normalization for numerical stability.\n sequence_parallel : bool, default = `False`\n if set to `True`, uses sequence parallelism.\n params_dtype : torch.dtype, default = `torch.get_default_dtype()`\n it controls the type used to allocate the initial parameters. Useful when\n the model is trained with lower precision and the original FP32 parameters\n would not fit in GPU memory.\n zero_centered_gamma : bool, default = 'False'\n if set to 'True', gamma parameter in LayerNorm is initialized to 0 and\n the LayerNorm formula changes to\n\n .. 
math::\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\varepsilon}} *\n (1 + \\gamma) + \\beta\n \"\"\"\n\n def __init__(\n self,\n hidden_size: int,\n eps: float = 1e-5,\n sequence_parallel: bool = False,\n params_dtype: Optional[torch.dtype] = None,\n zero_centered_gamma: bool = False,\n ) -> None:\n super().__init__()\n params_dtype = torch.get_default_dtype() if params_dtype is None else params_dtype\n self.eps = eps\n self.zero_centered_gamma = zero_centered_gamma\n self.weight = Parameter(\n torch.empty(\n hidden_size,\n device=torch.cuda.current_device(),\n dtype=params_dtype,\n )\n )\n self.bias = Parameter(\n torch.empty(\n hidden_size,\n device=torch.cuda.current_device(),\n dtype=params_dtype,\n )\n )\n setattr(self.weight, \"sequence_parallel\", sequence_parallel)\n setattr(self.bias, \"sequence_parallel\", sequence_parallel)\n self.reset_layer_norm_parameters()\n\n # These many SMs are subtracted from the total SM count when calling forward\n # and backward LayerNorm C APIs. These envvars can be used to prevent the LN\n # kernels from using all SMs in the device. This is useful for cases such as\n # communication overlap with LN.\n self.fwd_ln_sm_margin = int(os.getenv(\"NVTE_FWD_LAYERNORM_SM_MARGIN\", \"0\"))\n self.bwd_ln_sm_margin = int(os.getenv(\"NVTE_BWD_LAYERNORM_SM_MARGIN\", \"0\"))\n\n def load_state_dict(\n self,\n state_dict: Mapping[str, Any],\n strict: bool = True,\n ) -> None:\n \"\"\"Override PyTorch loader to maintain backward compatibility\n with previous version of LayerNorm parameter names.\n \"\"\"\n if \"layer_norm_weight\" in state_dict:\n state_dict[\"weight\"] = state_dict[\"layer_norm_weight\"]\n del state_dict[\"layer_norm_weight\"]\n if \"layer_norm_bias\" in state_dict:\n state_dict[\"bias\"] = state_dict[\"layer_norm_bias\"]\n del state_dict[\"layer_norm_bias\"]\n\n super().load_state_dict(state_dict, strict)\n\n def reset_layer_norm_parameters(self) -> None:\n \"\"\"Init LN params\"\"\"\n if not self.zero_centered_gamma:\n init.ones_(self.weight)\n else:\n init.zeros_(self.weight)\n init.zeros_(self.bias)\n\n\n def forward(self, inp: torch.Tensor) -> torch.Tensor:\n \"\"\"LayerNorm FWD\"\"\"\n # Maintain backward compatibility.\n if hasattr(self, \"layer_norm_weight\"):\n setattr(self, \"weight\", self.layer_norm_weight)\n if hasattr(self, \"layer_norm_bias\"):\n setattr(self, \"bias\", self.layer_norm_bias)\n\n return _LayerNorm.apply(\n inp,\n self.weight,\n self.bias,\n self.eps,\n self.fwd_ln_sm_margin,\n self.bwd_ln_sm_margin,\n self.zero_centered_gamma\n )\n", "path": "transformer_engine/pytorch/module/layernorm.py"}], "after_files": [{"content": "# Copyright (c) 2022-2023, NVIDIA CORPORATION & AFFILIATES. 
All rights reserved.\n#\n# See LICENSE for license information.\n\n\"\"\"LayerNorm API\"\"\"\nimport os\nfrom typing import Union, Tuple, Any, Mapping, Optional\n\nimport torch\nfrom torch.nn.parameter import Parameter\nfrom torch.nn import init\n\nimport transformer_engine_extensions as tex\nfrom ..cpp_extensions import (\n layernorm_fwd_inf,\n )\n\n__all__ = [\"LayerNorm\"]\n\n\nclass _LayerNorm(torch.autograd.Function):\n \"\"\"functional LayerNorm\"\"\"\n\n @staticmethod\n def forward(\n ctx,\n inp: torch.Tensor,\n ln_weight: torch.Tensor,\n ln_bias: torch.Tensor,\n eps: float,\n fwd_ln_sm_margin: int,\n bwd_ln_sm_margin: int,\n zero_centered_gamma: bool,\n is_grad_enabled: bool,\n ) -> torch.Tensor:\n # Make sure input dimensions are compatible\n in_features = ln_weight.numel()\n assert inp.is_cuda, \"TransformerEngine needs CUDA.\"\n assert inp.shape[-1] == in_features, \"LayerNorm not possible\"\n inputmat = inp.view((-1, in_features))\n\n if is_grad_enabled:\n ln_out, mu, rsigma = tex.layernorm_fwd(inputmat, ln_weight,\n ln_bias, eps, fwd_ln_sm_margin, zero_centered_gamma)\n ctx.save_for_backward(inputmat, ln_weight, mu, rsigma)\n ctx.inp_shape = inp.shape\n ctx.bwd_ln_sm_margin = bwd_ln_sm_margin\n ctx.zero_centered_gamma = zero_centered_gamma\n else:\n ln_out, mu, rsigma = layernorm_fwd_inf(inputmat, ln_weight,\n ln_bias, eps, zero_centered_gamma), None, None\n return ln_out.view_as(inp)\n\n @staticmethod\n def backward(\n ctx, grad_output: torch.Tensor\n ) -> Tuple[Union[torch.Tensor, None], ...]:\n inputmat, ln_weight, mu, rsigma = ctx.saved_tensors\n grad_output = grad_output.contiguous()\n d_ln_out = grad_output.view(inputmat.shape)\n dxmat, dgamma, dbeta = tex.layernorm_bwd(\n d_ln_out, inputmat, mu, rsigma, ln_weight,\n ctx.bwd_ln_sm_margin, ctx.zero_centered_gamma\n )\n return dxmat.view(ctx.inp_shape), dgamma, dbeta, None, None, None, None, None\n\n\nclass LayerNorm(torch.nn.Module):\n r\"\"\"\n Applies Layer Normalization over a mini-batch of inputs as described in\n the paper `Layer Normalization <https://arxiv.org/abs/1607.06450>`__\n\n .. math::\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\varepsilon}} * \\gamma + \\beta\n\n :math:`\\gamma` and :math:`\\beta` are learnable affine transform parameters of\n size :attr:`hidden_size`\n\n Parameters\n ----------\n hidden_size : int\n size of each input sample.\n eps : float, default = 1e-5\n a value added to the denominator of layer normalization for numerical stability.\n sequence_parallel : bool, default = `False`\n if set to `True`, uses sequence parallelism.\n params_dtype : torch.dtype, default = `torch.get_default_dtype()`\n it controls the type used to allocate the initial parameters. Useful when\n the model is trained with lower precision and the original FP32 parameters\n would not fit in GPU memory.\n zero_centered_gamma : bool, default = 'False'\n if set to 'True', gamma parameter in LayerNorm is initialized to 0 and\n the LayerNorm formula changes to\n\n .. 
math::\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\varepsilon}} *\n (1 + \\gamma) + \\beta\n \"\"\"\n\n def __init__(\n self,\n hidden_size: int,\n eps: float = 1e-5,\n sequence_parallel: bool = False,\n params_dtype: Optional[torch.dtype] = None,\n zero_centered_gamma: bool = False,\n ) -> None:\n super().__init__()\n params_dtype = torch.get_default_dtype() if params_dtype is None else params_dtype\n self.eps = eps\n self.zero_centered_gamma = zero_centered_gamma\n self.weight = Parameter(\n torch.empty(\n hidden_size,\n device=torch.cuda.current_device(),\n dtype=params_dtype,\n )\n )\n self.bias = Parameter(\n torch.empty(\n hidden_size,\n device=torch.cuda.current_device(),\n dtype=params_dtype,\n )\n )\n setattr(self.weight, \"sequence_parallel\", sequence_parallel)\n setattr(self.bias, \"sequence_parallel\", sequence_parallel)\n self.reset_layer_norm_parameters()\n\n # These many SMs are subtracted from the total SM count when calling forward\n # and backward LayerNorm C APIs. These envvars can be used to prevent the LN\n # kernels from using all SMs in the device. This is useful for cases such as\n # communication overlap with LN.\n self.fwd_ln_sm_margin = int(os.getenv(\"NVTE_FWD_LAYERNORM_SM_MARGIN\", \"0\"))\n self.bwd_ln_sm_margin = int(os.getenv(\"NVTE_BWD_LAYERNORM_SM_MARGIN\", \"0\"))\n\n def load_state_dict(\n self,\n state_dict: Mapping[str, Any],\n strict: bool = True,\n ) -> None:\n \"\"\"Override PyTorch loader to maintain backward compatibility\n with previous version of LayerNorm parameter names.\n \"\"\"\n if \"layer_norm_weight\" in state_dict:\n state_dict[\"weight\"] = state_dict[\"layer_norm_weight\"]\n del state_dict[\"layer_norm_weight\"]\n if \"layer_norm_bias\" in state_dict:\n state_dict[\"bias\"] = state_dict[\"layer_norm_bias\"]\n del state_dict[\"layer_norm_bias\"]\n\n super().load_state_dict(state_dict, strict)\n\n def reset_layer_norm_parameters(self) -> None:\n \"\"\"Init LN params\"\"\"\n if not self.zero_centered_gamma:\n init.ones_(self.weight)\n else:\n init.zeros_(self.weight)\n init.zeros_(self.bias)\n\n\n def forward(self, inp: torch.Tensor) -> torch.Tensor:\n \"\"\"LayerNorm FWD\"\"\"\n # Maintain backward compatibility.\n if hasattr(self, \"layer_norm_weight\"):\n setattr(self, \"weight\", self.layer_norm_weight)\n if hasattr(self, \"layer_norm_bias\"):\n setattr(self, \"bias\", self.layer_norm_bias)\n\n if torch.is_grad_enabled():\n fwd_fn = _LayerNorm.apply\n args = []\n else:\n fwd_fn = _LayerNorm.forward\n args = [None]\n\n args += (\n inp,\n self.weight,\n self.bias,\n self.eps,\n self.fwd_ln_sm_margin,\n self.bwd_ln_sm_margin,\n self.zero_centered_gamma,\n torch.is_grad_enabled()\n )\n\n return fwd_fn(*args)\n", "path": "transformer_engine/pytorch/module/layernorm.py"}]} | 2,436 | 743 |