exorcist123 committed
Commit f4634b9 · 1 Parent(s): 77b3268

add crowd counting demo

.gitignore ADDED
@@ -0,0 +1,3 @@
+ crowd-counting-env/
+ testing.ipynb
+ __pycache__/
README.md CHANGED
@@ -1,13 +1,2 @@
- ---
- title: Video Crowd Counting
- emoji: 🚀
- colorFrom: pink
- colorTo: red
- sdk: gradio
- sdk_version: 4.21.0
- app_file: app.py
- pinned: false
- license: apache-2.0
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # crowd-counting-demo
+
app.py ADDED
@@ -0,0 +1,25 @@
+ import gradio as gr
+ from crowd_counter import CrowdCounter
+ from PIL import Image
+ import cv2
+
+ crowd_counter_engine = CrowdCounter()
+
+ def predict(inp):
+     inp = Image.fromarray(inp.astype('uint8'), 'RGB')
+     response = crowd_counter_engine.inference(inp)
+     crowd_count = response[0]
+     pred_img = response[1]
+     return cv2.cvtColor(pred_img, cv2.COLOR_BGR2RGB), crowd_count
+
+ title = "Crowd Counter Demo"
+ desc = "A Demo of Proposal Point Prediction for Crowd Counting - Powered by P2PNet"
+ examples = [
+     ["images/img-1.jpg"],
+     ["images/img-2.jpg"],
+     ["images/img-3.jpg"],
+ ]
+ inputs = gr.inputs.Image(label="Image of Crowd")
+ outputs = [gr.outputs.Image(label="Proposal Points Prediction", type="numpy"), gr.outputs.Label(label="Predicted Count")]
+ gr.Interface(fn=predict, inputs=inputs, outputs=outputs, title=title, description=desc, examples=examples,
+              allow_flagging=False).launch()
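
The app wires the P2PNet-based CrowdCounter engine into a legacy-API Gradio Interface: predict() converts the incoming numpy frame to a PIL image, runs inference, and returns the annotated prediction (converted from OpenCV's BGR back to RGB) together with the count. As a minimal sketch, not part of this commit, the pipeline can be smoke-tested without starting the UI; this assumes CrowdCounter.inference(pil_image) returns a (count, annotated_bgr_image) pair, as the code above implies, and uses one of the bundled example images:

```python
# Hypothetical smoke test -- not part of this commit. It mirrors predict()
# above rather than importing app.py, whose top-level launch() would start
# the Gradio server on import. Assumes CrowdCounter.inference(pil_image)
# returns a (count, annotated_bgr_image) pair, as the app code implies.
import cv2
import numpy as np
from PIL import Image

from crowd_counter import CrowdCounter

engine = CrowdCounter()
frame = np.asarray(Image.open("images/img-1.jpg").convert("RGB"))
count, pred_bgr = engine.inference(Image.fromarray(frame, "RGB"))
annotated_rgb = cv2.cvtColor(pred_bgr, cv2.COLOR_BGR2RGB)  # engine draws in BGR
print(f"predicted count: {count}; annotated shape: {annotated_rgb.shape}")
```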
crowd_counter/LICENSE ADDED
@@ -0,0 +1,1401 @@
+ Tencent is pleased to support the open source community by making P2PNet available.
+
+ Copyright (C) 2021 THL A29 Limited, a Tencent company. All rights reserved. The below software in this distribution may have been modified by THL A29 Limited ("Tencent Modifications"). All Tencent Modifications are Copyright (C) THL A29 Limited.
+
+ P2PNet is licensed under the following License, except for the third-party components listed below.
+
+ License for P2PNet:
+ --------------------------------------------------------------------
+ Redistribution and use in source and binary forms, with or without modification, are permitted provided
+ that the following conditions are met:
+
+ 1. Use in source and binary forms shall only be for the purpose of academic research.
+
+ 2. Redistributions of source code must retain the above copyright notice, this list of conditions and the
+ following disclaimer.
+
+ 3. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and
+ the following disclaimer in the documentation and/or other materials provided with the distribution.
+
+ 4. Neither the name of the copyright holder nor the names of its contributors may be used to endorse
+ or promote products derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+ FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+ OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+ Other dependencies and licenses:
+
+
+ Open Source Software Licensed under the BSD 3-Clause License:
+ --------------------------------------------------------------------
+ 1. Pytorch
+ Copyright (c) 2016- Facebook, Inc (Adam Paszke)
+ Copyright (c) 2014- Facebook, Inc (Soumith Chintala)
+ Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
+ Copyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu)
+ Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
+ Copyright (c) 2011-2013 NYU (Clement Farabet)
+ Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
+ Copyright (c) 2006 Idiap Research Institute (Samy Bengio)
+ Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)
+
+ 2. torchvision
+ Copyright (c) Soumith Chintala 2016,
+ All rights reserved.
+
+ 3. opencv-python
+ Copyright (C) 2000-2020, Intel Corporation, all rights reserved.
+ Copyright (C) 2009-2011, Willow Garage Inc., all rights reserved.
+ Copyright (C) 2009-2016, NVIDIA Corporation, all rights reserved.
+ Copyright (C) 2010-2013, Advanced Micro Devices, Inc., all rights reserved.
+ Copyright (C) 2015-2016, OpenCV Foundation, all rights reserved.
+ Copyright (C) 2015-2016, Itseez Inc., all rights reserved.
+ Copyright (C) 2019-2020, Xperience AI, all rights reserved.
+ Third party copyrights are property of their respective owners.
+
+
+ Terms of the BSD 3-Clause License:
+ --------------------------------------------------------------------
+ Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+
+ Open Source Software Licensed under the PIL Software License:
+ --------------------------------------------------------------------
+ 1. Pillow
+ Copyright © 2010-2020 by Alex Clark and contributors
+
+
+ Terms of the PIL Software License:
+ --------------------------------------------------------------------
+ By obtaining, using, and/or copying this software and/or its associated documentation, you agree that you have read, understood, and will comply with the following terms and conditions:
+
+ Permission to use, copy, modify, and distribute this software and its associated documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appears in all copies, and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of Secret Labs AB or the author not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission.
+
+ SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+
+
+ Open Source Software Licensed under the BSD 3-Clause License and Other Licenses of the Third-Party Components therein:
+ --------------------------------------------------------------------
+ 1. numpy
+ Copyright (c) 2005-2020, NumPy Developers.
+ All rights reserved.
+
+
+ A copy of the BSD 3-Clause License is included in this file.
+
+ The NumPy repository and source distributions bundle several libraries that are
+ compatibly licensed. We list these here.
+
+ Name: Numpydoc
+ Files: doc/sphinxext/numpydoc/*
+ License: BSD-2-Clause
+
+ Name: scipy-sphinx-theme
+ Files: doc/scipy-sphinx-theme/*
+ License: BSD-3-Clause AND PSF-2.0 AND Apache-2.0
+
+ Name: lapack-lite
+ Files: numpy/linalg/lapack_lite/*
+ License: BSD-3-Clause
+
+ Name: tempita
+ Files: tools/npy_tempita/*
+ License: MIT
+
+ Name: dragon4
+ Files: numpy/core/src/multiarray/dragon4.c
+ License: MIT
+
+
+
+ Open Source Software Licensed under the BSD 3-Clause License and Other Licenses of the Third-Party Components therein:
+ --------------------------------------------------------------------
+ 1. h5py
+ Copyright (c) 2008 Andrew Collette and contributors
+ All rights reserved.
+
+
+ A copy of the BSD 3-Clause License is included in this file.
+
+ For thirdparty hdf5:
+ HDF5 (Hierarchical Data Format 5) Software Library and Utilities
+ Copyright 2006-2007 by The HDF Group (THG).
+
+ NCSA HDF5 (Hierarchical Data Format 5) Software Library and Utilities
+ Copyright 1998-2006 by the Board of Trustees of the University of Illinois.
+
+ All rights reserved.
+
+ Contributors: National Center for Supercomputing Applications (NCSA)
+ at the University of Illinois, Fortner Software, Unidata Program
+ Center (netCDF), The Independent JPEG Group (JPEG), Jean-loup Gailly
+ and Mark Adler (gzip), and Digital Equipment Corporation (DEC).
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted for any purpose (including commercial
+ purposes) provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions, and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above
+ copyright notice, this list of conditions, and the following
+ disclaimer in the documentation and/or materials provided with the
+ distribution.
+ 3. In addition, redistributions of modified forms of the source or
+ binary code must carry prominent notices stating that the original
+ code was changed and the date of the change.
+ 4. All publications or advertising materials mentioning features or
+ use of this software are asked, but not required, to acknowledge that
+ it was developed by The HDF Group and by the National Center for
+ Supercomputing Applications at the University of Illinois at
+ Urbana-Champaign and credit the contributors.
+ 5. Neither the name of The HDF Group, the name of the University,
+ nor the name of any Contributor may be used to endorse or promote
+ products derived from this software without specific prior written
+ permission from THG, the University, or the Contributor, respectively.
+
+ DISCLAIMER: THIS SOFTWARE IS PROVIDED BY THE HDF GROUP (THG) AND THE
+ CONTRIBUTORS "AS IS" WITH NO WARRANTY OF ANY KIND, EITHER EXPRESSED OR
+ IMPLIED. In no event shall THG or the Contributors be liable for any
+ damages suffered by the users arising out of the use of this software,
+ even if advised of the possibility of such damage.
+
+ Portions of HDF5 were developed with support from the University of
+ California, Lawrence Livermore National Laboratory (UC LLNL). The
+ following statement applies to those portions of the product and must
+ be retained in any redistribution of source code, binaries,
+ documentation, and/or accompanying materials:
+
+ This work was partially produced at the University of California,
+ Lawrence Livermore National Laboratory (UC LLNL) under contract
+ no. W-7405-ENG-48 (Contract 48) between the U.S. Department of Energy
+ (DOE) and The Regents of the University of California (University) for
+ the operation of UC LLNL.
+
+ DISCLAIMER: This work was prepared as an account of work sponsored by
+ an agency of the United States Government. Neither the United States
+ Government nor the University of California nor any of their
+ employees, makes any warranty, express or implied, or assumes any
+ liability or responsibility for the accuracy, completeness, or
+ usefulness of any information, apparatus, product, or process
+ disclosed, or represents that its use would not infringe privately-
+ owned rights. Reference herein to any specific commercial products,
+ process, or service by trade name, trademark, manufacturer, or
+ otherwise, does not necessarily constitute or imply its endorsement,
+ recommendation, or favoring by the United States Government or the
+ University of California. The views and opinions of authors expressed
+ herein do not necessarily state or reflect those of the United States
+ Government or the University of California, and shall not be used for
+ advertising or product endorsement purposes.
+
+ For third-party pytables:
+ Copyright Notice and Statement for PyTables Software Library and Utilities:
+
+ Copyright (c) 2002, 2003, 2004 Francesc Altet
+ Copyright (c) 2005, 2006, 2007 Carabos Coop. V.
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+
+ a. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+
+ b. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the
+ distribution.
+
+ c. Neither the name of the Carabos Coop. V. nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ For third-party python:
+ Python license
+ ==============
+
+ #. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"), and
+ the Individual or Organization ("Licensee") accessing and otherwise using Python
+ Python 2.7.5 software in source or binary form and its associated documentation.
+
+ #. Subject to the terms and conditions of this License Agreement, PSF hereby
+ grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
+ analyze, test, perform and/or display publicly, prepare derivative works,
+ distribute, and otherwise use Python Python 2.7.5 alone or in any derivative
+ version, provided, however, that PSF's License Agreement and PSF's notice of
+ copyright, i.e., "Copyright 2001-2013 Python Software Foundation; All Rights
+ Reserved" are retained in Python Python 2.7.5 alone or in any derivative version
+ prepared by Licensee.
+
+ #. In the event Licensee prepares a derivative work that is based on or
+ incorporates Python Python 2.7.5 or any part thereof, and wants to make the
+ derivative work available to others as provided herein, then Licensee hereby
+ agrees to include in any such work a brief summary of the changes made to Python
+ Python 2.7.5.
+
+ #. PSF is making Python Python 2.7.5 available to Licensee on an "AS IS" basis.
+ PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF
+ EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR
+ WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE
+ USE OF PYTHON Python 2.7.5 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
+
+ #. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON Python 2.7.5
+ FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF
+ MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON Python 2.7.5, OR ANY DERIVATIVE
+ THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
+
+ #. This License Agreement will automatically terminate upon a material breach of
+ its terms and conditions.
+
+ #. Nothing in this License Agreement shall be deemed to create any relationship
+ of agency, partnership, or joint venture between PSF and Licensee. This License
+ Agreement does not grant permission to use PSF trademarks or trade name in a
+ trademark sense to endorse or promote products or services of Licensee, or any
+ third party.
+
+ #. By copying, installing or otherwise using Python Python 2.7.5, Licensee agrees
+ to be bound by the terms and conditions of this License Agreement.
+
+ For third-party stdint:
+ Copyright (c) 2006-2008 Alexander Chemeris
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+ this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+
+ 3. The name of the author may be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+
+ Open Source Software Licensed under the BSD 3-Clause License and Other Licenses of the Third-Party Components therein:
+ --------------------------------------------------------------------
+ 1. scipy
+ Copyright (c) 2001-2002 Enthought, Inc. 2003-2019, SciPy Developers.
+ All rights reserved.
+
+
+ A copy of the BSD 3-Clause License is included in this file.
+
+ The SciPy repository and source distributions bundle a number of libraries that
+ are compatibly licensed. We list these here.
+
+ Name: Numpydoc
+ Files: doc/sphinxext/numpydoc/*
+ License: 2-clause BSD
+ For details, see doc/sphinxext/LICENSE.txt
+
+ Name: scipy-sphinx-theme
+ Files: doc/scipy-sphinx-theme/*
+ License: 3-clause BSD, PSF and Apache 2.0
+ For details, see doc/sphinxext/LICENSE.txt
+
+ Name: Decorator
+ Files: scipy/_lib/decorator.py
+ License: 2-clause BSD
+ For details, see the header inside scipy/_lib/decorator.py
+
+ Name: ID
+ Files: scipy/linalg/src/id_dist/*
+ License: 3-clause BSD
+ For details, see scipy/linalg/src/id_dist/doc/doc.tex
+
+ Name: L-BFGS-B
+ Files: scipy/optimize/lbfgsb/*
+ License: BSD license
+ For details, see scipy/optimize/lbfgsb/README
+
+ Name: SuperLU
+ Files: scipy/sparse/linalg/dsolve/SuperLU/*
+ License: 3-clause BSD
+ For details, see scipy/sparse/linalg/dsolve/SuperLU/License.txt
+
+ Name: ARPACK
+ Files: scipy/sparse/linalg/eigen/arpack/ARPACK/*
+ License: 3-clause BSD
+ For details, see scipy/sparse/linalg/eigen/arpack/ARPACK/COPYING
+
+ Name: Qhull
+ Files: scipy/spatial/qhull/*
+ License: Qhull license (BSD-like)
+ For details, see scipy/spatial/qhull/COPYING.txt
+
+ Name: Cephes
+ Files: scipy/special/cephes/*
+ License: 3-clause BSD
+ Distributed under 3-clause BSD license with permission from the author,
+ see https://lists.debian.org/debian-legal/2004/12/msg00295.html
+
+ Cephes Math Library Release 2.8: June, 2000
+ Copyright 1984, 1995, 2000 by Stephen L. Moshier
+
+ This software is derived from the Cephes Math Library and is
+ incorporated herein by permission of the author.
+
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are met:
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ * Neither the name of the <organization> nor the
+ names of its contributors may be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
+ DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ Name: Faddeeva
+ Files: scipy/special/Faddeeva.*
+ License: MIT
+ Copyright (c) 2012 Massachusetts Institute of Technology
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+ Name: qd
+ Files: scipy/special/cephes/dd_*.[ch]
+ License: modified BSD license ("BSD-LBNL-License.doc")
+ This work was supported by the Director, Office of Science, Division
+ of Mathematical, Information, and Computational Sciences of the
+ U.S. Department of Energy under contract numbers DE-AC03-76SF00098 and
+ DE-AC02-05CH11231.
+
+ Copyright (c) 2003-2009, The Regents of the University of California,
+ through Lawrence Berkeley National Laboratory (subject to receipt of
+ any required approvals from U.S. Dept. of Energy) All rights reserved.
+
+ 1. Redistribution and use in source and binary forms, with or
+ without modification, are permitted provided that the following
+ conditions are met:
+
+ (1) Redistributions of source code must retain the copyright
+ notice, this list of conditions and the following disclaimer.
+
+ (2) Redistributions in binary form must reproduce the copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+
+ (3) Neither the name of the University of California, Lawrence
+ Berkeley National Laboratory, U.S. Dept. of Energy nor the names
+ of its contributors may be used to endorse or promote products
+ derived from this software without specific prior written
+ permission.
+
+ 2. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ 3. You are under no obligation whatsoever to provide any bug fixes,
+ patches, or upgrades to the features, functionality or performance of
+ the source code ("Enhancements") to anyone; however, if you choose to
+ make your Enhancements available either publicly, or directly to
+ Lawrence Berkeley National Laboratory, without imposing a separate
+ written license agreement for such Enhancements, then you hereby grant
+ the following license: a non-exclusive, royalty-free perpetual license
+ to install, use, modify, prepare derivative works, incorporate into
+ other computer software, distribute, and sublicense such enhancements
+ or derivative works thereof, in binary and source code form.
+
+ Name: pypocketfft
+ Files: scipy/fft/_pocketfft/[pocketfft.h, pypocketfft.cxx]
+ License: 3-Clause BSD
+ For details, see scipy/fft/_pocketfft/LICENSE.md
+
+ Name: uarray
+ Files: scipy/_lib/uarray/*
+ License: 3-Clause BSD
+ For details, see scipy/_lib/uarray/LICENSE
+
+ Name: ampgo
+ Files: benchmarks/benchmarks/go_benchmark_functions/*.py
+ License: MIT
+ Functions for testing global optimizers, forked from the AMPGO project,
+ https://code.google.com/archive/p/ampgo
+
+ Name: pybind11
+ Files: no source files are included, however pybind11 binary artifacts are
+ included with every binary build of SciPy.
+ License:
+ Copyright (c) 2016 Wenzel Jakob <[email protected]>, All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice, this
+ list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright notice,
+ this list of conditions and the following disclaimer in the documentation
+ and/or other materials provided with the distribution.
+
+ 3. Neither the name of the copyright holder nor the names of its contributors
+ may be used to endorse or promote products derived from this software
+ without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+ FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+ OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+
+ Open Source Software Licensed under the Specific License and Other Licenses of the Third-Party Components therein:
+ --------------------------------------------------------------------
+ 1. matplotlib
+ Copyright (c)
+ 2012- Matplotlib Development Team; All Rights Reserved
+
+
+ Terms of the Specific License:
+ --------------------------------------------------------------------
+ 1. This LICENSE AGREEMENT is between the Matplotlib Development Team
+ ("MDT"), and the Individual or Organization ("Licensee") accessing and
+ otherwise using matplotlib software in source or binary form and its
+ associated documentation.
+
+ 2. Subject to the terms and conditions of this License Agreement, MDT
+ hereby grants Licensee a nonexclusive, royalty-free, world-wide license
+ to reproduce, analyze, test, perform and/or display publicly, prepare
+ derivative works, distribute, and otherwise use matplotlib
+ alone or in any derivative version, provided, however, that MDT's
+ License Agreement and MDT's notice of copyright, i.e., "Copyright (c)
+ 2012- Matplotlib Development Team; All Rights Reserved" are retained in
+ matplotlib alone or in any derivative version prepared by
+ Licensee.
+
+ 3. In the event Licensee prepares a derivative work that is based on or
+ incorporates matplotlib or any part thereof, and wants to
+ make the derivative work available to others as provided herein, then
+ Licensee hereby agrees to include in any such work a brief summary of
+ the changes made to matplotlib .
+
+ 4. MDT is making matplotlib available to Licensee on an "AS
+ IS" basis. MDT MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
+ IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, MDT MAKES NO AND
+ DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
+ FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF MATPLOTLIB
+ WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
+
+ 5. MDT SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF MATPLOTLIB
+ FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR
+ LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING
+ MATPLOTLIB , OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF
+ THE POSSIBILITY THEREOF.
+
+ 6. This License Agreement will automatically terminate upon a material
+ breach of its terms and conditions.
+
+ 7. Nothing in this License Agreement shall be deemed to create any
+ relationship of agency, partnership, or joint venture between MDT and
+ Licensee. This License Agreement does not grant permission to use MDT
+ trademarks or trade name in a trademark sense to endorse or promote
+ products or services of Licensee, or any third party.
+
+ 8. By copying, installing or otherwise using matplotlib ,
+ Licensee agrees to be bound by the terms and conditions of this License
+ Agreement.
+
+ For third-party cmr10.pfb:
+ The cmr10.pfb file is a Type-1 version of one of Knuth's Computer Modern fonts.
+ It is included here as test data only, but the following license applies.
+
+ Copyright (c) 1997, 2009, American Mathematical Society (http://www.ams.org).
+ All Rights Reserved.
+
+ "cmb10" is a Reserved Font Name for this Font Software.
+ "cmbsy10" is a Reserved Font Name for this Font Software.
+ "cmbsy5" is a Reserved Font Name for this Font Software.
+ "cmbsy6" is a Reserved Font Name for this Font Software.
+ "cmbsy7" is a Reserved Font Name for this Font Software.
+ "cmbsy8" is a Reserved Font Name for this Font Software.
+ "cmbsy9" is a Reserved Font Name for this Font Software.
+ "cmbx10" is a Reserved Font Name for this Font Software.
+ "cmbx12" is a Reserved Font Name for this Font Software.
+ "cmbx5" is a Reserved Font Name for this Font Software.
+ "cmbx6" is a Reserved Font Name for this Font Software.
+ "cmbx7" is a Reserved Font Name for this Font Software.
+ "cmbx8" is a Reserved Font Name for this Font Software.
+ "cmbx9" is a Reserved Font Name for this Font Software.
+ "cmbxsl10" is a Reserved Font Name for this Font Software.
+ "cmbxti10" is a Reserved Font Name for this Font Software.
+ "cmcsc10" is a Reserved Font Name for this Font Software.
+ "cmcsc8" is a Reserved Font Name for this Font Software.
+ "cmcsc9" is a Reserved Font Name for this Font Software.
+ "cmdunh10" is a Reserved Font Name for this Font Software.
+ "cmex10" is a Reserved Font Name for this Font Software.
+ "cmex7" is a Reserved Font Name for this Font Software.
+ "cmex8" is a Reserved Font Name for this Font Software.
+ "cmex9" is a Reserved Font Name for this Font Software.
+ "cmff10" is a Reserved Font Name for this Font Software.
+ "cmfi10" is a Reserved Font Name for this Font Software.
+ "cmfib8" is a Reserved Font Name for this Font Software.
+ "cminch" is a Reserved Font Name for this Font Software.
+ "cmitt10" is a Reserved Font Name for this Font Software.
+ "cmmi10" is a Reserved Font Name for this Font Software.
+ "cmmi12" is a Reserved Font Name for this Font Software.
+ "cmmi5" is a Reserved Font Name for this Font Software.
+ "cmmi6" is a Reserved Font Name for this Font Software.
+ "cmmi7" is a Reserved Font Name for this Font Software.
+ "cmmi8" is a Reserved Font Name for this Font Software.
+ "cmmi9" is a Reserved Font Name for this Font Software.
+ "cmmib10" is a Reserved Font Name for this Font Software.
+ "cmmib5" is a Reserved Font Name for this Font Software.
+ "cmmib6" is a Reserved Font Name for this Font Software.
+ "cmmib7" is a Reserved Font Name for this Font Software.
+ "cmmib8" is a Reserved Font Name for this Font Software.
+ "cmmib9" is a Reserved Font Name for this Font Software.
+ "cmr10" is a Reserved Font Name for this Font Software.
+ "cmr12" is a Reserved Font Name for this Font Software.
+ "cmr17" is a Reserved Font Name for this Font Software.
+ "cmr5" is a Reserved Font Name for this Font Software.
+ "cmr6" is a Reserved Font Name for this Font Software.
+ "cmr7" is a Reserved Font Name for this Font Software.
+ "cmr8" is a Reserved Font Name for this Font Software.
+ "cmr9" is a Reserved Font Name for this Font Software.
+ "cmsl10" is a Reserved Font Name for this Font Software.
+ "cmsl12" is a Reserved Font Name for this Font Software.
+ "cmsl8" is a Reserved Font Name for this Font Software.
+ "cmsl9" is a Reserved Font Name for this Font Software.
+ "cmsltt10" is a Reserved Font Name for this Font Software.
+ "cmss10" is a Reserved Font Name for this Font Software.
+ "cmss12" is a Reserved Font Name for this Font Software.
+ "cmss17" is a Reserved Font Name for this Font Software.
+ "cmss8" is a Reserved Font Name for this Font Software.
+ "cmss9" is a Reserved Font Name for this Font Software.
+ "cmssbx10" is a Reserved Font Name for this Font Software.
+ "cmssdc10" is a Reserved Font Name for this Font Software.
+ "cmssi10" is a Reserved Font Name for this Font Software.
+ "cmssi12" is a Reserved Font Name for this Font Software.
+ "cmssi17" is a Reserved Font Name for this Font Software.
+ "cmssi8" is a Reserved Font Name for this Font Software.
+ "cmssi9" is a Reserved Font Name for this Font Software.
+ "cmssq8" is a Reserved Font Name for this Font Software.
+ "cmssqi8" is a Reserved Font Name for this Font Software.
+ "cmsy10" is a Reserved Font Name for this Font Software.
+ "cmsy5" is a Reserved Font Name for this Font Software.
+ "cmsy6" is a Reserved Font Name for this Font Software.
+ "cmsy7" is a Reserved Font Name for this Font Software.
+ "cmsy8" is a Reserved Font Name for this Font Software.
+ "cmsy9" is a Reserved Font Name for this Font Software.
+ "cmtcsc10" is a Reserved Font Name for this Font Software.
+ "cmtex10" is a Reserved Font Name for this Font Software.
+ "cmtex8" is a Reserved Font Name for this Font Software.
+ "cmtex9" is a Reserved Font Name for this Font Software.
+ "cmti10" is a Reserved Font Name for this Font Software.
+ "cmti12" is a Reserved Font Name for this Font Software.
+ "cmti7" is a Reserved Font Name for this Font Software.
+ "cmti8" is a Reserved Font Name for this Font Software.
+ "cmti9" is a Reserved Font Name for this Font Software.
+ "cmtt10" is a Reserved Font Name for this Font Software.
+ "cmtt12" is a Reserved Font Name for this Font Software.
+ "cmtt8" is a Reserved Font Name for this Font Software.
+ "cmtt9" is a Reserved Font Name for this Font Software.
+ "cmu10" is a Reserved Font Name for this Font Software.
+ "cmvtt10" is a Reserved Font Name for this Font Software.
+ "euex10" is a Reserved Font Name for this Font Software.
+ "euex7" is a Reserved Font Name for this Font Software.
+ "euex8" is a Reserved Font Name for this Font Software.
+ "euex9" is a Reserved Font Name for this Font Software.
+ "eufb10" is a Reserved Font Name for this Font Software.
+ "eufb5" is a Reserved Font Name for this Font Software.
+ "eufb7" is a Reserved Font Name for this Font Software.
+ "eufm10" is a Reserved Font Name for this Font Software.
+ "eufm5" is a Reserved Font Name for this Font Software.
+ "eufm7" is a Reserved Font Name for this Font Software.
+ "eurb10" is a Reserved Font Name for this Font Software.
+ "eurb5" is a Reserved Font Name for this Font Software.
+ "eurb7" is a Reserved Font Name for this Font Software.
+ "eurm10" is a Reserved Font Name for this Font Software.
+ "eurm5" is a Reserved Font Name for this Font Software.
+ "eurm7" is a Reserved Font Name for this Font Software.
+ "eusb10" is a Reserved Font Name for this Font Software.
+ "eusb5" is a Reserved Font Name for this Font Software.
+ "eusb7" is a Reserved Font Name for this Font Software.
+ "eusm10" is a Reserved Font Name for this Font Software.
+ "eusm5" is a Reserved Font Name for this Font Software.
+ "eusm7" is a Reserved Font Name for this Font Software.
+ "lasy10" is a Reserved Font Name for this Font Software.
+ "lasy5" is a Reserved Font Name for this Font Software.
+ "lasy6" is a Reserved Font Name for this Font Software.
+ "lasy7" is a Reserved Font Name for this Font Software.
+ "lasy8" is a Reserved Font Name for this Font Software.
+ "lasy9" is a Reserved Font Name for this Font Software.
+ "lasyb10" is a Reserved Font Name for this Font Software.
+ "lcircle1" is a Reserved Font Name for this Font Software.
+ "lcirclew" is a Reserved Font Name for this Font Software.
+ "lcmss8" is a Reserved Font Name for this Font Software.
+ "lcmssb8" is a Reserved Font Name for this Font Software.
+ "lcmssi8" is a Reserved Font Name for this Font Software.
+ "line10" is a Reserved Font Name for this Font Software.
+ "linew10" is a Reserved Font Name for this Font Software.
+ "msam10" is a Reserved Font Name for this Font Software.
+ "msam5" is a Reserved Font Name for this Font Software.
+ "msam6" is a Reserved Font Name for this Font Software.
+ "msam7" is a Reserved Font Name for this Font Software.
+ "msam8" is a Reserved Font Name for this Font Software.
+ "msam9" is a Reserved Font Name for this Font Software.
+ "msbm10" is a Reserved Font Name for this Font Software.
+ "msbm5" is a Reserved Font Name for this Font Software.
+ "msbm6" is a Reserved Font Name for this Font Software.
+ "msbm7" is a Reserved Font Name for this Font Software.
+ "msbm8" is a Reserved Font Name for this Font Software.
+ "msbm9" is a Reserved Font Name for this Font Software.
+ "wncyb10" is a Reserved Font Name for this Font Software.
+ "wncyi10" is a Reserved Font Name for this Font Software.
+ "wncyr10" is a Reserved Font Name for this Font Software.
+ "wncysc10" is a Reserved Font Name for this Font Software.
+ "wncyss10" is a Reserved Font Name for this Font Software.
+
+ This Font Software is licensed under the SIL Open Font License, Version 1.1.
+ This license is copied below, and is also available with a FAQ at:
+ http://scripts.sil.org/OFL
+
+ -----------------------------------------------------------
+ SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+ -----------------------------------------------------------
+
+ PREAMBLE
+ The goals of the Open Font License (OFL) are to stimulate worldwide
+ development of collaborative font projects, to support the font creation
+ efforts of academic and linguistic communities, and to provide a free and
+ open framework in which fonts may be shared and improved in partnership
+ with others.
+
+ The OFL allows the licensed fonts to be used, studied, modified and
+ redistributed freely as long as they are not sold by themselves. The
+ fonts, including any derivative works, can be bundled, embedded,
+ redistributed and/or sold with any software provided that any reserved
+ names are not used by derivative works. The fonts and derivatives,
+ however, cannot be released under any other type of license. The
+ requirement for fonts to remain under this license does not apply
+ to any document created using the fonts or their derivatives.
+
+ DEFINITIONS
+ "Font Software" refers to the set of files released by the Copyright
+ Holder(s) under this license and clearly marked as such. This may
+ include source files, build scripts and documentation.
+
+ "Reserved Font Name" refers to any names specified as such after the
+ copyright statement(s).
+
+ "Original Version" refers to the collection of Font Software components as
+ distributed by the Copyright Holder(s).
+
+ "Modified Version" refers to any derivative made by adding to, deleting,
+ or substituting -- in part or in whole -- any of the components of the
+ Original Version, by changing formats or by porting the Font Software to a
+ new environment.
+
+ "Author" refers to any designer, engineer, programmer, technical
+ writer or other person who contributed to the Font Software.
+
+ PERMISSION & CONDITIONS
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of the Font Software, to use, study, copy, merge, embed, modify,
+ redistribute, and sell modified and unmodified copies of the Font
+ Software, subject to the following conditions:
+
+ 1) Neither the Font Software nor any of its individual components,
+ in Original or Modified Versions, may be sold by itself.
+
+ 2) Original or Modified Versions of the Font Software may be bundled,
+ redistributed and/or sold with any software, provided that each copy
+ contains the above copyright notice and this license. These can be
+ included either as stand-alone text files, human-readable headers or
+ in the appropriate machine-readable metadata fields within text or
+ binary files as long as those fields can be easily viewed by the user.
+
+ 3) No Modified Version of the Font Software may use the Reserved Font
+ Name(s) unless explicit written permission is granted by the corresponding
+ Copyright Holder. This restriction only applies to the primary font name as
+ presented to the users.
+
+ 4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+ Software shall not be used to promote, endorse or advertise any
+ Modified Version, except to acknowledge the contribution(s) of the
+ Copyright Holder(s) and the Author(s) or with their explicit written
+ permission.
+
+ 5) The Font Software, modified or unmodified, in part or in whole,
+ must be distributed entirely under this license, and must not be
+ distributed under any other license. The requirement for fonts to
+ remain under this license does not apply to any document created
+ using the Font Software.
+
+ TERMINATION
+ This license becomes null and void if any of the above conditions are
+ not met.
+
+ DISCLAIMER
+ THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+ OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+ COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+ DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+ OTHER DEALINGS IN THE FONT SOFTWARE.
+
+ For third-party BaKoMa:
+ BaKoMa Fonts Licence
+ --------------------
+
+ This licence covers two font packs (known as BaKoMa Fonts Colelction,
+ which is available at `CTAN:fonts/cm/ps-type1/bakoma/'):
+
+ 1) BaKoMa-CM (1.1/12-Nov-94)
+ Computer Modern Fonts in PostScript Type 1 and TrueType font formats.
+
+ 2) BaKoMa-AMS (1.2/19-Jan-95)
+ AMS TeX fonts in PostScript Type 1 and TrueType font formats.
+
+ Copyright (C) 1994, 1995, Basil K. Malyshev. All Rights Reserved.
+
+ Permission to copy and distribute these fonts for any purpose is
+ hereby granted without fee, provided that the above copyright notice,
+ author statement and this permission notice appear in all copies of
+ these fonts and related documentation.
+
+ Permission to modify and distribute modified fonts for any purpose is
+ hereby granted without fee, provided that the copyright notice,
+ author statement, this permission notice and location of original
+ fonts (http://www.ctan.org/tex-archive/fonts/cm/ps-type1/bakoma)
+ appear in all copies of modified fonts and related documentation.
+
+ Permission to use these fonts (embedding into PostScript, PDF, SVG
+ and printing by using any software) is hereby granted without fee.
+ It is not required to provide any notices about using these fonts.
+
+ Basil K. Malyshev
+ INSTITUTE FOR HIGH ENERGY PHYSICS
+ IHEP, OMVT
+ Moscow Region
+ 142281 PROTVINO
+ RUSSIA
+
+ E-Mail: [email protected]
+
+ For thirdparty carlogo:
+ ----> we renamed carlito -> carlogo to comply with the terms <----
+
+ Copyright (c) 2010-2013 by tyPoland Lukasz Dziedzic with Reserved Font Name "Carlito".
+
+ This Font Software is licensed under the SIL Open Font License, Version 1.1.
+ This license is copied below, and is also available with a FAQ at: http://scripts.sil.org/OFL
+
+ -----------------------------------------------------------
+ SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+ -----------------------------------------------------------
+
+ PREAMBLE
+ The goals of the Open Font License (OFL) are to stimulate worldwide development of collaborative font projects, to support the font creation efforts of academic and linguistic communities, and to provide a free and open framework in which fonts may be shared and improved in partnership with others.
+
+ The OFL allows the licensed fonts to be used, studied, modified and redistributed freely as long as they are not sold by themselves. The fonts, including any derivative works, can be bundled, embedded, redistributed and/or sold with any software provided that any reserved names are not used by derivative works. The fonts and derivatives, however, cannot be released under any other type of license. The requirement for fonts to remain under this license does not apply to any document created using the fonts or their derivatives.
+
+ DEFINITIONS
+ "Font Software" refers to the set of files released by the Copyright Holder(s) under this license and clearly marked as such. This may include source files, build scripts and documentation.
+
+ "Reserved Font Name" refers to any names specified as such after the copyright statement(s).
+
+ "Original Version" refers to the collection of Font Software components as distributed by the Copyright Holder(s).
+
+ "Modified Version" refers to any derivative made by adding to, deleting, or substituting -- in part or in whole -- any of the components of the Original Version, by changing formats or by porting the Font Software to a new environment.
+
+ "Author" refers to any designer, engineer, programmer, technical writer or other person who contributed to the Font Software.
+
+ PERMISSION & CONDITIONS
+ Permission is hereby granted, free of charge, to any person obtaining a copy of the Font Software, to use, study, copy, merge, embed, modify, redistribute, and sell modified and unmodified copies of the Font Software, subject to the following conditions:
+
+ 1) Neither the Font Software nor any of its individual components, in Original or Modified Versions, may be sold by itself.
+
+ 2) Original or Modified Versions of the Font Software may be bundled, redistributed and/or sold with any software, provided that each copy contains the above copyright notice and this license. These can be included either as stand-alone text files, human-readable headers or in the appropriate machine-readable metadata fields within text or binary files as long as those fields can be easily viewed by the user.
+
+ 3) No Modified Version of the Font Software may use the Reserved Font Name(s) unless explicit written permission is granted by the corresponding Copyright Holder. This restriction only applies to the primary font name as presented to the users.
+
+ 4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font Software shall not be used to promote, endorse or advertise any Modified Version, except to acknowledge the contribution(s) of the Copyright Holder(s) and the Author(s) or with their explicit written permission.
+
+ 5) The Font Software, modified or unmodified, in part or in whole, must be distributed entirely under this license, and must not be distributed under any other license. The requirement for fonts to remain under this license does not apply to any document created using the Font Software.
+
+ TERMINATION
+ This license becomes null and void if any of the above conditions are not met.
+
+ DISCLAIMER
+ THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM OTHER DEALINGS IN THE FONT SOFTWARE.
+
+ For third-party ColorBrewer Color Schemes:
+ Apache-Style Software License for ColorBrewer Color Schemes
+
+ Version 1.1
+
+ Copyright (c) 2002 Cynthia Brewer, Mark Harrower, and The Pennsylvania
+ State University. All rights reserved. Redistribution and use in source
+ and binary forms, with or without modification, are permitted provided
+ that the following conditions are met:
+
+ 1. Redistributions as source code must retain the above copyright notice,
+ this list of conditions and the following disclaimer.
+
+ 2. The end-user documentation included with the redistribution, if any,
+ must include the following acknowledgment: "This product includes color
+ specifications and designs developed by Cynthia Brewer
+ (http://colorbrewer.org/)." Alternately, this acknowledgment may appear in
+ the software itself, if and wherever such third-party acknowledgments
+ normally appear.
+
+ 3. The name "ColorBrewer" must not be used to endorse or promote products
+ derived from this software without prior written permission. For written
+ permission, please contact Cynthia Brewer at [email protected].
+
+ 4. Products derived from this software may not be called "ColorBrewer",
+ nor may "ColorBrewer" appear in their name, without prior written
+ permission of Cynthia Brewer.
+
+ THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESSED OR IMPLIED WARRANTIES,
+ INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
+ AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
+ CYNTHIA BREWER, MARK HARROWER, OR THE PENNSYLVANIA STATE UNIVERSITY BE
+ LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ POSSIBILITY OF SUCH DAMAGE.
+
+ For third-party JSXTOOLS_RESIZE_OBSERVER:
+ # CC0 1.0 Universal
+
+ ## Statement of Purpose
+
+ The laws of most jurisdictions throughout the world automatically confer
+ exclusive Copyright and Related Rights (defined below) upon the creator and
+ subsequent owner(s) (each and all, an “owner”) of an original work of
+ authorship and/or a database (each, a “Work”).
+
+ Certain owners wish to permanently relinquish those rights to a Work for the
+ purpose of contributing to a commons of creative, cultural and scientific works
+ (“Commons”) that the public can reliably and without fear of later claims of
+ infringement build upon, modify, incorporate in other works, reuse and
+ redistribute as freely as possible in any form whatsoever and for any purposes,
+ including without limitation commercial purposes. These owners may contribute
+ to the Commons to promote the ideal of a free culture and the further
+ production of creative, cultural and scientific works, or to gain reputation or
+ greater distribution for their Work in part through the use and efforts of
+ others.
+
+ For these and/or other purposes and motivations, and without any expectation of
+ additional consideration or compensation, the person associating CC0 with a
982
+ Work (the “Affirmer”), to the extent that he or she is an owner of Copyright
983
+ and Related Rights in the Work, voluntarily elects to apply CC0 to the Work and
984
+ publicly distribute the Work under its terms, with knowledge of his or her
985
+ Copyright and Related Rights in the Work and the meaning and intended legal
986
+ effect of CC0 on those rights.
987
+
988
+ 1. Copyright and Related Rights. A Work made available under CC0 may be
989
+ protected by copyright and related or neighboring rights (“Copyright and
990
+ Related Rights”). Copyright and Related Rights include, but are not limited
991
+ to, the following:
992
+ 1. the right to reproduce, adapt, distribute, perform, display, communicate,
993
+ and translate a Work;
994
+ 2. moral rights retained by the original author(s) and/or performer(s);
995
+ 3. publicity and privacy rights pertaining to a person’s image or likeness
996
+ depicted in a Work;
997
+ 4. rights protecting against unfair competition in regards to a Work,
998
+ subject to the limitations in paragraph 4(i), below;
999
+ 5. rights protecting the extraction, dissemination, use and reuse of data in
1000
+ a Work;
1001
+ 6. database rights (such as those arising under Directive 96/9/EC of the
1002
+ European Parliament and of the Council of 11 March 1996 on the legal
1003
+ protection of databases, and under any national implementation thereof,
1004
+ including any amended or successor version of such directive); and
1005
+ 7. other similar, equivalent or corresponding rights throughout the world
1006
+ based on applicable law or treaty, and any national implementations
1007
+ thereof.
1008
+
1009
+ 2. Waiver. To the greatest extent permitted by, but not in contravention of,
1010
+ applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and
1011
+ unconditionally waives, abandons, and surrenders all of Affirmer’s Copyright
1012
+ and Related Rights and associated claims and causes of action, whether now
1013
+ known or unknown (including existing as well as future claims and causes of
1014
+ action), in the Work (i) in all territories worldwide, (ii) for the maximum
1015
+ duration provided by applicable law or treaty (including future time
1016
+ extensions), (iii) in any current or future medium and for any number of
1017
+ copies, and (iv) for any purpose whatsoever, including without limitation
1018
+ commercial, advertising or promotional purposes (the “Waiver”). Affirmer
1019
+ makes the Waiver for the benefit of each member of the public at large and
1020
+ to the detriment of Affirmer’s heirs and successors, fully intending that
1021
+ such Waiver shall not be subject to revocation, rescission, cancellation,
1022
+ termination, or any other legal or equitable action to disrupt the quiet
1023
+ enjoyment of the Work by the public as contemplated by Affirmer’s express
1024
+ Statement of Purpose.
1025
+
1026
+ 3. Public License Fallback. Should any part of the Waiver for any reason be
1027
+ judged legally invalid or ineffective under applicable law, then the Waiver
1028
+ shall be preserved to the maximum extent permitted taking into account
1029
+ Affirmer’s express Statement of Purpose. In addition, to the extent the
1030
+ Waiver is so judged Affirmer hereby grants to each affected person a
1031
+ royalty-free, non transferable, non sublicensable, non exclusive,
1032
+ irrevocable and unconditional license to exercise Affirmer’s Copyright and
1033
+ Related Rights in the Work (i) in all territories worldwide, (ii) for the
1034
+ maximum duration provided by applicable law or treaty (including future time
1035
+ extensions), (iii) in any current or future medium and for any number of
1036
+ copies, and (iv) for any purpose whatsoever, including without limitation
1037
+ commercial, advertising or promotional purposes (the “License”). The License
1038
+ shall be deemed effective as of the date CC0 was applied by Affirmer to the
1039
+ Work. Should any part of the License for any reason be judged legally
1040
+ invalid or ineffective under applicable law, such partial invalidity or
1041
+ ineffectiveness shall not invalidate the remainder of the License, and in
1042
+ such case Affirmer hereby affirms that he or she will not (i) exercise any
1043
+ of his or her remaining Copyright and Related Rights in the Work or (ii)
1044
+ assert any associated claims and causes of action with respect to the Work,
1045
+ in either case contrary to Affirmer’s express Statement of Purpose.
1046
+
1047
+ 4. Limitations and Disclaimers.
1048
+ 1. No trademark or patent rights held by Affirmer are waived, abandoned,
1049
+ surrendered, licensed or otherwise affected by this document.
1050
+ 2. Affirmer offers the Work as-is and makes no representations or warranties
1051
+ of any kind concerning the Work, express, implied, statutory or
1052
+ otherwise, including without limitation warranties of title,
1053
+ merchantability, fitness for a particular purpose, non infringement, or
1054
+ the absence of latent or other defects, accuracy, or the present or
1055
+ absence of errors, whether or not discoverable, all to the greatest
1056
+ extent permissible under applicable law.
1057
+ 3. Affirmer disclaims responsibility for clearing rights of other persons
1058
+ that may apply to the Work or any use thereof, including without
1059
+ limitation any person’s Copyright and Related Rights in the Work.
1060
+ Further, Affirmer disclaims responsibility for obtaining any necessary
1061
+ consents, permissions or other rights required for any use of the Work.
1062
+ 4. Affirmer understands and acknowledges that Creative Commons is not a
1063
+ party to this document and has no duty or obligation with respect to this
1064
+ CC0 or use of the Work.
1065
+
1066
+ For more information, please see
1067
+ http://creativecommons.org/publicdomain/zero/1.0/.
1068
+
1069
+ For third-party QT4_EDITOR:
1070
+ Module creating PyQt4 form dialogs/layouts to edit various type of parameters
1071
+
1072
+
1073
+ formlayout License Agreement (MIT License)
1074
+ ------------------------------------------
1075
+
1076
+ Copyright (c) 2009 Pierre Raybaut
1077
+
1078
+ Permission is hereby granted, free of charge, to any person
1079
+ obtaining a copy of this software and associated documentation
1080
+ files (the "Software"), to deal in the Software without
1081
+ restriction, including without limitation the rights to use,
1082
+ copy, modify, merge, publish, distribute, sublicense, and/or sell
1083
+ copies of the Software, and to permit persons to whom the
1084
+ Software is furnished to do so, subject to the following
1085
+ conditions:
1086
+
1087
+ The above copyright notice and this permission notice shall be
1088
+ included in all copies or substantial portions of the Software.
1089
+
1090
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
1091
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
1092
+ OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
1093
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
1094
+ HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
1095
+ WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
1096
+ FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
1097
+ OTHER DEALINGS IN THE SOFTWARE.
1098
+ """
1099
+
1100
+ For third-party SOLARIZED:
1101
+ https://github.com/altercation/solarized/blob/master/LICENSE
1102
+ Copyright (c) 2011 Ethan Schoonover
1103
+
1104
+ Permission is hereby granted, free of charge, to any person obtaining a copy
1105
+ of this software and associated documentation files (the "Software"), to deal
1106
+ in the Software without restriction, including without limitation the rights
1107
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
1108
+ copies of the Software, and to permit persons to whom the Software is
1109
+ furnished to do so, subject to the following conditions:
1110
+
1111
+ The above copyright notice and this permission notice shall be included in
1112
+ all copies or substantial portions of the Software.
1113
+
1114
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
1115
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
1116
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
1117
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
1118
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
1119
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
1120
+ THE SOFTWARE.
1121
+
1122
+ For third-party STIX:
1123
+ TERMS AND CONDITIONS
1124
+
1125
+ 1. Permission is hereby granted, free of charge, to any person
1126
+ obtaining a copy of the STIX Fonts-TM set accompanying this license
1127
+ (collectively, the "Fonts") and the associated documentation files
1128
+ (collectively with the Fonts, the "Font Software"), to reproduce and
1129
+ distribute the Font Software, including the rights to use, copy, merge
1130
+ and publish copies of the Font Software, and to permit persons to whom
1131
+ the Font Software is furnished to do so same, subject to the following
1132
+ terms and conditions (the "License").
1133
+
1134
+ 2. The following copyright and trademark notice and these Terms and
1135
+ Conditions shall be included in all copies of one or more of the Font
1136
+ typefaces and any derivative work created as permitted under this
1137
+ License:
1138
+
1139
+ Copyright (c) 2001-2005 by the STI Pub Companies, consisting of
1140
+ the American Institute of Physics, the American Chemical Society, the
1141
+ American Mathematical Society, the American Physical Society, Elsevier,
1142
+ Inc., and The Institute of Electrical and Electronic Engineers, Inc.
1143
+ Portions copyright (c) 1998-2003 by MicroPress, Inc. Portions copyright
1144
+ (c) 1990 by Elsevier, Inc. All rights reserved. STIX Fonts-TM is a
1145
+ trademark of The Institute of Electrical and Electronics Engineers, Inc.
1146
+
1147
+ 3. You may (a) convert the Fonts from one format to another (e.g.,
1148
+ from TrueType to PostScript), in which case the normal and reasonable
1149
+ distortion that occurs during such conversion shall be permitted and (b)
1150
+ embed or include a subset of the Fonts in a document for the purposes of
1151
+ allowing users to read text in the document that utilizes the Fonts. In
1152
+ each case, you may use the STIX Fonts-TM mark to designate the resulting
1153
+ Fonts or subset of the Fonts.
1154
+
1155
+ 4. You may also (a) add glyphs or characters to the Fonts, or modify
1156
+ the shape of existing glyphs, so long as the base set of glyphs is not
1157
+ removed and (b) delete glyphs or characters from the Fonts, provided
1158
+ that the resulting font set is distributed with the following
1159
+ disclaimer: "This [name] font does not include all the Unicode points
1160
+ covered in the STIX Fonts-TM set but may include others." In each case,
1161
+ the name used to denote the resulting font set shall not include the
1162
+ term "STIX" or any similar term.
1163
+
1164
+ 5. You may charge a fee in connection with the distribution of the
1165
+ Font Software, provided that no copy of one or more of the individual
1166
+ Font typefaces that form the STIX Fonts-TM set may be sold by itself.
1167
+
1168
+ 6. THE FONT SOFTWARE IS PROVIDED "AS IS," WITHOUT WARRANTY OF ANY
1169
+ KIND, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, ANY WARRANTIES
1170
+ OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
1171
+ OF COPYRIGHT, PATENT, TRADEMARK OR OTHER RIGHT. IN NO EVENT SHALL
1172
+ MICROPRESS OR ANY OF THE STI PUB COMPANIES BE LIABLE FOR ANY CLAIM,
1173
+ DAMAGES OR OTHER LIABILITY, INCLUDING, BUT NOT LIMITED TO, ANY GENERAL,
1174
+ SPECIAL, INDIRECT, INCIDENTAL OR CONSEQUENTIAL DAMAGES, WHETHER IN AN
1175
+ ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM OR OUT OF THE USE OR
1176
+ INABILITY TO USE THE FONT SOFTWARE OR FROM OTHER DEALINGS IN THE FONT
1177
+ SOFTWARE.
1178
+
1179
+ 7. Except as contained in the notice set forth in Section 2, the
1180
+ names MicroPress Inc. and STI Pub Companies, as well as the names of the
1181
+ companies/organizations that compose the STI Pub Companies, shall not be
1182
+ used in advertising or otherwise to promote the sale, use or other
1183
+ dealings in the Font Software without the prior written consent of the
1184
+ respective company or organization.
1185
+
1186
+ 8. This License shall become null and void in the event of any
1187
+ material breach of the Terms and Conditions herein by licensee.
1188
+
1189
+ 9. A substantial portion of the STIX Fonts set was developed by
1190
+ MicroPress Inc. for the STI Pub Companies. To obtain additional
1191
+ mathematical fonts, please contact MicroPress, Inc., 68-30 Harrow
1192
+ Street, Forest Hills, NY 11375, USA - Phone: (718) 575-1816.
1193
+
1194
+ For third-party YORICK:
1195
+ BSD-style license for gist/yorick colormaps.
1196
+
1197
+ Copyright:
1198
+
1199
+ Copyright (c) 1996. The Regents of the University of California.
1200
+ All rights reserved.
1201
+
1202
+ Permission to use, copy, modify, and distribute this software for any
1203
+ purpose without fee is hereby granted, provided that this entire
1204
+ notice is included in all copies of any software which is or includes
1205
+ a copy or modification of this software and in all copies of the
1206
+ supporting documentation for such software.
1207
+
1208
+ This work was produced at the University of California, Lawrence
1209
+ Livermore National Laboratory under contract no. W-7405-ENG-48 between
1210
+ the U.S. Department of Energy and The Regents of the University of
1211
+ California for the operation of UC LLNL.
1212
+
1213
+
1214
+ DISCLAIMER
1215
+
1216
+ This software was prepared as an account of work sponsored by an
1217
+ agency of the United States Government. Neither the United States
1218
+ Government nor the University of California nor any of their
1219
+ employees, makes any warranty, express or implied, or assumes any
1220
+ liability or responsibility for the accuracy, completeness, or
1221
+ usefulness of any information, apparatus, product, or process
1222
+ disclosed, or represents that its use would not infringe
1223
+ privately-owned rights. Reference herein to any specific commercial
1224
+ products, process, or service by trade name, trademark, manufacturer,
1225
+ or otherwise, does not necessarily constitute or imply its
1226
+ endorsement, recommendation, or favoring by the United States
1227
+ Government or the University of California. The views and opinions of
1228
+ authors expressed herein do not necessarily state or reflect those of
1229
+ the United States Government or the University of California, and
1230
+ shall not be used for advertising or product endorsement purposes.
1231
+
1232
+
1233
+ AUTHOR
1234
+
1235
+ David H. Munro wrote Yorick and Gist. Berkeley Yacc (byacc) generated
1236
+ the Yorick parser. The routines in Math are from LAPACK and FFTPACK;
1237
+ MathC contains C translations by David H. Munro. The algorithms for
1238
+ Yorick's random number generator and several special functions in
1239
+ Yorick/include were taken from Numerical Recipes by Press, et. al.,
1240
+ although the Yorick implementations are unrelated to those in
1241
+ Numerical Recipes. A small amount of code in Gist was adapted from
1242
+ the X11R4 release, copyright M.I.T. -- the complete copyright notice
1243
+ may be found in the (unused) file Gist/host.c.
1244
+
1245
+
1246
+
1247
+ Open Source Software Licensed under the PSF License Version 2:
1248
+ --------------------------------------------------------------------
1249
+ 1. Python
1250
+ Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018 Python Software Foundation. All rights reserved.
1251
+
1252
+ Copyright (c) 2000 BeOpen.com. All rights reserved.
1253
+
1254
+ Copyright (c) 1995-2001 Corporation for National Research Initiatives. All rights reserved.
1255
+
1256
+ Copyright (c) 1991-1995 Stichting Mathematisch Centrum. All rights reserved.
1257
+
1258
+
1259
+ Terms of the PSF License Version 2:
1260
+ --------------------------------------------------------------------
1261
+ 1. This LICENSE AGREEMENT is between the Python Software Foundation
1262
+ ("PSF"), and the Individual or Organization ("Licensee") accessing and
1263
+ otherwise using this software ("Python") in source or binary form and
1264
+ its associated documentation.
1265
+
1266
+ 2. Subject to the terms and conditions of this License Agreement, PSF hereby
1267
+ grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
1268
+ analyze, test, perform and/or display publicly, prepare derivative works,
1269
+ distribute, and otherwise use Python alone or in any derivative version,
1270
+ provided, however, that PSF's License Agreement and PSF's notice of copyright,
1271
+ i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
1272
+ 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018 Python Software Foundation; All
1273
+ Rights Reserved" are retained in Python alone or in any derivative version
1274
+ prepared by Licensee.
1275
+
1276
+ 3. In the event Licensee prepares a derivative work that is based on
1277
+ or incorporates Python or any part thereof, and wants to make
1278
+ the derivative work available to others as provided herein, then
1279
+ Licensee hereby agrees to include in any such work a brief summary of
1280
+ the changes made to Python.
1281
+
1282
+ 4. PSF is making Python available to Licensee on an "AS IS"
1283
+ basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
1284
+ IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
1285
+ DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
1286
+ FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
1287
+ INFRINGE ANY THIRD PARTY RIGHTS.
1288
+
1289
+ 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
1290
+ FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
1291
+ A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
1292
+ OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
1293
+
1294
+ 6. This License Agreement will automatically terminate upon a material
1295
+ breach of its terms and conditions.
1296
+
1297
+ 7. Nothing in this License Agreement shall be deemed to create any
1298
+ relationship of agency, partnership, or joint venture between PSF and
1299
+ Licensee. This License Agreement does not grant permission to use PSF
1300
+ trademarks or trade name in a trademark sense to endorse or promote
1301
+ products or services of Licensee, or any third party.
1302
+
1303
+ 8. By copying, installing or otherwise using Python, Licensee
1304
+ agrees to be bound by the terms and conditions of this License
1305
+ Agreement.
1306
+
1307
+
1308
+
1309
+ Open Source Software Licensed under the MIT License:
1310
+ The below software in this distribution may have been modified by THL A29 Limited ("Tencent Modifications"). All Tencent Modifications are Copyright (C) 2021 THL A29 Limited.
1311
+ --------------------------------------------------------------------
1312
+ 1. C^3 Framework
1313
+ Copyright (c) 2018 Junyu Gao
1314
+
1315
+
1316
+ Terms of the MIT License:
1317
+ --------------------------------------------------------------------
1318
+ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
1319
+
1320
+ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
1321
+
1322
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
1323
+
1324
+
1325
+
1326
+ Open Source Software Licensed under the MIT License:
1327
+ --------------------------------------------------------------------
1328
+ 1. tensorboardX
1329
+ Copyright (c) 2017 Tzu-Wei Huang
1330
+
1331
+
1332
+ A copy of the MIT License is included in this file.
1333
+
1334
+
1335
+
1336
+ Open Source Software Licensed under the Apache License Version 2.0:
1337
+ The below software in this distribution may have been modified by Tencent.
1338
+ --------------------------------------------------------------------
1339
+ 1. DETR
1340
+ Copyright 2020 - present, Facebook, Inc
1341
+ Please note this software has been modified by Tencent in this distribution.
1342
+
1343
+
1344
+ Terms of the Apache License Version 2.0:
1345
+ --------------------------------------------------------------------
1346
+ Apache License
1347
+
1348
+ Version 2.0, January 2004
1349
+
1350
+ http://www.apache.org/licenses/
1351
+
1352
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1353
+ 1. Definitions.
1354
+
1355
+ "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
1356
+
1357
+ "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
1358
+
1359
+ "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
1360
+
1361
+ "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
1362
+
1363
+ "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
1364
+
1365
+ "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
1366
+
1367
+ "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
1368
+
1369
+ "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
1370
+
1371
+ "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."
1372
+
1373
+ "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
1374
+
1375
+ 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
1376
+
1377
+ 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
1378
+
1379
+ 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
1380
+
1381
+ You must give any other recipients of the Work or Derivative Works a copy of this License; and
1382
+
1383
+ You must cause any modified files to carry prominent notices stating that You changed the files; and
1384
+
1385
+ You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
1386
+
1387
+ If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
1388
+
1389
+ You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
1390
+
1391
+ 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
1392
+
1393
+ 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
1394
+
1395
+ 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
1396
+
1397
+ 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
1398
+
1399
+ 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
1400
+
1401
+ END OF TERMS AND CONDITIONS
crowd_counter/README.md ADDED
@@ -0,0 +1,164 @@
1
+ # P2PNet (ICCV2021 Oral Presentation)
2
+
3
+ This repository contains the code for the official PyTorch implementation of **P2PNet**, as described in [Rethinking Counting and Localization in Crowds: A Purely Point-Based Framework](https://arxiv.org/abs/2107.12746).
4
+
5
+ A brief introduction of P2PNet can be found at [机器之心 (almosthuman)](https://mp.weixin.qq.com/s?__biz=MzA3MzI4MjgzMw==&mid=2650827826&idx=3&sn=edd3d66444130fb34a59d08fab618a9e&chksm=84e5a84cb392215a005a3b3424f20a9d24dc525dcd933960035bf4b6aa740191b5ecb2b7b161&mpshare=1&scene=1&srcid=1004YEOC7HC9daYRYeUio7Xn&sharer_sharetime=1633675738338&sharer_shareid=7d375dccd3b2f9eec5f8b27ee7c04883&version=3.1.16.5505&platform=win#rd).
6
+
7
+ The code is tested with PyTorch 1.5.0; it may not run with other versions.
8
+
9
+ ## Visualized demos for P2PNet
10
+ <img src="vis/congested1.png" width="1000"/>
11
+ <img src="vis/congested2.png" width="1000"/>
12
+ <img src="vis/congested3.png" width="1000"/>
13
+
14
+ ## The network
15
+ The overall architecture of P2PNet. Built upon VGG16, it first introduces an upsampling path to obtain a fine-grained feature map.
16
+ Then it exploits two branches to simultaneously predict a set of point proposals and their confidence scores.
17
+
18
+ <img src="vis/net.png" width="1000"/>
19
+
20
+ ## Comparison with state-of-the-art methods
21
+ P2PNet achieves state-of-the-art performance on several challenging datasets with various densities.
22
+
23
+ | Methods | Venue | SHTechPartA <br> MAE/MSE |SHTechPartB <br> MAE/MSE | UCF_CC_50 <br> MAE/MSE | UCF_QNRF <br> MAE/MSE |
24
+ |:----:|:----:|:----:|:----:|:----:|:----:|
25
+ CAN | CVPR'19 | 62.3/100.0 | 7.8/12.2 | 212.2/**243.7** | 107.0/183.0 |
26
+ Bayesian+ | ICCV'19 | 62.8/101.8 | 7.7/12.7 | 229.3/308.2 | 88.7/154.8 |
27
+ S-DCNet | ICCV'19 | 58.3/95.0 | 6.7/10.7 | 204.2/301.3 | 104.4/176.1 |
28
+ SANet+SPANet | ICCV'19 | 59.4/92.5 | 6.5/**9.9** | 232.6/311.7 | -/- |
29
+ DUBNet | AAAI'20 | 64.6/106.8 | 7.7/12.5 | 243.8/329.3 | 105.6/180.5 |
30
+ SDANet | AAAI'20 | 63.6/101.8 | 7.8/10.2 | 227.6/316.4 | -/- |
31
+ ADSCNet | CVPR'20 | <u>55.4</u>/97.7 | <u>6.4</u>/11.3 | 198.4/267.3 | **71.3**/**132.5**|
32
+ ASNet | CVPR'20 | 57.78/<u>90.13</u> | -/- | <u>174.84</u>/<u>251.63</u> | 91.59/159.71 |
33
+ AMRNet | ECCV'20 | 61.59/98.36 | 7.02/11.00 | 184.0/265.8 | 86.6/152.2 |
34
+ AMSNet | ECCV'20 | 56.7/93.4 | 6.7/10.2 | 208.4/297.3 | 101.8/163.2|
35
+ DM-Count | NeurIPS'20 | 59.7/95.7 | 7.4/11.8 | 211.0/291.5 | 85.6/<u>148.3</u>|
36
+ **Ours** |- | **52.74**/**85.06** | **6.25**/**9.9** | **172.72**/256.18 | <u>85.32</u>/154.5 |
37
+
38
+ Comparison on the [NWPU-Crowd](https://www.crowdbenchmark.com/resultdetail.html?rid=81) dataset.
39
+
40
+ | Methods | MAE[O] |MSE[O] | MAE[L] | MAE[S] |
41
+ |:----:|:----:|:----:|:----:|:----:|
42
+ MCNN | 232.5|714.6 | 220.9|1171.9 |
43
+ SANet | 190.6 | 491.4 | 153.8 | 716.3|
44
+ CSRNet | 121.3 | 387.8 | 112.0 | <u>522.7</u> |
45
+ PCC-Net | 112.3 | 457.0 | 111.0 | 777.6 |
46
+ CANNet | 110.0 | 495.3 | 102.3 | 718.3|
47
+ Bayesian+ | 105.4 | 454.2 | 115.8 | 750.5 |
48
+ S-DCNet | 90.2 | 370.5 | **82.9** | 567.8 |
49
+ DM-Count | <u>88.4</u> | 388.6 | 88.0 | **498.0** |
50
+ **Ours** | **77.44**|**362** | <u>83.28</u>| 553.92 |
51
+
52
+ The overall performance for both counting and localization.
53
+
54
+ |nAP$_{\delta}$|SHTechPartA| SHTechPartB | UCF_CC_50 | UCF_QNRF | NWPU_Crowd |
55
+ |:----:|:----:|:----:|:----:|:----:|:----:|
56
+ $\delta=0.05$ | 10.9\% | 23.8\% | 5.0\% | 5.9\% | 12.9\% |
57
+ $\delta=0.25$ | 70.3\% | 84.2\% | 54.5\% | 55.4\% | 71.3\% |
58
+ $\delta=0.50$ | 90.1\% | 94.1\% | 88.1\% | 83.2\% | 89.1\% |
59
+ $\delta=\{{0.05:0.05:0.50}\}$ | 64.4\% | 76.3\% | 54.3\% | 53.1\% | 65.0\% |
60
+
61
+ Comparison for the localization performance in terms of F1-Measure on NWPU.
62
+
63
+ | Method| F1-Measure |Precision| Recall |
64
+ |:----:|:----:|:----:|:----:|
65
+ FasterRCNN | 0.068 | 0.958 | 0.035 |
66
+ TinyFaces | 0.567 | 0.529 | 0.611 |
67
+ RAZ | 0.599 | 0.666 | 0.543|
68
+ Crowd-SDNet | 0.637 | 0.651 | 0.624 |
69
+ PDRNet | 0.653 | 0.675 | 0.633 |
70
+ TopoCount | 0.692 | 0.683 | **0.701** |
71
+ D2CNet | <u>0.700</u> | **0.741** | 0.662 |
72
+ **Ours** |**0.712** | <u>0.729</u> | <u>0.695</u> |
73
+
74
+ ## Installation
75
+ * Clone this repo into a directory named P2PNET_ROOT
76
+ * Organize your datasets as required
77
+ * Install the Python dependencies (we use Python 3.6.5 and PyTorch 1.5.0):
78
+ ```
79
+ pip install -r requirements.txt
80
+ ```
81
+
82
+ ## Organize the counting dataset
83
+ We use a list file to collect all the images and their ground-truth annotations in a counting dataset. With the dataset organized as recommended below, each line of the list file pairs an image path with its annotation path:
84
+ ```
85
+ train/scene01/img01.jpg train/scene01/img01.txt
86
+ train/scene01/img02.jpg train/scene01/img02.txt
87
+ ...
88
+ train/scene02/img01.jpg train/scene02/img01.txt
89
+ ```
90
+
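+ As an illustration, here is a minimal sketch (a hypothetical helper, not part of this repo) that generates such a list file by pairing each image with its same-named annotation file:
+ ```python
+ import glob
+ import os
+
+ def write_list(data_root, split="train"):
+     # pair every image under DATA_ROOT/<split>/<scene>/ with its .txt annotation
+     with open(os.path.join(data_root, f"{split}.list"), "w") as f:
+         for img in sorted(glob.glob(os.path.join(data_root, split, "*", "*.jpg"))):
+             gt = os.path.splitext(img)[0] + ".txt"
+             f.write(f"{os.path.relpath(img, data_root)} {os.path.relpath(gt, data_root)}\n")
+ ```
+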
91
+ ### Dataset structures:
92
+ ```
93
+ DATA_ROOT/
94
+ |->train/
95
+ | |->scene01/
96
+ | |->scene02/
97
+ | |->...
98
+ |->test/
99
+ | |->scene01/
100
+ | |->scene02/
101
+ | |->...
102
+ |->train.list
103
+ |->test.list
104
+ ```
105
+ DATA_ROOT is your path containing the counting datasets.
106
+
107
+ ### Annotations format
108
+ For the annotations of each image, we use a single txt file containing one head point per line. Note that pixel indexing starts at 0. The expected format of each line is:
109
+ ```
110
+ x1 y1
111
+ x2 y2
112
+ ...
113
+ ```
114
+
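+ Because each line is just two whitespace-separated coordinates, the annotations can be loaded in a few lines; here is a sketch (a hypothetical snippet, mirroring what `load_data` in `crowd_datasets/SHHA/SHHA.py` does):
+ ```python
+ import numpy as np
+
+ def load_points(gt_path):
+     # return an (N, 2) array of head coordinates, one "x y" pair per line
+     with open(gt_path) as f:
+         return np.array([[float(v) for v in line.split()] for line in f if line.strip()])
+ ```
+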
115
+ ## Training
116
+
117
+ The network can be trained using the `train.py` script. For training on SHTechPartA, use
118
+
119
+ ```
120
+ CUDA_VISIBLE_DEVICES=0 python train.py --data_root $DATA_ROOT \
121
+ --dataset_file SHHA \
122
+ --epochs 3500 \
123
+ --lr_drop 3500 \
124
+ --output_dir ./logs \
125
+ --checkpoints_dir ./weights \
126
+ --tensorboard_dir ./logs \
127
+ --lr 0.0001 \
128
+ --lr_backbone 0.00001 \
129
+ --batch_size 8 \
130
+ --eval_freq 1 \
131
+ --gpu_id 0
132
+ ```
133
+ By default, a periodic evaluation will be conducted on the validation set.
134
+
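+ If TensorBoard is installed, the curves written to `--tensorboard_dir` can be inspected with:
+ ```
+ tensorboard --logdir ./logs
+ ```
+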
135
+ ## Testing
136
+
137
+ A model trained on SHTechPartA (with an MAE of **51.96**) is available in "./weights"; run the following command to launch a visualization demo:
138
+
139
+ ```
140
+ CUDA_VISIBLE_DEVICES=0 python run_test.py --weight_path ./weights/SHTechA.pth --output_dir ./logs/
141
+ ```
142
+
143
+ ## Acknowledgements
144
+
145
+ - Part of the code is borrowed from the [C^3 Framework](https://github.com/gjy3035/C-3-Framework).
146
+ - We refer to [DETR](https://github.com/facebookresearch/detr) to implement our matching strategy.
147
+
148
+
149
+ ## Citing P2PNet
150
+
151
+ If you find P2PNet useful in your project, please consider citing us:
152
+
153
+ ```BibTeX
154
+ @inproceedings{song2021rethinking,
155
+ title={Rethinking Counting and Localization in Crowds: A Purely Point-Based Framework},
156
+ author={Song, Qingyu and Wang, Changan and Jiang, Zhengkai and Wang, Yabiao and Tai, Ying and Wang, Chengjie and Li, Jilin and Huang, Feiyue and Wu, Yang},
157
+ booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
158
+ year={2021}
159
+ }
160
+ ```
161
+
162
+ ## Related works from Tencent Youtu Lab
163
+ - [AAAI2021] To Choose or to Fuse? Scale Selection for Crowd Counting. ([paper link](https://ojs.aaai.org/index.php/AAAI/article/view/16360) & [codes](https://github.com/TencentYoutuResearch/CrowdCounting-SASNet))
164
+ - [ICCV2021] Uniformity in Heterogeneity: Diving Deep into Count Interval Partition for Crowd Counting. ([paper link](https://arxiv.org/abs/2107.12619) & [codes](https://github.com/TencentYoutuResearch/CrowdCounting-UEPNet))
crowd_counter/__init__.py ADDED
@@ -0,0 +1,117 @@
1
+ import torch
2
+ import torchvision.transforms as standard_transforms
3
+ import numpy as np
4
+
5
+ from PIL import Image
6
+ import cv2
7
+
8
+ from .engine import *
9
+ from .models import build_model
10
+ import os
11
+ import warnings
12
+
13
+ warnings.filterwarnings("ignore")
14
+
15
+
16
+ class Args:
17
+ def __init__(
18
+ self,
19
+ backbone: str,
20
+ row: int,
21
+ line: int,
22
+ output_dir: str,
23
+ weight_path: str,
24
+ # gpu_id: int,
25
+ ) -> None:
26
+ self.backbone = backbone
27
+ self.row = row
28
+ self.line = line
29
+ self.output_dir = output_dir
30
+ self.weight_path = weight_path
31
+ # self.gpu_id = gpu_id
32
+
33
+
34
+ class CrowdCounter:
35
+ def __init__(self) -> None:
36
+ # Create the Args object
37
+ self.args = Args(
38
+ backbone="vgg16_bn",
39
+ row=2,
40
+ line=2,
41
+ output_dir="./crowd_counter/preds",
42
+ weight_path="./crowd_counter/weights/SHTechA.pth",
43
+ # gpu_id=0,
44
+ )
45
+
46
+ # device = torch.device('cuda')
47
+ self.device = torch.device("cpu")
48
+ # get the P2PNet
49
+ self.model = build_model(self.args)
50
+ # move to GPU
51
+ self.model.to(self.device)
52
+ # load trained model
53
+ if self.args.weight_path is not None:
54
+ checkpoint = torch.load(self.args.weight_path, map_location="cpu")
55
+ self.model.load_state_dict(checkpoint["model"])
56
+ # convert to eval mode
57
+ self.model.eval()
58
+ # create the pre-processing transform
59
+ self.transform = standard_transforms.Compose(
60
+ [
61
+ standard_transforms.ToTensor(),
62
+ standard_transforms.Normalize(
63
+ mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
64
+ ),
65
+ ]
66
+ )
67
+
68
+ def test(
69
+ self, args: Args, img_raw: Image.Image, debug: bool = False,
70
+ ) -> tuple[list, np.ndarray, torch.Tensor]:
71
+
72
+ # round the width and height down to multiples of 128
73
+ width, height = img_raw.size
74
+ new_width = width // 128 * 128
75
+ new_height = height // 128 * 128
76
+ img_raw = img_raw.resize((new_width, new_height), Image.LANCZOS)
77
+ # pre-processing
78
+ img = self.transform(img_raw)
79
+
80
+ samples = torch.Tensor(img).unsqueeze(0)
81
+ samples = samples.to(self.device)
82
+ # run inference
83
+ outputs = self.model(samples)
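+ # keep the softmax probability of the positive (head) class for each point proposal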
84
+ outputs_scores = torch.nn.functional.softmax(outputs["pred_logits"], -1)[
85
+ :, :, 1
86
+ ][0]
87
+
88
+ outputs_points = outputs["pred_points"][0]
89
+
90
+ threshold = 0.5
91
+ # filter the predictions
92
+ conf = outputs_scores[outputs_scores > threshold]
93
+ points = (
94
+ outputs_points[outputs_scores > threshold].detach().cpu().numpy().tolist()
95
+ )
96
+
97
+ # draw the predictions
98
+ size = 5
99
+ img_to_draw = cv2.cvtColor(np.array(img_raw), cv2.COLOR_RGB2BGR)
100
+
101
+ for p in points:
102
+ img_to_draw = cv2.circle(
103
+ img_to_draw, (int(p[0]), int(p[1])), size, (255, 0, 0), -1
104
+ )
105
+ return points, img_to_draw, conf
106
+
107
+ # Run inference on an image and return the predicted count and the visualization
108
+ def inference(self, img_raw: Image.Image) -> tuple[int, np.ndarray]:
109
+
110
+ # Predict points on the image
111
+ points, img_to_draw, conf = self.test(self.args, img_raw)
112
+
113
+ # Count the predicted points
114
+ num_points = len(points)
115
+
116
+ # predicted count and the image with drawn points
117
+ return num_points, img_to_draw
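+
+
+ # Usage sketch (illustrative only; assumes the bundled weights and an example
+ # image exist at the paths used below):
+ if __name__ == "__main__":
+     demo_img = Image.open("images/img-1.jpg").convert("RGB")
+     count, drawn = CrowdCounter().inference(demo_img)
+     print(f"predicted count: {count}")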
crowd_counter/crowd_datasets/SHHA/SHHA.py ADDED
@@ -0,0 +1,133 @@
1
+ import os
2
+ import random
3
+ import torch
4
+ import numpy as np
5
+ from torch.utils.data import Dataset
6
+ from PIL import Image
7
+ import cv2
8
+ import glob
9
+ import scipy.io as io
10
+
11
+ class SHHA(Dataset):
12
+ def __init__(self, data_root, transform=None, train=False, patch=False, flip=False):
13
+ self.root_path = data_root
14
+ self.train_lists = "shanghai_tech_part_a_train.list"
15
+ self.eval_list = "shanghai_tech_part_a_test.list"
16
+ # there may exist multiple list files
17
+ self.img_list_file = self.train_lists.split(',')
18
+ if train:
19
+ self.img_list_file = self.train_lists.split(',')
20
+ else:
21
+ self.img_list_file = self.eval_list.split(',')
22
+
23
+ self.img_map = {}
24
+ self.img_list = []
25
+ # loads the image/gt pairs
26
+ for _, train_list in enumerate(self.img_list_file):
27
+ train_list = train_list.strip()
28
+ with open(os.path.join(self.root_path, train_list)) as fin:
29
+ for line in fin:
30
+ if len(line) < 2:
31
+ continue
32
+ line = line.strip().split()
33
+ self.img_map[os.path.join(self.root_path, line[0].strip())] = \
34
+ os.path.join(self.root_path, line[1].strip())
35
+ self.img_list = sorted(list(self.img_map.keys()))
36
+ # number of samples
37
+ self.nSamples = len(self.img_list)
38
+
39
+ self.transform = transform
40
+ self.train = train
41
+ self.patch = patch
42
+ self.flip = flip
43
+
44
+ def __len__(self):
45
+ return self.nSamples
46
+
47
+ def __getitem__(self, index):
48
+ assert index < len(self), 'index range error'
49
+
50
+ img_path = self.img_list[index]
51
+ gt_path = self.img_map[img_path]
52
+ # load image and ground truth
53
+ img, point = load_data((img_path, gt_path), self.train)
54
+ # apply augmentation
55
+ if self.transform is not None:
56
+ img = self.transform(img)
57
+
58
+ if self.train:
59
+ # data augmentation -> random scale
60
+ scale_range = [0.7, 1.3]
61
+ min_size = min(img.shape[1:])
62
+ scale = random.uniform(*scale_range)
63
+ # scale the image and points
64
+ if scale * min_size > 128:
65
+ img = torch.nn.functional.upsample_bilinear(img.unsqueeze(0), scale_factor=scale).squeeze(0)
66
+ point *= scale
67
+ # random crop augmentation
68
+ if self.train and self.patch:
69
+ img, point = random_crop(img, point)
70
+ for i, _ in enumerate(point):
71
+ point[i] = torch.Tensor(point[i])
72
+ # random flipping
73
+ if random.random() > 0.5 and self.train and self.flip:
74
+ # random flip
75
+ img = torch.Tensor(img[:, :, :, ::-1].copy())
76
+ for i, _ in enumerate(point):
77
+ point[i][:, 0] = 128 - point[i][:, 0]
78
+
79
+ if not self.train:
80
+ point = [point]
81
+
82
+ img = torch.Tensor(img)
83
+ # pack up related infos
84
+ target = [{} for i in range(len(point))]
85
+ for i, _ in enumerate(point):
86
+ target[i]['point'] = torch.Tensor(point[i])
87
+ image_id = int(img_path.split('/')[-1].split('.')[0].split('_')[-1])
88
+ image_id = torch.Tensor([image_id]).long()
89
+ target[i]['image_id'] = image_id
90
+ target[i]['labels'] = torch.ones([point[i].shape[0]]).long()
91
+
92
+ return img, target
93
+
94
+
95
+ def load_data(img_gt_path, train):
96
+ img_path, gt_path = img_gt_path
97
+ # load the images
98
+ img = cv2.imread(img_path)
99
+ img = Image.fromarray(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
100
+ # load ground truth points
101
+ points = []
102
+ with open(gt_path) as f_label:
103
+ for line in f_label:
104
+ x = float(line.strip().split(' ')[0])
105
+ y = float(line.strip().split(' ')[1])
106
+ points.append([x, y])
107
+
108
+ return img, np.array(points)
109
+
110
+ # random crop augmentation
111
+ def random_crop(img, den, num_patch=4):
112
+ half_h = 128
113
+ half_w = 128
114
+ result_img = np.zeros([num_patch, img.shape[0], half_h, half_w])
115
+ result_den = []
116
+ # crop num_patch for each image
117
+ for i in range(num_patch):
118
+ start_h = random.randint(0, img.size(1) - half_h)
119
+ start_w = random.randint(0, img.size(2) - half_w)
120
+ end_h = start_h + half_h
121
+ end_w = start_w + half_w
122
+ # copy the cropped rect
123
+ result_img[i] = img[:, start_h:end_h, start_w:end_w]
124
+ # copy the cropped points
125
+ idx = (den[:, 0] >= start_w) & (den[:, 0] <= end_w) & (den[:, 1] >= start_h) & (den[:, 1] <= end_h)
126
+ # shift the coordinates
127
+ record_den = den[idx]
128
+ record_den[:, 0] -= start_w
129
+ record_den[:, 1] -= start_h
130
+
131
+ result_den.append(record_den)
132
+
133
+ return result_img, result_den
crowd_counter/crowd_datasets/SHHA/__init__.py ADDED
File without changes
crowd_counter/crowd_datasets/SHHA/loading_data.py ADDED
@@ -0,0 +1,27 @@
1
+ import torchvision.transforms as standard_transforms
2
+ from .SHHA import SHHA
3
+
4
+ # DeNormalize used to get original images
5
+ class DeNormalize(object):
6
+ def __init__(self, mean, std):
7
+ self.mean = mean
8
+ self.std = std
9
+
10
+ def __call__(self, tensor):
11
+ for t, m, s in zip(tensor, self.mean, self.std):
12
+ t.mul_(s).add_(m)
13
+ return tensor
14
+
15
+ def loading_data(data_root):
16
+ # the pre-processing transform
17
+ transform = standard_transforms.Compose([
18
+ standard_transforms.ToTensor(),
19
+ standard_transforms.Normalize(mean=[0.485, 0.456, 0.406],
20
+ std=[0.229, 0.224, 0.225]),
21
+ ])
22
+ # create the training dataset
23
+ train_set = SHHA(data_root, train=True, transform=transform, patch=True, flip=True)
24
+ # create the validation dataset
25
+ val_set = SHHA(data_root, train=False, transform=transform)
26
+
27
+ return train_set, val_set
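+
+ # Usage sketch (illustrative; "/path/to/DATA_ROOT" is a placeholder). Note that a
+ # custom collate_fn is typically needed downstream, since each image carries a
+ # variable number of annotated points and the default collation cannot batch them.
+ if __name__ == "__main__":
+     train_set, val_set = loading_data("/path/to/DATA_ROOT")
+     print(len(train_set), "training images,", len(val_set), "validation images")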
crowd_counter/crowd_datasets/__init__.py ADDED
@@ -0,0 +1,7 @@
1
+ # build dataset according to given 'dataset_file'
2
+ def build_dataset(args):
3
+ if args.dataset_file == 'SHHA':
4
+ from .SHHA.loading_data import loading_data
5
+ return loading_data
6
+
7
+ return None
crowd_counter/engine.py ADDED
@@ -0,0 +1,159 @@
1
+ # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2
+ """
3
+ Train and eval functions used in main.py
4
+ Mostly copy-paste from DETR (https://github.com/facebookresearch/detr).
5
+ """
6
+ import math
7
+ import os
8
+ import sys
9
+ from typing import Iterable
10
+
11
+ import torch
12
+
13
+ import crowd_counter.util.misc as utils
14
+ from crowd_counter.util.misc import NestedTensor
15
+ import numpy as np
16
+ import time
17
+ import torchvision.transforms as standard_transforms
18
+ import cv2
+ 
+ 
+ class DeNormalize(object):
+     def __init__(self, mean, std):
+         self.mean = mean
+         self.std = std
+ 
+     def __call__(self, tensor):
+         for t, m, s in zip(tensor, self.mean, self.std):
+             t.mul_(s).add_(m)
+         return tensor
+ 
+ 
+ def vis(samples, targets, pred, vis_dir, des=None):
+     '''
+     samples -> tensor: [batch, 3, H, W]
+     targets -> list of dict: [{'point': [], 'image_id': str}]
+     pred -> list: [num_preds, 2]
+     '''
+     gts = [t['point'].tolist() for t in targets]
+ 
+     pil_to_tensor = standard_transforms.ToTensor()
+ 
+     restore_transform = standard_transforms.Compose([
+         DeNormalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+         standard_transforms.ToPILImage()
+     ])
+     # draw the images one by one
+     for idx in range(samples.shape[0]):
+         sample = restore_transform(samples[idx])
+         sample = pil_to_tensor(sample.convert('RGB')).numpy() * 255
+         sample_gt = sample.transpose([1, 2, 0])[:, :, ::-1].astype(np.uint8).copy()
+         sample_pred = sample.transpose([1, 2, 0])[:, :, ::-1].astype(np.uint8).copy()
+ 
+         size = 2
+         # draw the ground-truth points (green)
+         for t in gts[idx]:
+             sample_gt = cv2.circle(sample_gt, (int(t[0]), int(t[1])), size, (0, 255, 0), -1)
+         # draw the predicted points (red)
+         for p in pred[idx]:
+             sample_pred = cv2.circle(sample_pred, (int(p[0]), int(p[1])), size, (0, 0, 255), -1)
+ 
+         name = targets[idx]['image_id']
+         # save the visualized images
+         if des is not None:
+             cv2.imwrite(os.path.join(vis_dir, '{}_{}_gt_{}_pred_{}_gt.jpg'.format(int(name),
+                                      des, len(gts[idx]), len(pred[idx]))), sample_gt)
+             cv2.imwrite(os.path.join(vis_dir, '{}_{}_gt_{}_pred_{}_pred.jpg'.format(int(name),
+                                      des, len(gts[idx]), len(pred[idx]))), sample_pred)
+         else:
+             cv2.imwrite(
+                 os.path.join(vis_dir, '{}_gt_{}_pred_{}_gt.jpg'.format(int(name), len(gts[idx]), len(pred[idx]))),
+                 sample_gt)
+             cv2.imwrite(
+                 os.path.join(vis_dir, '{}_gt_{}_pred_{}_pred.jpg'.format(int(name), len(gts[idx]), len(pred[idx]))),
+                 sample_pred)
+ 
+ 
+ # the training routine
+ def train_one_epoch(model: torch.nn.Module, criterion: torch.nn.Module,
+                     data_loader: Iterable, optimizer: torch.optim.Optimizer,
+                     device: torch.device, epoch: int, max_norm: float = 0):
+     model.train()
+     criterion.train()
+     metric_logger = utils.MetricLogger(delimiter="  ")
+     metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value:.6f}'))
+     # iterate over all training samples
+     for samples, targets in data_loader:
+         samples = samples.to(device)
+         targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
+         # forward
+         outputs = model(samples)
+         # compute the losses
+         loss_dict = criterion(outputs, targets)
+         weight_dict = criterion.weight_dict
+         losses = sum(loss_dict[k] * weight_dict[k] for k in loss_dict.keys() if k in weight_dict)
+ 
+         # reduce all losses across processes
+         loss_dict_reduced = utils.reduce_dict(loss_dict)
+         loss_dict_reduced_unscaled = {f'{k}_unscaled': v
+                                       for k, v in loss_dict_reduced.items()}
+         loss_dict_reduced_scaled = {k: v * weight_dict[k]
+                                     for k, v in loss_dict_reduced.items() if k in weight_dict}
+         losses_reduced_scaled = sum(loss_dict_reduced_scaled.values())
+ 
+         loss_value = losses_reduced_scaled.item()
+ 
+         if not math.isfinite(loss_value):
+             print("Loss is {}, stopping training".format(loss_value))
+             print(loss_dict_reduced)
+             sys.exit(1)
+         # backward
+         optimizer.zero_grad()
+         losses.backward()
+         if max_norm > 0:
+             torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
+         optimizer.step()
+         # update the logger
+         metric_logger.update(loss=loss_value, **loss_dict_reduced_scaled, **loss_dict_reduced_unscaled)
+         metric_logger.update(lr=optimizer.param_groups[0]["lr"])
+     # gather the stats from all processes
+     metric_logger.synchronize_between_processes()
+     print("Averaged stats:", metric_logger)
+     return {k: meter.global_avg for k, meter in metric_logger.meters.items()}
+ 
+ 
+ # the inference routine
+ @torch.no_grad()
+ def evaluate_crowd_no_overlap(model, data_loader, device, vis_dir=None):
+     model.eval()
+ 
+     metric_logger = utils.MetricLogger(delimiter="  ")
+     metric_logger.add_meter('class_error', utils.SmoothedValue(window_size=1, fmt='{value:.2f}'))
+     # run inference on all images to compute the MAE
+     maes = []
+     mses = []
+     for samples, targets in data_loader:
+         samples = samples.to(device)
+ 
+         outputs = model(samples)
+         outputs_scores = torch.nn.functional.softmax(outputs['pred_logits'], -1)[:, :, 1][0]
+ 
+         outputs_points = outputs['pred_points'][0]
+ 
+         gt_cnt = targets[0]['point'].shape[0]
+         # 0.5 is used as the confidence threshold by default
+         threshold = 0.5
+ 
+         points = outputs_points[outputs_scores > threshold].detach().cpu().numpy().tolist()
+         predict_cnt = int((outputs_scores > threshold).sum())
+         # if specified, save the visualized images
+         if vis_dir is not None:
+             vis(samples, targets, [points], vis_dir)
+         # accumulate MAE, MSE
+         mae = abs(predict_cnt - gt_cnt)
+         mse = (predict_cnt - gt_cnt) * (predict_cnt - gt_cnt)
+         maes.append(float(mae))
+         mses.append(float(mse))
+     # compute MAE and (root) MSE over the whole set
+     mae = np.mean(maes)
+     mse = np.sqrt(np.mean(mses))
+ 
+     return mae, mse
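
For reference, the two metrics returned above reduce to a few lines of NumPy. A minimal sketch with made-up per-image counts (note that the value reported as "mse" is actually the root of the mean squared error):

import numpy as np

pred_counts = np.array([102.0, 87.0, 250.0])  # hypothetical predicted counts
gt_counts = np.array([98.0, 90.0, 240.0])     # hypothetical ground-truth counts

mae = np.mean(np.abs(pred_counts - gt_counts))           # mean absolute error
rmse = np.sqrt(np.mean((pred_counts - gt_counts) ** 2))  # what evaluate_crowd_no_overlap returns as "mse"
print(mae, rmse)  # approx. 5.67 and 6.45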
crowd_counter/models/__init__.py ADDED
@@ -0,0 +1,6 @@
+ from .p2pnet import build
+ 
+ # build the P2PNet model
+ # set training to 'True' during training
+ def build_model(args, training=False):
+     return build(args, training)
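
As a usage sketch (not part of the commit), the builder can be driven with a plain namespace; only the fields that build() actually reads for inference are set here, and the local backbone weights are assumed to be present:

from types import SimpleNamespace
from crowd_counter.models import build_model

args = SimpleNamespace(backbone='vgg16_bn', row=2, line=2)
model = build_model(args)  # inference mode: returns only the model
# with training=True, build() additionally needs the matcher/loss fields:
# set_cost_class, set_cost_point, point_loss_coef, eos_coef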
crowd_counter/models/backbone.py ADDED
@@ -0,0 +1,68 @@
+ # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
+ """
+ Backbone modules.
+ """
+ from collections import OrderedDict
+ 
+ import torch
+ import torch.nn.functional as F
+ import torchvision
+ from torch import nn
+ 
+ import crowd_counter.models.vgg_ as models
+ 
+ 
+ class BackboneBase_VGG(nn.Module):
+     def __init__(self, backbone: nn.Module, num_channels: int, name: str, return_interm_layers: bool):
+         super().__init__()
+         features = list(backbone.features.children())
+         if return_interm_layers:
+             if name == 'vgg16_bn':
+                 self.body1 = nn.Sequential(*features[:13])
+                 self.body2 = nn.Sequential(*features[13:23])
+                 self.body3 = nn.Sequential(*features[23:33])
+                 self.body4 = nn.Sequential(*features[33:43])
+             else:
+                 self.body1 = nn.Sequential(*features[:9])
+                 self.body2 = nn.Sequential(*features[9:16])
+                 self.body3 = nn.Sequential(*features[16:23])
+                 self.body4 = nn.Sequential(*features[23:30])
+         else:
+             if name == 'vgg16_bn':
+                 self.body = nn.Sequential(*features[:44])  # 16x down-sample
+             elif name == 'vgg16':
+                 self.body = nn.Sequential(*features[:30])  # 16x down-sample
+         self.num_channels = num_channels
+         self.return_interm_layers = return_interm_layers
+ 
+     def forward(self, tensor_list):
+         out = []
+ 
+         if self.return_interm_layers:
+             xs = tensor_list
+             for _, layer in enumerate([self.body1, self.body2, self.body3, self.body4]):
+                 xs = layer(xs)
+                 out.append(xs)
+         else:
+             xs = self.body(tensor_list)
+             out.append(xs)
+         return out
+ 
+ 
+ class Backbone_VGG(BackboneBase_VGG):
+     """VGG backbone, optionally returning the intermediate feature maps."""
+     def __init__(self, name: str, return_interm_layers: bool):
+         if name == 'vgg16_bn':
+             backbone = models.vgg16_bn(pretrained=True)
+         elif name == 'vgg16':
+             backbone = models.vgg16(pretrained=True)
+         num_channels = 256
+         super().__init__(backbone, num_channels, name, return_interm_layers)
+ 
+ 
+ def build_backbone(args):
+     backbone = Backbone_VGG(args.backbone, True)
+     return backbone
+ 
+ 
+ if __name__ == '__main__':
+     Backbone_VGG('vgg16', True)
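
A quick shape check clarifies what the backbone hands to the feature pyramid. A sketch with a random input (assumes the local vgg16_bn weight file referenced in vgg_.py exists; the four stages run at strides 2/4/8/16):

import torch
from crowd_counter.models.backbone import Backbone_VGG

backbone = Backbone_VGG('vgg16_bn', return_interm_layers=True)
feats = backbone(torch.randn(1, 3, 256, 256))
for i, f in enumerate(feats, start=1):
    print(f'C{i}:', tuple(f.shape))
# C1: (1, 128, 128, 128)   C2: (1, 256, 64, 64)
# C3: (1, 512, 32, 32)     C4: (1, 512, 16, 16)
# C2..C4 are what P2PNet feeds into Decoder(256, 512, 512)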
crowd_counter/models/matcher.py ADDED
@@ -0,0 +1,83 @@
+ 
+ # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
+ """
+ Mostly copy-paste from DETR (https://github.com/facebookresearch/detr).
+ """
+ import torch
+ from scipy.optimize import linear_sum_assignment
+ from torch import nn
+ 
+ 
+ class HungarianMatcher_Crowd(nn.Module):
+     """This class computes an assignment between the targets and the predictions of the network.
+ 
+     For efficiency reasons, the targets don't include the no_object class. Because of this, in general,
+     there are more predictions than targets. In this case, we do a 1-to-1 matching of the best predictions,
+     while the others are left un-matched (and thus treated as non-objects).
+     """
+ 
+     def __init__(self, cost_class: float = 1, cost_point: float = 1):
+         """Creates the matcher
+ 
+         Params:
+             cost_class: relative weight of the classification term in the matching cost
+             cost_point: relative weight of the L2 error of the point coordinates in the matching cost
+         """
+         super().__init__()
+         self.cost_class = cost_class
+         self.cost_point = cost_point
+         assert cost_class != 0 or cost_point != 0, "all costs can't be 0"
+ 
+     @torch.no_grad()
+     def forward(self, outputs, targets):
+         """ Performs the matching
+ 
+         Params:
+             outputs: a dict that contains at least these entries:
+                 "pred_logits": Tensor of dim [batch_size, num_queries, num_classes] with the classification logits
+                 "pred_points": Tensor of dim [batch_size, num_queries, 2] with the predicted point coordinates
+ 
+             targets: a list of targets (len(targets) = batch_size), where each target is a dict containing:
+                 "labels": Tensor of dim [num_target_points] (where num_target_points is the number of ground-truth
+                           objects in the target) containing the class labels
+                 "point": Tensor of dim [num_target_points, 2] containing the target point coordinates
+ 
+         Returns:
+             A list of size batch_size, containing tuples of (index_i, index_j) where:
+                 - index_i is the indices of the selected predictions (in order)
+                 - index_j is the indices of the corresponding selected targets (in order)
+             For each batch element, it holds:
+                 len(index_i) = len(index_j) = min(num_queries, num_target_points)
+         """
+         bs, num_queries = outputs["pred_logits"].shape[:2]
+ 
+         # We flatten to compute the cost matrices in a batch
+         out_prob = outputs["pred_logits"].flatten(0, 1).softmax(-1)  # [batch_size * num_queries, num_classes]
+         out_points = outputs["pred_points"].flatten(0, 1)  # [batch_size * num_queries, 2]
+ 
+         # Also concat the target labels and points
+         tgt_ids = torch.cat([v["labels"] for v in targets])
+         tgt_points = torch.cat([v["point"] for v in targets])
+ 
+         # Compute the classification cost. Contrary to the loss, we don't use the NLL,
+         # but approximate it by 1 - proba[target class].
+         # The 1 is a constant that doesn't change the matching, so it can be omitted.
+         cost_class = -out_prob[:, tgt_ids]
+ 
+         # Compute the L2 cost between points
+         cost_point = torch.cdist(out_points, tgt_points, p=2)
+ 
+         # Final cost matrix
+         C = self.cost_point * cost_point + self.cost_class * cost_class
+         C = C.view(bs, num_queries, -1).cpu()
+ 
+         sizes = [len(v["point"]) for v in targets]
+         indices = [linear_sum_assignment(c[i]) for i, c in enumerate(C.split(sizes, -1))]
+         return [(torch.as_tensor(i, dtype=torch.int64), torch.as_tensor(j, dtype=torch.int64)) for i, j in indices]
+ 
+ 
+ def build_matcher_crowd(args):
+     return HungarianMatcher_Crowd(cost_class=args.set_cost_class, cost_point=args.set_cost_point)
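
A toy example of the matcher (dummy tensors; one image with five proposals and two annotated heads):

import torch
from crowd_counter.models.matcher import HungarianMatcher_Crowd

matcher = HungarianMatcher_Crowd(cost_class=1, cost_point=0.05)
outputs = {'pred_logits': torch.randn(1, 5, 2),       # [batch, num_queries, num_classes]
           'pred_points': torch.rand(1, 5, 2) * 100}  # predicted coordinates in pixels
targets = [{'labels': torch.ones(2, dtype=torch.int64),
            'point': torch.rand(2, 2) * 100}]
print(matcher(outputs, targets))  # e.g. [(tensor([1, 4]), tensor([0, 1]))]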
crowd_counter/models/p2pnet.py ADDED
@@ -0,0 +1,342 @@
+ import torch
+ import torch.nn.functional as F
+ from torch import nn
+ 
+ from crowd_counter.util.misc import (NestedTensor, nested_tensor_from_tensor_list,
+                                      accuracy, get_world_size, interpolate,
+                                      is_dist_avail_and_initialized)
+ 
+ from .backbone import build_backbone
+ from .matcher import build_matcher_crowd
+ 
+ import numpy as np
+ import time
+ 
+ 
+ # the network framework of the regression branch
+ class RegressionModel(nn.Module):
+     def __init__(self, num_features_in, num_anchor_points=4, feature_size=256):
+         super(RegressionModel, self).__init__()
+ 
+         self.conv1 = nn.Conv2d(num_features_in, feature_size, kernel_size=3, padding=1)
+         self.act1 = nn.ReLU()
+ 
+         self.conv2 = nn.Conv2d(feature_size, feature_size, kernel_size=3, padding=1)
+         self.act2 = nn.ReLU()
+ 
+         # conv3/conv4 are defined but unused in forward; they are kept so that
+         # checkpoints containing these weights still load
+         self.conv3 = nn.Conv2d(feature_size, feature_size, kernel_size=3, padding=1)
+         self.act3 = nn.ReLU()
+ 
+         self.conv4 = nn.Conv2d(feature_size, feature_size, kernel_size=3, padding=1)
+         self.act4 = nn.ReLU()
+ 
+         self.output = nn.Conv2d(feature_size, num_anchor_points * 2, kernel_size=3, padding=1)
+ 
+     # sub-branch forward
+     def forward(self, x):
+         out = self.conv1(x)
+         out = self.act1(out)
+ 
+         out = self.conv2(out)
+         out = self.act2(out)
+ 
+         out = self.output(out)
+ 
+         out = out.permute(0, 2, 3, 1)
+ 
+         return out.contiguous().view(out.shape[0], -1, 2)
+ 
+ 
+ # the network framework of the classification branch
+ class ClassificationModel(nn.Module):
+     def __init__(self, num_features_in, num_anchor_points=4, num_classes=80, prior=0.01, feature_size=256):
+         super(ClassificationModel, self).__init__()
+ 
+         self.num_classes = num_classes
+         self.num_anchor_points = num_anchor_points
+ 
+         self.conv1 = nn.Conv2d(num_features_in, feature_size, kernel_size=3, padding=1)
+         self.act1 = nn.ReLU()
+ 
+         self.conv2 = nn.Conv2d(feature_size, feature_size, kernel_size=3, padding=1)
+         self.act2 = nn.ReLU()
+ 
+         # conv3/conv4 and the sigmoid are likewise defined but unused in forward
+         self.conv3 = nn.Conv2d(feature_size, feature_size, kernel_size=3, padding=1)
+         self.act3 = nn.ReLU()
+ 
+         self.conv4 = nn.Conv2d(feature_size, feature_size, kernel_size=3, padding=1)
+         self.act4 = nn.ReLU()
+ 
+         self.output = nn.Conv2d(feature_size, num_anchor_points * num_classes, kernel_size=3, padding=1)
+         self.output_act = nn.Sigmoid()
+ 
+     # sub-branch forward
+     def forward(self, x):
+         out = self.conv1(x)
+         out = self.act1(out)
+ 
+         out = self.conv2(out)
+         out = self.act2(out)
+ 
+         out = self.output(out)
+ 
+         out1 = out.permute(0, 2, 3, 1)
+ 
+         batch_size, width, height, _ = out1.shape
+ 
+         out2 = out1.view(batch_size, width, height, self.num_anchor_points, self.num_classes)
+ 
+         return out2.contiguous().view(x.shape[0], -1, self.num_classes)
+ 
+ 
+ # generate the reference points in a grid layout
+ def generate_anchor_points(stride=16, row=3, line=3):
+     row_step = stride / row
+     line_step = stride / line
+ 
+     shift_x = (np.arange(1, line + 1) - 0.5) * line_step - stride / 2
+     shift_y = (np.arange(1, row + 1) - 0.5) * row_step - stride / 2
+ 
+     shift_x, shift_y = np.meshgrid(shift_x, shift_y)
+ 
+     anchor_points = np.vstack((
+         shift_x.ravel(), shift_y.ravel()
+     )).transpose()
+ 
+     return anchor_points
+ 
+ 
+ # shift the meta-anchors to get the anchor points of the whole feature map
+ def shift(shape, stride, anchor_points):
+     shift_x = (np.arange(0, shape[1]) + 0.5) * stride
+     shift_y = (np.arange(0, shape[0]) + 0.5) * stride
+ 
+     shift_x, shift_y = np.meshgrid(shift_x, shift_y)
+ 
+     shifts = np.vstack((
+         shift_x.ravel(), shift_y.ravel()
+     )).transpose()
+ 
+     A = anchor_points.shape[0]
+     K = shifts.shape[0]
+     all_anchor_points = (anchor_points.reshape((1, A, 2)) + shifts.reshape((1, K, 2)).transpose((1, 0, 2)))
+     all_anchor_points = all_anchor_points.reshape((K * A, 2))
+ 
+     return all_anchor_points
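
A sanity check of how the two helpers compose, using the demo's defaults (one pyramid level at stride 8, a 2x2 grid of anchors per cell):

import numpy as np

meta = generate_anchor_points(stride=8, row=2, line=2)
print(meta)        # 4 offsets around the cell center: [[-2,-2],[2,-2],[-2,2],[2,2]]
pts = shift((4, 4), 8, meta)
print(pts.shape)   # (64, 2): 4x4 feature cells x 4 anchors each, in image coordinates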
+ 
+ # this class generates all reference points on all pyramid levels
+ class AnchorPoints(nn.Module):
+     def __init__(self, pyramid_levels=None, strides=None, row=3, line=3):
+         super(AnchorPoints, self).__init__()
+ 
+         if pyramid_levels is None:
+             self.pyramid_levels = [3, 4, 5, 6, 7]
+         else:
+             self.pyramid_levels = pyramid_levels
+ 
+         if strides is None:
+             self.strides = [2 ** x for x in self.pyramid_levels]
+         else:
+             # without this branch, passing explicit strides left the attribute unset
+             self.strides = strides
+ 
+         self.row = row
+         self.line = line
+ 
+     def forward(self, image):
+         image_shape = image.shape[2:]
+         image_shape = np.array(image_shape)
+         image_shapes = [(image_shape + 2 ** x - 1) // (2 ** x) for x in self.pyramid_levels]
+ 
+         all_anchor_points = np.zeros((0, 2)).astype(np.float32)
+         # get the reference points for each level
+         for idx, p in enumerate(self.pyramid_levels):
+             anchor_points = generate_anchor_points(2 ** p, row=self.row, line=self.line)
+             shifted_anchor_points = shift(image_shapes[idx], self.strides[idx], anchor_points)
+             all_anchor_points = np.append(all_anchor_points, shifted_anchor_points, axis=0)
+ 
+         all_anchor_points = np.expand_dims(all_anchor_points, axis=0)
+         # send the reference points to the device
+         if torch.cuda.is_available():
+             return torch.from_numpy(all_anchor_points.astype(np.float32)).cuda()
+         else:
+             return torch.from_numpy(all_anchor_points.astype(np.float32))
+ 
+ 
+ class Decoder(nn.Module):
+     def __init__(self, C3_size, C4_size, C5_size, feature_size=256):
+         super(Decoder, self).__init__()
+ 
+         # upsample C5 to get P5 from the FPN paper
+         self.P5_1 = nn.Conv2d(C5_size, feature_size, kernel_size=1, stride=1, padding=0)
+         self.P5_upsampled = nn.Upsample(scale_factor=2, mode='nearest')
+         self.P5_2 = nn.Conv2d(feature_size, feature_size, kernel_size=3, stride=1, padding=1)
+ 
+         # add P5 elementwise to C4
+         self.P4_1 = nn.Conv2d(C4_size, feature_size, kernel_size=1, stride=1, padding=0)
+         self.P4_upsampled = nn.Upsample(scale_factor=2, mode='nearest')
+         self.P4_2 = nn.Conv2d(feature_size, feature_size, kernel_size=3, stride=1, padding=1)
+ 
+         # add P4 elementwise to C3
+         self.P3_1 = nn.Conv2d(C3_size, feature_size, kernel_size=1, stride=1, padding=0)
+         self.P3_upsampled = nn.Upsample(scale_factor=2, mode='nearest')
+         self.P3_2 = nn.Conv2d(feature_size, feature_size, kernel_size=3, stride=1, padding=1)
+ 
+     def forward(self, inputs):
+         C3, C4, C5 = inputs
+ 
+         P5_x = self.P5_1(C5)
+         P5_upsampled_x = self.P5_upsampled(P5_x)
+         P5_x = self.P5_2(P5_x)
+ 
+         P4_x = self.P4_1(C4)
+         P4_x = P5_upsampled_x + P4_x
+         P4_upsampled_x = self.P4_upsampled(P4_x)
+         P4_x = self.P4_2(P4_x)
+ 
+         P3_x = self.P3_1(C3)
+         P3_x = P3_x + P4_upsampled_x
+         P3_x = self.P3_2(P3_x)
+ 
+         return [P3_x, P4_x, P5_x]
+ 
+ 
+ # the definition of the P2PNet model
+ class P2PNet(nn.Module):
+     def __init__(self, backbone, row=2, line=2):
+         super().__init__()
+         self.backbone = backbone
+         self.num_classes = 2
+         # the number of anchor points per feature-map cell
+         num_anchor_points = row * line
+ 
+         self.regression = RegressionModel(num_features_in=256, num_anchor_points=num_anchor_points)
+         self.classification = ClassificationModel(num_features_in=256,
+                                                   num_classes=self.num_classes,
+                                                   num_anchor_points=num_anchor_points)
+ 
+         self.anchor_points = AnchorPoints(pyramid_levels=[3, ], row=row, line=line)
+ 
+         self.fpn = Decoder(256, 512, 512)
+ 
+     def forward(self, samples: NestedTensor):
+         # get the backbone features
+         features = self.backbone(samples)
+         # forward the feature pyramid
+         features_fpn = self.fpn([features[1], features[2], features[3]])
+ 
+         batch_size = features[0].shape[0]
+         # run the regression and classification branches
+         regression = self.regression(features_fpn[1]) * 100  # 8x
+         classification = self.classification(features_fpn[1])
+         anchor_points = self.anchor_points(samples).repeat(batch_size, 1, 1)
+         # decode the points as predictions
+         output_coord = regression + anchor_points
+         output_class = classification
+         out = {'pred_logits': output_class, 'pred_points': output_coord}
+ 
+         return out
+ 
+ 
+ class SetCriterion_Crowd(nn.Module):
+ 
+     def __init__(self, num_classes, matcher, weight_dict, eos_coef, losses):
+         """ Create the criterion.
+         Parameters:
+             num_classes: number of object categories, omitting the special no-object category
+             matcher: module able to compute a matching between targets and proposals
+             weight_dict: dict containing as key the names of the losses and as values their relative weight
+             eos_coef: relative classification weight applied to the no-object category
+             losses: list of all the losses to be applied. See get_loss for the list of available losses.
+         """
+         super().__init__()
+         self.num_classes = num_classes
+         self.matcher = matcher
+         self.weight_dict = weight_dict
+         self.eos_coef = eos_coef
+         self.losses = losses
+         empty_weight = torch.ones(self.num_classes + 1)
+         empty_weight[0] = self.eos_coef
+         self.register_buffer('empty_weight', empty_weight)
+ 
+     def loss_labels(self, outputs, targets, indices, num_points):
+         """Classification loss (NLL)
+         targets dicts must contain the key "labels" containing a tensor of dim [nb_target_points]
+         """
+         assert 'pred_logits' in outputs
+         src_logits = outputs['pred_logits']
+ 
+         idx = self._get_src_permutation_idx(indices)
+         target_classes_o = torch.cat([t["labels"][J] for t, (_, J) in zip(targets, indices)])
+         target_classes = torch.full(src_logits.shape[:2], 0,
+                                     dtype=torch.int64, device=src_logits.device)
+         target_classes[idx] = target_classes_o
+ 
+         loss_ce = F.cross_entropy(src_logits.transpose(1, 2), target_classes, self.empty_weight)
+         losses = {'loss_ce': loss_ce}
+ 
+         return losses
+ 
+     def loss_points(self, outputs, targets, indices, num_points):
+         assert 'pred_points' in outputs
+         idx = self._get_src_permutation_idx(indices)
+         src_points = outputs['pred_points'][idx]
+         target_points = torch.cat([t['point'][i] for t, (_, i) in zip(targets, indices)], dim=0)
+ 
+         loss_bbox = F.mse_loss(src_points, target_points, reduction='none')
+ 
+         losses = {}
+         losses['loss_point'] = loss_bbox.sum() / num_points
+ 
+         return losses
+ 
+     def _get_src_permutation_idx(self, indices):
+         # permute the predictions following the indices
+         batch_idx = torch.cat([torch.full_like(src, i) for i, (src, _) in enumerate(indices)])
+         src_idx = torch.cat([src for (src, _) in indices])
+         return batch_idx, src_idx
+ 
+     def _get_tgt_permutation_idx(self, indices):
+         # permute the targets following the indices
+         batch_idx = torch.cat([torch.full_like(tgt, i) for i, (_, tgt) in enumerate(indices)])
+         tgt_idx = torch.cat([tgt for (_, tgt) in indices])
+         return batch_idx, tgt_idx
+ 
+     def get_loss(self, loss, outputs, targets, indices, num_points, **kwargs):
+         loss_map = {
+             'labels': self.loss_labels,
+             'points': self.loss_points,
+         }
+         assert loss in loss_map, f'do you really want to compute {loss} loss?'
+         return loss_map[loss](outputs, targets, indices, num_points, **kwargs)
+ 
+     def forward(self, outputs, targets):
+         """ This performs the loss computation.
+         Parameters:
+             outputs: dict of tensors, see the output specification of the model for the format
+             targets: list of dicts, such that len(targets) == batch_size.
+                      The expected keys in each dict depend on the losses applied, see each loss' doc
+         """
+         output1 = {'pred_logits': outputs['pred_logits'], 'pred_points': outputs['pred_points']}
+ 
+         indices1 = self.matcher(output1, targets)
+ 
+         num_points = sum(len(t["labels"]) for t in targets)
+         num_points = torch.as_tensor([num_points], dtype=torch.float, device=next(iter(output1.values())).device)
+         if is_dist_avail_and_initialized():
+             torch.distributed.all_reduce(num_points)
+         num_boxes = torch.clamp(num_points / get_world_size(), min=1).item()
+ 
+         losses = {}
+         for loss in self.losses:
+             losses.update(self.get_loss(loss, output1, targets, indices1, num_boxes))
+ 
+         return losses
+ 
+ 
+ # create the P2PNet model
+ def build(args, training):
+     # treat persons as a single class
+     num_classes = 1
+ 
+     backbone = build_backbone(args)
+     model = P2PNet(backbone, args.row, args.line)
+     if not training:
+         return model
+ 
+     # the key must match the 'loss_point' entry produced by SetCriterion_Crowd;
+     # with the original 'loss_points' key the point loss was silently dropped
+     # from the weighted sum in train_one_epoch
+     weight_dict = {'loss_ce': 1, 'loss_point': args.point_loss_coef}
+     losses = ['labels', 'points']
+     matcher = build_matcher_crowd(args)
+     criterion = SetCriterion_Crowd(num_classes,
+                                    matcher=matcher, weight_dict=weight_dict,
+                                    eos_coef=args.eos_coef, losses=losses)
+ 
+     return model, criterion
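
Putting the pieces together, a shape-level sketch of an inference pass with untrained weights (on a CUDA machine move the model to the GPU first, since AnchorPoints creates its tensor on the GPU when one is available; input sides should be multiples of 128 to match the padding convention used elsewhere in the repo):

import torch
from crowd_counter.models.backbone import Backbone_VGG
from crowd_counter.models.p2pnet import P2PNet

model = P2PNet(Backbone_VGG('vgg16_bn', True), row=2, line=2).eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 128, 128))
# one level at stride 8 with 2x2 anchors per cell: (128/8)**2 * 4 = 1024 proposals
print(out['pred_points'].shape)  # torch.Size([1, 1024, 2])
print(out['pred_logits'].shape)  # torch.Size([1, 1024, 2])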
crowd_counter/models/vgg_.py ADDED
@@ -0,0 +1,196 @@
+ # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
+ """
+ Mostly copy-paste from torchvision references.
+ """
+ import torch
+ import torch.nn as nn
+ 
+ 
+ __all__ = [
+     'VGG', 'vgg11', 'vgg11_bn', 'vgg13', 'vgg13_bn', 'vgg16', 'vgg16_bn',
+     'vgg19_bn', 'vgg19',
+ ]
+ 
+ 
+ model_urls = {
+     'vgg11': 'https://download.pytorch.org/models/vgg11-bbd30ac9.pth',
+     'vgg13': 'https://download.pytorch.org/models/vgg13-c768596a.pth',
+     'vgg16': 'https://download.pytorch.org/models/vgg16-397923af.pth',
+     'vgg19': 'https://download.pytorch.org/models/vgg19-dcbb9e9d.pth',
+     'vgg11_bn': 'https://download.pytorch.org/models/vgg11_bn-6002323d.pth',
+     'vgg13_bn': 'https://download.pytorch.org/models/vgg13_bn-abd245e5.pth',
+     'vgg16_bn': 'https://download.pytorch.org/models/vgg16_bn-6c64b313.pth',
+     'vgg19_bn': 'https://download.pytorch.org/models/vgg19_bn-c79401a0.pth',
+ }
+ 
+ 
+ # local checkpoint paths; note that only these two architectures can be built
+ # with pretrained=True here, and the files must already exist on disk
+ model_paths = {
+     'vgg16_bn': 'crowd_counter/weights/vgg16_bn-6c64b313.pth',
+     'vgg16': 'crowd_counter/weights/vgg16-397923af.pth',
+ }
+ 
+ 
+ class VGG(nn.Module):
+ 
+     def __init__(self, features, num_classes=1000, init_weights=True):
+         super(VGG, self).__init__()
+         self.features = features
+         self.avgpool = nn.AdaptiveAvgPool2d((7, 7))
+         self.classifier = nn.Sequential(
+             nn.Linear(512 * 7 * 7, 4096),
+             nn.ReLU(True),
+             nn.Dropout(),
+             nn.Linear(4096, 4096),
+             nn.ReLU(True),
+             nn.Dropout(),
+             nn.Linear(4096, num_classes),
+         )
+         if init_weights:
+             self._initialize_weights()
+ 
+     def forward(self, x):
+         x = self.features(x)
+         x = self.avgpool(x)
+         x = torch.flatten(x, 1)
+         x = self.classifier(x)
+         return x
+ 
+     def _initialize_weights(self):
+         for m in self.modules():
+             if isinstance(m, nn.Conv2d):
+                 nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
+                 if m.bias is not None:
+                     nn.init.constant_(m.bias, 0)
+             elif isinstance(m, nn.BatchNorm2d):
+                 nn.init.constant_(m.weight, 1)
+                 nn.init.constant_(m.bias, 0)
+             elif isinstance(m, nn.Linear):
+                 nn.init.normal_(m.weight, 0, 0.01)
+                 nn.init.constant_(m.bias, 0)
+ 
+ 
+ def make_layers(cfg, batch_norm=False, sync=False):
+     layers = []
+     in_channels = 3
+     for v in cfg:
+         if v == 'M':
+             layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
+         else:
+             conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
+             if batch_norm:
+                 if sync:
+                     print('use sync backbone')
+                     layers += [conv2d, nn.SyncBatchNorm(v), nn.ReLU(inplace=True)]
+                 else:
+                     layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
+             else:
+                 layers += [conv2d, nn.ReLU(inplace=True)]
+             in_channels = v
+     return nn.Sequential(*layers)
+ 
+ 
+ cfgs = {
+     'A': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
+     'B': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
+     'D': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
+     'E': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
+ }
+ 
+ 
+ def _vgg(arch, cfg, batch_norm, pretrained, progress, sync=False, **kwargs):
+     if pretrained:
+         kwargs['init_weights'] = False
+     model = VGG(make_layers(cfgs[cfg], batch_norm=batch_norm, sync=sync), **kwargs)
+     if pretrained:
+         # weights are loaded from the local model_paths above, not downloaded
+         state_dict = torch.load(model_paths[arch])
+         model.load_state_dict(state_dict)
+     return model
+ 
+ 
+ def vgg11(pretrained=False, progress=True, **kwargs):
+     r"""VGG 11-layer model (configuration "A") from
+     `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
+ 
+     Args:
+         pretrained (bool): If True, returns a model pre-trained on ImageNet
+         progress (bool): If True, displays a progress bar of the download to stderr
+     """
+     return _vgg('vgg11', 'A', False, pretrained, progress, **kwargs)
+ 
+ 
+ def vgg11_bn(pretrained=False, progress=True, **kwargs):
+     r"""VGG 11-layer model (configuration "A") with batch normalization
+     `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
+ 
+     Args:
+         pretrained (bool): If True, returns a model pre-trained on ImageNet
+         progress (bool): If True, displays a progress bar of the download to stderr
+     """
+     return _vgg('vgg11_bn', 'A', True, pretrained, progress, **kwargs)
+ 
+ 
+ def vgg13(pretrained=False, progress=True, **kwargs):
+     r"""VGG 13-layer model (configuration "B")
+     `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
+ 
+     Args:
+         pretrained (bool): If True, returns a model pre-trained on ImageNet
+         progress (bool): If True, displays a progress bar of the download to stderr
+     """
+     return _vgg('vgg13', 'B', False, pretrained, progress, **kwargs)
+ 
+ 
+ def vgg13_bn(pretrained=False, progress=True, **kwargs):
+     r"""VGG 13-layer model (configuration "B") with batch normalization
+     `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
+ 
+     Args:
+         pretrained (bool): If True, returns a model pre-trained on ImageNet
+         progress (bool): If True, displays a progress bar of the download to stderr
+     """
+     return _vgg('vgg13_bn', 'B', True, pretrained, progress, **kwargs)
+ 
+ 
+ def vgg16(pretrained=False, progress=True, **kwargs):
+     r"""VGG 16-layer model (configuration "D")
+     `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
+ 
+     Args:
+         pretrained (bool): If True, returns a model pre-trained on ImageNet
+         progress (bool): If True, displays a progress bar of the download to stderr
+     """
+     return _vgg('vgg16', 'D', False, pretrained, progress, **kwargs)
+ 
+ 
+ def vgg16_bn(pretrained=False, progress=True, sync=False, **kwargs):
+     r"""VGG 16-layer model (configuration "D") with batch normalization
+     `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
+ 
+     Args:
+         pretrained (bool): If True, returns a model pre-trained on ImageNet
+         progress (bool): If True, displays a progress bar of the download to stderr
+     """
+     return _vgg('vgg16_bn', 'D', True, pretrained, progress, sync=sync, **kwargs)
+ 
+ 
+ def vgg19(pretrained=False, progress=True, **kwargs):
+     r"""VGG 19-layer model (configuration "E")
+     `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
+ 
+     Args:
+         pretrained (bool): If True, returns a model pre-trained on ImageNet
+         progress (bool): If True, displays a progress bar of the download to stderr
+     """
+     return _vgg('vgg19', 'E', False, pretrained, progress, **kwargs)
+ 
+ 
+ def vgg19_bn(pretrained=False, progress=True, **kwargs):
+     r"""VGG 19-layer model (configuration 'E') with batch normalization
+     `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
+ 
+     Args:
+         pretrained (bool): If True, returns a model pre-trained on ImageNet
+         progress (bool): If True, displays a progress bar of the download to stderr
+     """
+     return _vgg('vgg19_bn', 'E', True, pretrained, progress, **kwargs)
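
Since _vgg() loads weights from the local model_paths rather than calling torchvision's downloader, the checkpoint has to be fetched once, for example like this (URL taken from model_urls above):

import os
import urllib.request

# download the torchvision vgg16_bn checkpoint to the path _vgg() expects
dst = 'crowd_counter/weights/vgg16_bn-6c64b313.pth'
os.makedirs(os.path.dirname(dst), exist_ok=True)
if not os.path.exists(dst):
    urllib.request.urlretrieve(
        'https://download.pytorch.org/models/vgg16_bn-6c64b313.pth', dst)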
crowd_counter/run_test.py ADDED
@@ -0,0 +1,104 @@
+ import argparse
+ import datetime
+ import random
+ import time
+ from pathlib import Path
+ 
+ import torch
+ import torchvision.transforms as standard_transforms
+ import numpy as np
+ 
+ from PIL import Image
+ import cv2
+ from crowd_datasets import build_dataset
+ from engine import *
+ from models import build_model
+ import os
+ import warnings
+ warnings.filterwarnings('ignore')
+ 
+ def get_args_parser():
+     parser = argparse.ArgumentParser('Set parameters for P2PNet evaluation', add_help=False)
+ 
+     # * Backbone
+     parser.add_argument('--backbone', default='vgg16_bn', type=str,
+                         help="name of the convolutional backbone to use")
+ 
+     parser.add_argument('--row', default=2, type=int,
+                         help="row number of anchor points")
+     parser.add_argument('--line', default=2, type=int,
+                         help="line number of anchor points")
+ 
+     parser.add_argument('--output_dir', default='',
+                         help='path where to save the outputs')
+     parser.add_argument('--weight_path', default='',
+                         help='path where the trained weights are saved')
+     # added so the standalone entry point below can supply the required image
+     # path; the demo calls main() directly with an explicit path
+     parser.add_argument('--image_path', default='',
+                         help='path of the image to run inference on')
+ 
+     parser.add_argument('--gpu_id', default=0, type=int, help='the gpu used for evaluation')
+ 
+     return parser
+ 
+ def main(args, img_path, debug=False):
+     os.environ["CUDA_VISIBLE_DEVICES"] = '{}'.format(args.gpu_id)
+     print(img_path)
+     device = torch.device('cuda')
+     # get the P2PNet
+     model = build_model(args)
+     # move it to the GPU
+     model.to(device)
+     # load the trained weights
+     if args.weight_path:
+         checkpoint = torch.load(args.weight_path, map_location='cpu')
+         model.load_state_dict(checkpoint['model'])
+     # switch to eval mode
+     model.eval()
+     # create the pre-processing transform
+     transform = standard_transforms.Compose([
+         standard_transforms.ToTensor(),
+         standard_transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+     ])
+ 
+     clean_img_path = img_path.split('/')[-1]
+ 
+     # load the image
+     img_raw = Image.open(img_path).convert('RGB')
+     # round the size down to a multiple of 128
+     width, height = img_raw.size
+     new_width = width // 128 * 128
+     new_height = height // 128 * 128
+     img_raw = img_raw.resize((new_width, new_height), Image.LANCZOS)  # Image.ANTIALIAS was removed in Pillow 10
+     # pre-processing
+     img = transform(img_raw)
+ 
+     samples = torch.Tensor(img).unsqueeze(0)
+     samples = samples.to(device)
+     # run inference
+     outputs = model(samples)
+     outputs_scores = torch.nn.functional.softmax(outputs['pred_logits'], -1)[:, :, 1][0]
+ 
+     outputs_points = outputs['pred_points'][0]
+ 
+     threshold = 0.5
+     # filter the predictions
+     points = outputs_points[outputs_scores > threshold].detach().cpu().numpy().tolist()
+     predict_cnt = int((outputs_scores > threshold).sum())
+ 
+     # draw the predictions and dump the point coordinates; the text file gets a
+     # '.txt' suffix so it no longer collides with the visualized image below
+     size = 5
+     img_to_draw = cv2.cvtColor(np.array(img_raw), cv2.COLOR_RGB2BGR)
+     with open(os.path.join(args.output_dir, clean_img_path + '.txt'), 'w') as output_file:
+         for p in points:
+             img_to_draw = cv2.circle(img_to_draw, (int(p[0]), int(p[1])), size, (0, 0, 255), -1)
+             output_file.write(str(p[0]) + " ")
+             output_file.write(str(p[1]) + "\n")
+     # save the visualized image
+     cv2.imwrite(os.path.join(args.output_dir, clean_img_path), img_to_draw)
+ 
+ if __name__ == '__main__':
+     parser = argparse.ArgumentParser('P2PNet evaluation script', parents=[get_args_parser()])
+     args = parser.parse_args()
+     main(args, args.image_path)
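
The routine can also be driven programmatically instead of via the CLI. A sketch with placeholder paths, assuming it runs inside run_test.py's namespace (a CUDA device is required, as in the script itself):

from argparse import Namespace

args = Namespace(backbone='vgg16_bn', row=2, line=2,
                 output_dir='./vis',                   # hypothetical output folder
                 weight_path='./weights/SHTechA.pth',  # hypothetical checkpoint
                 gpu_id=0)
main(args, 'images/img-1.jpg')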
crowd_counter/train.py ADDED
@@ -0,0 +1,222 @@
+ import argparse
+ import datetime
+ import random
+ import time
+ from pathlib import Path
+ 
+ import torch
+ from torch.utils.data import DataLoader, DistributedSampler
+ 
+ from crowd_datasets import build_dataset
+ from engine import *
+ from models import build_model
+ import os
+ from tensorboardX import SummaryWriter
+ import warnings
+ warnings.filterwarnings('ignore')
+ 
+ def get_args_parser():
+     parser = argparse.ArgumentParser('Set parameters for training P2PNet', add_help=False)
+     parser.add_argument('--lr', default=1e-4, type=float)
+     parser.add_argument('--lr_backbone', default=1e-5, type=float)
+     parser.add_argument('--batch_size', default=8, type=int)
+     parser.add_argument('--weight_decay', default=1e-4, type=float)
+     parser.add_argument('--epochs', default=3500, type=int)
+     parser.add_argument('--lr_drop', default=3500, type=int)
+     parser.add_argument('--clip_max_norm', default=0.1, type=float,
+                         help='gradient clipping max norm')
+ 
+     # Model parameters
+     parser.add_argument('--frozen_weights', type=str, default=None,
+                         help="Path to the pretrained model. If set, only the mask head will be trained")
+ 
+     # * Backbone
+     parser.add_argument('--backbone', default='vgg16_bn', type=str,
+                         help="Name of the convolutional backbone to use")
+ 
+     # * Matcher
+     parser.add_argument('--set_cost_class', default=1, type=float,
+                         help="Class coefficient in the matching cost")
+     parser.add_argument('--set_cost_point', default=0.05, type=float,
+                         help="L1 point coefficient in the matching cost")
+ 
+     # * Loss coefficients
+     parser.add_argument('--point_loss_coef', default=0.0002, type=float)
+     parser.add_argument('--eos_coef', default=0.5, type=float,
+                         help="Relative classification weight of the no-object class")
+     parser.add_argument('--row', default=2, type=int,
+                         help="row number of anchor points")
+     parser.add_argument('--line', default=2, type=int,
+                         help="line number of anchor points")
+ 
+     # dataset parameters
+     parser.add_argument('--dataset_file', default='SHHA')
+     parser.add_argument('--data_root', default='./new_public_density_data',
+                         help='path where the dataset is')
+ 
+     parser.add_argument('--output_dir', default='./log',
+                         help='path where to save the logs, empty for no saving')
+     parser.add_argument('--checkpoints_dir', default='./ckpt',
+                         help='path where to save checkpoints, empty for no saving')
+     parser.add_argument('--tensorboard_dir', default='./runs',
+                         help='path where to save tensorboard events, empty for no saving')
+ 
+     parser.add_argument('--seed', default=42, type=int)
+     parser.add_argument('--resume', default='', help='resume from checkpoint')
+     parser.add_argument('--start_epoch', default=0, type=int, metavar='N',
+                         help='start epoch')
+     parser.add_argument('--eval', action='store_true')
+     parser.add_argument('--num_workers', default=8, type=int)
+     parser.add_argument('--eval_freq', default=5, type=int,
+                         help='frequency of evaluation; by default evaluate every 5 epochs')
+     parser.add_argument('--gpu_id', default=0, type=int, help='the gpu used for training')
+ 
+     return parser
+ 
+ def main(args):
+     os.environ["CUDA_VISIBLE_DEVICES"] = '{}'.format(args.gpu_id)
+     # create the logging file
+     run_log_name = os.path.join(args.output_dir, 'run_log.txt')
+     with open(run_log_name, "w") as log_file:
+         log_file.write('Eval Log %s\n' % time.strftime("%c"))
+ 
+     if args.frozen_weights is not None:
+         assert args.masks, "Frozen training is meant for segmentation only"
+     # back up the arguments
+     print(args)
+     with open(run_log_name, "a") as log_file:
+         log_file.write("{}".format(args))
+     device = torch.device('cuda')
+     # fix the seed for reproducibility
+     seed = args.seed + utils.get_rank()
+     torch.manual_seed(seed)
+     np.random.seed(seed)
+     random.seed(seed)
+     # get the P2PNet model
+     model, criterion = build_model(args, training=True)
+     # move to GPU
+     model.to(device)
+     criterion.to(device)
+ 
+     model_without_ddp = model
+ 
+     n_parameters = sum(p.numel() for p in model.parameters() if p.requires_grad)
+     print('number of params:', n_parameters)
+     # use different optimization params for different parts of the model
+     param_dicts = [
+         {"params": [p for n, p in model_without_ddp.named_parameters() if "backbone" not in n and p.requires_grad]},
+         {
+             "params": [p for n, p in model_without_ddp.named_parameters() if "backbone" in n and p.requires_grad],
+             "lr": args.lr_backbone,
+         },
+     ]
+     # Adam is used by default
+     optimizer = torch.optim.Adam(param_dicts, lr=args.lr)
+     lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, args.lr_drop)
+     # create the dataset
+     loading_data = build_dataset(args=args)
+     # create the training and validation sets
+     train_set, val_set = loading_data(args.data_root)
+     # create the samplers used during training
+     sampler_train = torch.utils.data.RandomSampler(train_set)
+     sampler_val = torch.utils.data.SequentialSampler(val_set)
+ 
+     batch_sampler_train = torch.utils.data.BatchSampler(
+         sampler_train, args.batch_size, drop_last=True)
+     # the dataloader for training
+     data_loader_train = DataLoader(train_set, batch_sampler=batch_sampler_train,
+                                    collate_fn=utils.collate_fn_crowd, num_workers=args.num_workers)
+ 
+     data_loader_val = DataLoader(val_set, 1, sampler=sampler_val,
+                                  drop_last=False, collate_fn=utils.collate_fn_crowd, num_workers=args.num_workers)
+ 
+     if args.frozen_weights is not None:
+         checkpoint = torch.load(args.frozen_weights, map_location='cpu')
+         model_without_ddp.detr.load_state_dict(checkpoint['model'])
+     # resume the weights and training state if a checkpoint exists
+     if args.resume:
+         checkpoint = torch.load(args.resume, map_location='cpu')
+         model_without_ddp.load_state_dict(checkpoint['model'])
+         if not args.eval and 'optimizer' in checkpoint and 'lr_scheduler' in checkpoint and 'epoch' in checkpoint:
+             optimizer.load_state_dict(checkpoint['optimizer'])
+             lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])
+             args.start_epoch = checkpoint['epoch'] + 1
+ 
+     print("Start training")
+     start_time = time.time()
+     # track the performance during training
+     mae = []
+     mse = []
+     # the logger writer
+     writer = SummaryWriter(args.tensorboard_dir)
+ 
+     step = 0
+     # training starts here
+     for epoch in range(args.start_epoch, args.epochs):
+         t1 = time.time()
+         stat = train_one_epoch(
+             model, criterion, data_loader_train, optimizer, device, epoch,
+             args.clip_max_norm)
+ 
+         # record the training state after every epoch
+         if writer is not None:
+             with open(run_log_name, "a") as log_file:
+                 log_file.write("loss/loss@{}: {}".format(epoch, stat['loss']))
+                 log_file.write("loss/loss_ce@{}: {}".format(epoch, stat['loss_ce']))
+             writer.add_scalar('loss/loss', stat['loss'], epoch)
+             writer.add_scalar('loss/loss_ce', stat['loss_ce'], epoch)
+ 
+         t2 = time.time()
+         print('[ep %d][lr %.7f][%.2fs]' % \
+               (epoch, optimizer.param_groups[0]['lr'], t2 - t1))
+         with open(run_log_name, "a") as log_file:
+             log_file.write('[ep %d][lr %.7f][%.2fs]' % (epoch, optimizer.param_groups[0]['lr'], t2 - t1))
+         # change the lr according to the scheduler
+         lr_scheduler.step()
+         # save the latest weights every epoch
+         checkpoint_latest_path = os.path.join(args.checkpoints_dir, 'latest.pth')
+         torch.save({
+             'model': model_without_ddp.state_dict(),
+         }, checkpoint_latest_path)
+         # run evaluation
+         if epoch % args.eval_freq == 0 and epoch != 0:
+             t1 = time.time()
+             result = evaluate_crowd_no_overlap(model, data_loader_val, device)
+             t2 = time.time()
+ 
+             mae.append(result[0])
+             mse.append(result[1])
+             # print the evaluation results
+             print('=======================================test=======================================')
+             print("mae:", result[0], "mse:", result[1], "time:", t2 - t1, "best mae:", np.min(mae), )
+             with open(run_log_name, "a") as log_file:
+                 log_file.write("mae:{}, mse:{}, time:{}, best mae:{}".format(result[0],
+                                result[1], t2 - t1, np.min(mae)))
+             print('=======================================test=======================================')
+             # record the evaluation results
+             if writer is not None:
+                 with open(run_log_name, "a") as log_file:
+                     log_file.write("metric/mae@{}: {}".format(step, result[0]))
+                     log_file.write("metric/mse@{}: {}".format(step, result[1]))
+                 writer.add_scalar('metric/mae', result[0], step)
+                 writer.add_scalar('metric/mse', result[1], step)
+                 step += 1
+ 
+             # save the best model since the beginning of training
+             if abs(np.min(mae) - result[0]) < 0.01:
+                 checkpoint_best_path = os.path.join(args.checkpoints_dir, 'best_mae.pth')
+                 torch.save({
+                     'model': model_without_ddp.state_dict(),
+                 }, checkpoint_best_path)
+     # total training time
+     total_time = time.time() - start_time
+     total_time_str = str(datetime.timedelta(seconds=int(total_time)))
+     print('Training time {}'.format(total_time_str))
+ 
+ if __name__ == '__main__':
+     parser = argparse.ArgumentParser('P2PNet training and evaluation script', parents=[get_args_parser()])
+     args = parser.parse_args()
+     main(args)
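
For an in-process launch, the parser above can be reused to assemble the config. A sketch assuming it runs inside train.py's namespace, with the SHHA data laid out under the default root:

import argparse

parser = argparse.ArgumentParser('P2PNet training', parents=[get_args_parser()])
args = parser.parse_args(['--data_root', './new_public_density_data',
                          '--epochs', '3500', '--gpu_id', '0'])
main(args)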
crowd_counter/util/__init__.py ADDED
File without changes
crowd_counter/util/misc.py ADDED
@@ -0,0 +1,517 @@
+ # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
+ """
+ Misc functions, including distributed helpers.
+ 
+ Mostly copy-paste from torchvision references.
+ """
+ import os
+ import subprocess
+ import time
+ from collections import defaultdict, deque
+ import datetime
+ import pickle
+ from typing import Optional, List
+ 
+ import torch
+ import torch.distributed as dist
+ from torch import Tensor
+ 
+ import torch.nn as nn
+ import torch.nn.functional as F
+ from torch.autograd import Variable
+ 
+ # needed due to the empty tensor bug in pytorch and torchvision 0.5
+ import torchvision
+ if float(torchvision.__version__[:3]) < 0.7:
+     # restore the helpers that interpolate() below relies on for old torchvision
+     from torchvision.ops import _new_empty_tensor
+     from torchvision.ops.misc import _output_size
+ 
+ 
+ class SmoothedValue(object):
+     """Track a series of values and provide access to smoothed values over a
+     window or the global series average.
+     """
+ 
+     def __init__(self, window_size=20, fmt=None):
+         if fmt is None:
+             fmt = "{median:.4f} ({global_avg:.4f})"
+         self.deque = deque(maxlen=window_size)
+         self.total = 0.0
+         self.count = 0
+         self.fmt = fmt
+ 
+     def update(self, value, n=1):
+         self.deque.append(value)
+         self.count += n
+         self.total += value * n
+ 
+     def synchronize_between_processes(self):
+         """
+         Warning: does not synchronize the deque!
+         """
+         if not is_dist_avail_and_initialized():
+             return
+         t = torch.tensor([self.count, self.total], dtype=torch.float64, device='cuda')
+         dist.barrier()
+         dist.all_reduce(t)
+         t = t.tolist()
+         self.count = int(t[0])
+         self.total = t[1]
+ 
+     @property
+     def median(self):
+         d = torch.tensor(list(self.deque))
+         return d.median().item()
+ 
+     @property
+     def avg(self):
+         d = torch.tensor(list(self.deque), dtype=torch.float32)
+         return d.mean().item()
+ 
+     @property
+     def global_avg(self):
+         return self.total / self.count
+ 
+     @property
+     def max(self):
+         return max(self.deque)
+ 
+     @property
+     def value(self):
+         return self.deque[-1]
+ 
+     def __str__(self):
+         return self.fmt.format(
+             median=self.median,
+             avg=self.avg,
+             global_avg=self.global_avg,
+             max=self.max,
+             value=self.value)
+ 
+ 
+ def all_gather(data):
+     """
+     Run all_gather on arbitrary picklable data (not necessarily tensors)
+     Args:
+         data: any picklable object
+     Returns:
+         list[data]: list of data gathered from each rank
+     """
+     world_size = get_world_size()
+     if world_size == 1:
+         return [data]
+ 
+     # serialize to a Tensor
+     buffer = pickle.dumps(data)
+     storage = torch.ByteStorage.from_buffer(buffer)
+     tensor = torch.ByteTensor(storage).to("cuda")
+ 
+     # obtain the Tensor size of each rank
+     local_size = torch.tensor([tensor.numel()], device="cuda")
+     size_list = [torch.tensor([0], device="cuda") for _ in range(world_size)]
+     dist.all_gather(size_list, local_size)
+     size_list = [int(size.item()) for size in size_list]
+     max_size = max(size_list)
+ 
+     # receiving Tensors from all ranks
+     # we pad the tensors because torch all_gather does not support
+     # gathering tensors of different shapes
+     tensor_list = []
+     for _ in size_list:
+         tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device="cuda"))
+     if local_size != max_size:
+         padding = torch.empty(size=(max_size - local_size,), dtype=torch.uint8, device="cuda")
+         tensor = torch.cat((tensor, padding), dim=0)
+     dist.all_gather(tensor_list, tensor)
+ 
+     data_list = []
+     for size, tensor in zip(size_list, tensor_list):
+         buffer = tensor.cpu().numpy().tobytes()[:size]
+         data_list.append(pickle.loads(buffer))
+ 
+     return data_list
+ 
+ 
+ def reduce_dict(input_dict, average=True):
+     """
+     Args:
+         input_dict (dict): all the values will be reduced
+         average (bool): whether to do average or sum
+     Reduce the values in the dictionary from all processes so that all processes
+     have the averaged results. Returns a dict with the same fields as
+     input_dict, after reduction.
+     """
+     world_size = get_world_size()
+     if world_size < 2:
+         return input_dict
+     with torch.no_grad():
+         names = []
+         values = []
+         # sort the keys so that they are consistent across processes
+         for k in sorted(input_dict.keys()):
+             names.append(k)
+             values.append(input_dict[k])
+         values = torch.stack(values, dim=0)
+         dist.all_reduce(values)
+         if average:
+             values /= world_size
+         reduced_dict = {k: v for k, v in zip(names, values)}
+     return reduced_dict
+ 
+ 
+ class MetricLogger(object):
+     def __init__(self, delimiter="\t"):
+         self.meters = defaultdict(SmoothedValue)
+         self.delimiter = delimiter
+ 
+     def update(self, **kwargs):
+         for k, v in kwargs.items():
+             if isinstance(v, torch.Tensor):
+                 v = v.item()
+             assert isinstance(v, (float, int))
+             self.meters[k].update(v)
+ 
+     def __getattr__(self, attr):
+         if attr in self.meters:
+             return self.meters[attr]
+         if attr in self.__dict__:
+             return self.__dict__[attr]
+         raise AttributeError("'{}' object has no attribute '{}'".format(
+             type(self).__name__, attr))
+ 
+     def __str__(self):
+         loss_str = []
+         for name, meter in self.meters.items():
+             loss_str.append(
+                 "{}: {}".format(name, str(meter))
+             )
+         return self.delimiter.join(loss_str)
+ 
+     def synchronize_between_processes(self):
+         for meter in self.meters.values():
+             meter.synchronize_between_processes()
+ 
+     def add_meter(self, name, meter):
+         self.meters[name] = meter
+ 
+     def log_every(self, iterable, print_freq, header=None):
+         i = 0
+         if not header:
+             header = ''
+         start_time = time.time()
+         end = time.time()
+         iter_time = SmoothedValue(fmt='{avg:.4f}')
+         data_time = SmoothedValue(fmt='{avg:.4f}')
+         space_fmt = ':' + str(len(str(len(iterable)))) + 'd'
+         if torch.cuda.is_available():
+             log_msg = self.delimiter.join([
+                 header,
+                 '[{0' + space_fmt + '}/{1}]',
+                 'eta: {eta}',
+                 '{meters}',
+                 'time: {time}',
+                 'data: {data}',
+                 'max mem: {memory:.0f}'
+             ])
+         else:
+             log_msg = self.delimiter.join([
+                 header,
+                 '[{0' + space_fmt + '}/{1}]',
+                 'eta: {eta}',
+                 '{meters}',
+                 'time: {time}',
+                 'data: {data}'
+             ])
+         MB = 1024.0 * 1024.0
+         for obj in iterable:
+             data_time.update(time.time() - end)
+             yield obj
+             iter_time.update(time.time() - end)
+             if i % print_freq == 0 or i == len(iterable) - 1:
+                 eta_seconds = iter_time.global_avg * (len(iterable) - i)
+                 eta_string = str(datetime.timedelta(seconds=int(eta_seconds)))
+                 if torch.cuda.is_available():
+                     print(log_msg.format(
+                         i, len(iterable), eta=eta_string,
+                         meters=str(self),
+                         time=str(iter_time), data=str(data_time),
+                         memory=torch.cuda.max_memory_allocated() / MB))
+                 else:
+                     print(log_msg.format(
+                         i, len(iterable), eta=eta_string,
+                         meters=str(self),
+                         time=str(iter_time), data=str(data_time)))
+             i += 1
+             end = time.time()
+         total_time = time.time() - start_time
+         total_time_str = str(datetime.timedelta(seconds=int(total_time)))
+         print('{} Total time: {} ({:.4f} s / it)'.format(
+             header, total_time_str, total_time / len(iterable)))
+ 
+ 
+ def get_sha():
+     cwd = os.path.dirname(os.path.abspath(__file__))
+ 
+     def _run(command):
+         return subprocess.check_output(command, cwd=cwd).decode('ascii').strip()
+     sha = 'N/A'
+     diff = "clean"
+     branch = 'N/A'
+     try:
+         sha = _run(['git', 'rev-parse', 'HEAD'])
+         subprocess.check_output(['git', 'diff'], cwd=cwd)
+         diff = _run(['git', 'diff-index', 'HEAD'])
+         diff = "has uncommitted changes" if diff else "clean"
+         branch = _run(['git', 'rev-parse', '--abbrev-ref', 'HEAD'])
+     except Exception:
+         pass
+     message = f"sha: {sha}, status: {diff}, branch: {branch}"
+     return message
+ 
+ 
+ def collate_fn(batch):
+     batch = list(zip(*batch))
+     batch[0] = nested_tensor_from_tensor_list(batch[0])
+     return tuple(batch)
+ 
+ 
+ def collate_fn_crowd(batch):
+     # re-organize the batch
+     batch_new = []
+     for b in batch:
+         imgs, points = b
+         if imgs.ndim == 3:
+             imgs = imgs.unsqueeze(0)
+         for i in range(len(imgs)):
+             batch_new.append((imgs[i, :, :, :], points[i]))
+     batch = batch_new
+     batch = list(zip(*batch))
+     batch[0] = nested_tensor_from_tensor_list(batch[0])
+     return tuple(batch)
+ 
+ 
+ def _max_by_axis(the_list):
+     # type: (List[List[int]]) -> List[int]
+     maxes = the_list[0]
+     for sublist in the_list[1:]:
+         for index, item in enumerate(sublist):
+             maxes[index] = max(maxes[index], item)
+     return maxes
+ 
+ 
+ def _max_by_axis_pad(the_list):
+     # type: (List[List[int]]) -> List[int]
+     maxes = the_list[0]
+     for sublist in the_list[1:]:
+         for index, item in enumerate(sublist):
+             maxes[index] = max(maxes[index], item)
+ 
+     # pad the spatial dims up to the next multiple of 128
+     block = 128
+     for i in range(2):
+         maxes[i + 1] = ((maxes[i + 1] - 1) // block + 1) * block
+     return maxes
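
The padded collate is easiest to see on a concrete pair of shapes; both spatial dims are rounded up to the next multiple of 128 so the backbone's strides divide them cleanly:

sizes = [[3, 300, 420], [3, 260, 500]]
print(_max_by_axis_pad(sizes))  # [3, 384, 512]: max H=300 -> 384, max W=500 -> 512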
311
+
312
+
313
+ def nested_tensor_from_tensor_list(tensor_list: List[Tensor]):
314
+ # TODO make this more general
315
+ if tensor_list[0].ndim == 3:
316
+
317
+ # TODO make it support different-sized images
318
+ max_size = _max_by_axis_pad([list(img.shape) for img in tensor_list])
319
+ # min_size = tuple(min(s) for s in zip(*[img.shape for img in tensor_list]))
320
+ batch_shape = [len(tensor_list)] + max_size
321
+ b, c, h, w = batch_shape
322
+ dtype = tensor_list[0].dtype
323
+ device = tensor_list[0].device
324
+ tensor = torch.zeros(batch_shape, dtype=dtype, device=device)
325
+ for img, pad_img in zip(tensor_list, tensor):
326
+ pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
327
+ else:
328
+ raise ValueError('not supported')
329
+ return tensor
330
+
331
+ class NestedTensor(object):
332
+ def __init__(self, tensors, mask: Optional[Tensor]):
333
+ self.tensors = tensors
334
+ self.mask = mask
335
+
336
+ def to(self, device):
337
+ # type: (Device) -> NestedTensor # noqa
338
+ cast_tensor = self.tensors.to(device)
339
+ mask = self.mask
340
+ if mask is not None:
341
+ assert mask is not None
342
+ cast_mask = mask.to(device)
343
+ else:
344
+ cast_mask = None
345
+ return NestedTensor(cast_tensor, cast_mask)
346
+
347
+ def decompose(self):
348
+ return self.tensors, self.mask
349
+
350
+ def __repr__(self):
351
+ return str(self.tensors)
352
+
353
+
+ def setup_for_distributed(is_master):
+     """
+     This function disables printing when not in the master process
+     """
+     import builtins as __builtin__
+     builtin_print = __builtin__.print
+
+     def print(*args, **kwargs):
+         force = kwargs.pop('force', False)
+         if is_master or force:
+             builtin_print(*args, **kwargs)
+
+     __builtin__.print = print
+
+
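Once `setup_for_distributed(False)` has run on a worker rank, ordinary prints are swallowed and only `force=True` gets through, e.g.:

setup_for_distributed(is_master=False)
print('per-batch progress')        # suppressed on this rank
print('fatal error', force=True)   # always printed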
+ def is_dist_avail_and_initialized():
+     if not dist.is_available():
+         return False
+     if not dist.is_initialized():
+         return False
+     return True
+
+
+ def get_world_size():
+     if not is_dist_avail_and_initialized():
+         return 1
+     return dist.get_world_size()
+
+
+ def get_rank():
+     if not is_dist_avail_and_initialized():
+         return 0
+     return dist.get_rank()
+
+
+ def is_main_process():
+     return get_rank() == 0
+
+
+ def save_on_master(*args, **kwargs):
+     if is_main_process():
+         torch.save(*args, **kwargs)
+
+
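These helpers fall back to single-process values (world size 1, rank 0) when `torch.distributed` is not initialized, so the same script runs with or without a launcher. A hypothetical checkpointing call (`model` is assumed to exist):

save_on_master({'model': model.state_dict()}, 'checkpoint.pth')  # writes on rank 0 only
print(get_world_size(), get_rank())  # 1 0 when running un-distributed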
+ def init_distributed_mode(args):
+     if 'RANK' in os.environ and 'WORLD_SIZE' in os.environ:
+         args.rank = int(os.environ["RANK"])
+         args.world_size = int(os.environ['WORLD_SIZE'])
+         args.gpu = int(os.environ['LOCAL_RANK'])
+     elif 'SLURM_PROCID' in os.environ:
+         args.rank = int(os.environ['SLURM_PROCID'])
+         args.gpu = args.rank % torch.cuda.device_count()
+     else:
+         print('Not using distributed mode')
+         args.distributed = False
+         return
+
+     args.distributed = True
+
+     torch.cuda.set_device(args.gpu)
+     args.dist_backend = 'nccl'
+     print('| distributed init (rank {}): {}'.format(
+         args.rank, args.dist_url), flush=True)
+     torch.distributed.init_process_group(backend=args.dist_backend, init_method=args.dist_url,
+                                          world_size=args.world_size, rank=args.rank)
+     torch.distributed.barrier()
+     setup_for_distributed(args.rank == 0)
+
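`init_distributed_mode` reads everything from the environment, so a `torchrun` launch (which sets RANK, WORLD_SIZE and LOCAL_RANK) is sufficient. A sketch, assuming `dist_url` is the usual 'env://':

# launched as: torchrun --nproc_per_node=2 train.py
import argparse

args = argparse.Namespace(dist_url='env://')
init_distributed_mode(args)  # sets args.rank, args.gpu, args.distributed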
+
+ @torch.no_grad()
+ def accuracy(output, target, topk=(1,)):
+     """Computes the precision@k for the specified values of k"""
+     if target.numel() == 0:
+         return [torch.zeros([], device=output.device)]
+     maxk = max(topk)
+     batch_size = target.size(0)
+
+     _, pred = output.topk(maxk, 1, True, True)
+     pred = pred.t()
+     correct = pred.eq(target.view(1, -1).expand_as(pred))
+
+     res = []
+     for k in topk:
+         # reshape, not view: a slice of the transposed tensor is not contiguous
+         correct_k = correct[:k].reshape(-1).float().sum(0)
+         res.append(correct_k.mul_(100.0 / batch_size))
+     return res
+
+
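A worked example for `accuracy` with made-up logits; the top-1 prediction matches the target in two of three rows, so precision@1 is about 66.7%:

import torch

logits = torch.tensor([[2.0, 0.1], [0.3, 1.5], [1.2, 0.9]])
target = torch.tensor([0, 1, 1])
print(accuracy(logits, target, topk=(1,)))  # [tensor(66.6667)]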
+ def interpolate(input, size=None, scale_factor=None, mode="nearest", align_corners=None):
+     # type: (Tensor, Optional[List[int]], Optional[float], str, Optional[bool]) -> Tensor
+     """
+     Equivalent to nn.functional.interpolate, but with support for empty batch sizes.
+     This will eventually be supported natively by PyTorch, and this
+     function can go away.
+     """
+     # compare (major, minor) tuples; float(version[:3]) misreads e.g. "0.10.0" as 0.1
+     if tuple(int(v) for v in torchvision.__version__.split('.')[:2]) < (0, 7):
+         if input.numel() > 0:
+             return torch.nn.functional.interpolate(
+                 input, size, scale_factor, mode, align_corners
+             )
+
+         output_shape = _output_size(2, input, size, scale_factor)
+         output_shape = list(input.shape[:-2]) + list(output_shape)
+         return _new_empty_tensor(input, output_shape)
+     else:
+         return torchvision.ops.misc.interpolate(input, size, scale_factor, mode, align_corners)
+
+
+ class FocalLoss(nn.Module):
+     r"""
+     This criterion is an implementation of Focal Loss, which is proposed in
+     Focal Loss for Dense Object Detection.
+
+     Loss(x, class) = - \alpha (1-softmax(x)[class])^gamma \log(softmax(x)[class])
+
+     The losses are averaged across observations for each minibatch.
+
+     Args:
+         alpha(1D Tensor, Variable) : the scalar factor for this criterion
+         gamma(float, double) : gamma > 0; reduces the relative loss for well-classified examples (p > .5),
+                                putting more focus on hard, misclassified examples
+         size_average(bool): By default, the losses are averaged over observations for each minibatch.
+                             However, if the field size_average is set to False, the losses are
+                             instead summed for each minibatch.
+     """
+     def __init__(self, class_num, alpha=None, gamma=2, size_average=True):
+         super(FocalLoss, self).__init__()
+         if alpha is None:
+             self.alpha = Variable(torch.ones(class_num, 1))
+         else:
+             if isinstance(alpha, Variable):
+                 self.alpha = alpha
+             else:
+                 self.alpha = Variable(alpha)
+         self.gamma = gamma
+         self.class_num = class_num
+         self.size_average = size_average
+
+     def forward(self, inputs, targets):
+         N = inputs.size(0)
+         C = inputs.size(1)
+         # pass dim explicitly; the implicit-dim form of softmax is deprecated
+         P = F.softmax(inputs, dim=1)
+
+         # one-hot mask selecting each sample's target class
+         class_mask = inputs.data.new(N, C).fill_(0)
+         class_mask = Variable(class_mask)
+         ids = targets.view(-1, 1)
+         class_mask.scatter_(1, ids.data, 1.)
+
+         if inputs.is_cuda and not self.alpha.is_cuda:
+             self.alpha = self.alpha.cuda()
+         alpha = self.alpha[ids.data.view(-1)]
+
+         probs = (P * class_mask).sum(1).view(-1, 1)
+
+         log_p = probs.log()
+         batch_loss = -alpha * (torch.pow((1 - probs), self.gamma)) * log_p
+
+         if self.size_average:
+             loss = batch_loss.mean()
+         else:
+             loss = batch_loss.sum()
+         return loss
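The `(1 - p)^gamma` factor shrinks the loss on confidently-correct predictions, focusing training on hard examples. A smoke-test sketch with a hypothetical two-class head (background vs. person, as in point-proposal counting):

import torch

criterion = FocalLoss(class_num=2, gamma=2)
logits = torch.randn(8, 2, requires_grad=True)  # raw scores for 8 proposals
targets = torch.randint(0, 2, (8,))             # 0 = background, 1 = person
loss = criterion(logits, targets)               # scalar, mean over the batch
loss.backward()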
crowd_counter/weights/SHTechA.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:506047732b128ff09efef18e94bfacbe35fcfef300e5e9eeeece259b0488c63f
+ size 86372926
images/img-1.jpg ADDED
images/img-2.jpg ADDED
images/img-3.jpg ADDED
requirements.txt ADDED
@@ -0,0 +1,11 @@
+ torch
+ torchvision
+ tensorboardX
+ easydict
+ pandas
+ numpy
+ scipy
+ matplotlib
+ Pillow
+ opencv-python
+ gradio==3.50