Kano001 committed on
Commit
49cd62a
1 Parent(s): c61ccee

Upload 9 files

MLPY/Lib/site-packages/torch-2.3.1.dist-info/INSTALLER ADDED
@@ -0,0 +1 @@
1
+ pip
MLPY/Lib/site-packages/torch-2.3.1.dist-info/LICENSE ADDED
The diff for this file is too large to render. See raw diff
 
MLPY/Lib/site-packages/torch-2.3.1.dist-info/METADATA ADDED
@@ -0,0 +1,520 @@
1
+ Metadata-Version: 2.1
2
+ Name: torch
3
+ Version: 2.3.1
4
+ Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
5
+ Home-page: https://pytorch.org/
6
+ Download-URL: https://github.com/pytorch/pytorch/tags
7
+ Author: PyTorch Team
8
+ Author-email: packages@pytorch.org
9
+ License: BSD-3
10
+ Keywords: pytorch,machine learning
11
+ Classifier: Development Status :: 5 - Production/Stable
12
+ Classifier: Intended Audience :: Developers
13
+ Classifier: Intended Audience :: Education
14
+ Classifier: Intended Audience :: Science/Research
15
+ Classifier: License :: OSI Approved :: BSD License
16
+ Classifier: Topic :: Scientific/Engineering
17
+ Classifier: Topic :: Scientific/Engineering :: Mathematics
18
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
19
+ Classifier: Topic :: Software Development
20
+ Classifier: Topic :: Software Development :: Libraries
21
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
22
+ Classifier: Programming Language :: C++
23
+ Classifier: Programming Language :: Python :: 3
24
+ Classifier: Programming Language :: Python :: 3.8
25
+ Classifier: Programming Language :: Python :: 3.9
26
+ Classifier: Programming Language :: Python :: 3.10
27
+ Classifier: Programming Language :: Python :: 3.11
28
+ Classifier: Programming Language :: Python :: 3.12
29
+ Requires-Python: >=3.8.0
30
+ Description-Content-Type: text/markdown
31
+ License-File: LICENSE
32
+ License-File: NOTICE
33
+ Requires-Dist: filelock
34
+ Requires-Dist: typing-extensions >=4.8.0
35
+ Requires-Dist: sympy
36
+ Requires-Dist: networkx
37
+ Requires-Dist: jinja2
38
+ Requires-Dist: fsspec
39
+ Requires-Dist: nvidia-cuda-nvrtc-cu12 ==12.1.105 ; platform_system == "Linux" and platform_machine == "x86_64"
40
+ Requires-Dist: nvidia-cuda-runtime-cu12 ==12.1.105 ; platform_system == "Linux" and platform_machine == "x86_64"
41
+ Requires-Dist: nvidia-cuda-cupti-cu12 ==12.1.105 ; platform_system == "Linux" and platform_machine == "x86_64"
42
+ Requires-Dist: nvidia-cudnn-cu12 ==8.9.2.26 ; platform_system == "Linux" and platform_machine == "x86_64"
43
+ Requires-Dist: nvidia-cublas-cu12 ==12.1.3.1 ; platform_system == "Linux" and platform_machine == "x86_64"
44
+ Requires-Dist: nvidia-cufft-cu12 ==11.0.2.54 ; platform_system == "Linux" and platform_machine == "x86_64"
45
+ Requires-Dist: nvidia-curand-cu12 ==10.3.2.106 ; platform_system == "Linux" and platform_machine == "x86_64"
46
+ Requires-Dist: nvidia-cusolver-cu12 ==11.4.5.107 ; platform_system == "Linux" and platform_machine == "x86_64"
47
+ Requires-Dist: nvidia-cusparse-cu12 ==12.1.0.106 ; platform_system == "Linux" and platform_machine == "x86_64"
48
+ Requires-Dist: nvidia-nccl-cu12 ==2.20.5 ; platform_system == "Linux" and platform_machine == "x86_64"
49
+ Requires-Dist: nvidia-nvtx-cu12 ==12.1.105 ; platform_system == "Linux" and platform_machine == "x86_64"
50
+ Requires-Dist: triton ==2.3.1 ; platform_system == "Linux" and platform_machine == "x86_64" and python_version < "3.12"
51
+ Requires-Dist: mkl <=2021.4.0,>=2021.1.1 ; platform_system == "Windows"
52
+ Provides-Extra: opt-einsum
53
+ Requires-Dist: opt-einsum >=3.3 ; extra == 'opt-einsum'
54
+ Provides-Extra: optree
55
+ Requires-Dist: optree >=0.9.1 ; extra == 'optree'
56
+
57
+ ![PyTorch Logo](https://github.com/pytorch/pytorch/blob/main/docs/source/_static/img/pytorch-logo-dark.png)
58
+
59
+ --------------------------------------------------------------------------------
60
+
61
+ PyTorch is a Python package that provides two high-level features:
62
+ - Tensor computation (like NumPy) with strong GPU acceleration
63
+ - Deep neural networks built on a tape-based autograd system
64
+
65
+ You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.
66
+
67
+ Our trunk health (Continuous Integration signals) can be found at [hud.pytorch.org](https://hud.pytorch.org/ci/pytorch/pytorch/main).
68
+
69
+ <!-- toc -->
70
+
71
+ - [More About PyTorch](#more-about-pytorch)
72
+ - [A GPU-Ready Tensor Library](#a-gpu-ready-tensor-library)
73
+ - [Dynamic Neural Networks: Tape-Based Autograd](#dynamic-neural-networks-tape-based-autograd)
74
+ - [Python First](#python-first)
75
+ - [Imperative Experiences](#imperative-experiences)
76
+ - [Fast and Lean](#fast-and-lean)
77
+ - [Extensions Without Pain](#extensions-without-pain)
78
+ - [Installation](#installation)
79
+ - [Binaries](#binaries)
80
+ - [NVIDIA Jetson Platforms](#nvidia-jetson-platforms)
81
+ - [From Source](#from-source)
82
+ - [Prerequisites](#prerequisites)
83
+ - [Install Dependencies](#install-dependencies)
84
+ - [Get the PyTorch Source](#get-the-pytorch-source)
85
+ - [Install PyTorch](#install-pytorch)
86
+ - [Adjust Build Options (Optional)](#adjust-build-options-optional)
87
+ - [Docker Image](#docker-image)
88
+ - [Using pre-built images](#using-pre-built-images)
89
+ - [Building the image yourself](#building-the-image-yourself)
90
+ - [Building the Documentation](#building-the-documentation)
91
+ - [Previous Versions](#previous-versions)
92
+ - [Getting Started](#getting-started)
93
+ - [Resources](#resources)
94
+ - [Communication](#communication)
95
+ - [Releases and Contributing](#releases-and-contributing)
96
+ - [The Team](#the-team)
97
+ - [License](#license)
98
+
99
+ <!-- tocstop -->
100
+
101
+ ## More About PyTorch
102
+
103
+ [Learn the basics of PyTorch](https://pytorch.org/tutorials/beginner/basics/intro.html)
104
+
105
+ At a granular level, PyTorch is a library that consists of the following components:
106
+
107
+ | Component | Description |
108
+ | ---- | --- |
109
+ | [**torch**](https://pytorch.org/docs/stable/torch.html) | A Tensor library like NumPy, with strong GPU support |
110
+ | [**torch.autograd**](https://pytorch.org/docs/stable/autograd.html) | A tape-based automatic differentiation library that supports all differentiable Tensor operations in torch |
111
+ | [**torch.jit**](https://pytorch.org/docs/stable/jit.html) | A compilation stack (TorchScript) to create serializable and optimizable models from PyTorch code |
112
+ | [**torch.nn**](https://pytorch.org/docs/stable/nn.html) | A neural networks library deeply integrated with autograd designed for maximum flexibility |
113
+ | [**torch.multiprocessing**](https://pytorch.org/docs/stable/multiprocessing.html) | Python multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training |
114
+ | [**torch.utils**](https://pytorch.org/docs/stable/data.html) | DataLoader and other utility functions for convenience |
115
+
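+ For instance, the `torch.utils.data` utilities listed above can be combined in a few lines. The snippet below is a minimal sketch with a synthetic dataset (the tensor shapes and batch size are made up for illustration):
+
+ ```python
+ import torch
+ from torch.utils.data import TensorDataset, DataLoader
+
+ # A toy dataset of 100 (feature, label) pairs
+ features = torch.randn(100, 8)
+ labels = torch.randint(0, 2, (100,))
+ dataset = TensorDataset(features, labels)
+
+ # DataLoader handles batching and shuffling
+ loader = DataLoader(dataset, batch_size=16, shuffle=True)
+ batch_features, batch_labels = next(iter(loader))
+ print(batch_features.shape, batch_labels.shape)  # torch.Size([16, 8]) torch.Size([16])
+ ```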
116
+ Usually, PyTorch is used either as:
117
+
118
+ - A replacement for NumPy to use the power of GPUs.
119
+ - A deep learning research platform that provides maximum flexibility and speed.
120
+
121
+ Elaborating Further:
122
+
123
+ ### A GPU-Ready Tensor Library
124
+
125
+ If you use NumPy, then you have used Tensors (a.k.a. ndarray).
126
+
127
+ ![Tensor illustration](./docs/source/_static/img/tensor_illustration.png)
128
+
129
+ PyTorch provides Tensors that can live either on the CPU or the GPU and accelerates the
130
+ computation by a huge amount.
131
+
132
+ We provide a wide variety of tensor routines to accelerate and fit your scientific computation needs,
133
+ such as slicing, indexing, mathematical operations, linear algebra, and reductions.
134
+ And they are fast!
135
+
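+ A minimal sketch of these routines (the GPU lines are guarded by `torch.cuda.is_available()`, so they only run on a CUDA-capable machine):
+
+ ```python
+ import torch
+
+ a = torch.randn(3, 4)            # a CPU tensor, much like a NumPy ndarray
+ b = a[:, :2] * 2 + 1             # slicing and elementwise math
+ c = a @ a.T                      # linear algebra (matrix multiply)
+ print(c.sum(), b.mean())         # reductions
+
+ if torch.cuda.is_available():    # move the same computation to the GPU
+     g = a.cuda()
+     print((g @ g.T).sum())
+ ```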
136
+ ### Dynamic Neural Networks: Tape-Based Autograd
137
+
138
+ PyTorch has a unique way of building neural networks: using and replaying a tape recorder.
139
+
140
+ Most frameworks such as TensorFlow, Theano, Caffe, and CNTK have a static view of the world.
141
+ One has to build a neural network and reuse the same structure again and again.
142
+ Changing the way the network behaves means that one has to start from scratch.
143
+
144
+ With PyTorch, we use a technique called reverse-mode auto-differentiation, which allows you to
145
+ change the way your network behaves arbitrarily with zero lag or overhead. Our inspiration comes
146
+ from several research papers on this topic, as well as current and past work such as
147
+ [torch-autograd](https://github.com/twitter/torch-autograd),
148
+ [autograd](https://github.com/HIPS/autograd),
149
+ [Chainer](https://chainer.org), etc.
150
+
151
+ While this technique is not unique to PyTorch, it's one of the fastest implementations of it to date.
152
+ You get the best of speed and flexibility for your crazy research.
153
+
154
+ ![Dynamic graph](https://github.com/pytorch/pytorch/blob/main/docs/source/_static/img/dynamic_graph.gif)
155
+
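+ A minimal sketch of reverse-mode auto-differentiation at work:
+
+ ```python
+ import torch
+
+ x = torch.randn(3, requires_grad=True)   # track operations on x
+ y = (x ** 2 + 2 * x).sum()               # the graph is built as this line executes
+ y.backward()                             # reverse-mode autodiff ("replay the tape")
+ print(x.grad)                            # dy/dx = 2*x + 2
+ ```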
156
+ ### Python First
157
+
158
+ PyTorch is not a Python binding into a monolithic C++ framework.
159
+ It is built to be deeply integrated into Python.
160
+ You can use it naturally like you would use [NumPy](https://www.numpy.org/) / [SciPy](https://www.scipy.org/) / [scikit-learn](https://scikit-learn.org) etc.
161
+ You can write your new neural network layers in Python itself, using your favorite libraries
162
+ and packages such as [Cython](https://cython.org/) and [Numba](http://numba.pydata.org/).
163
+ Our goal is to not reinvent the wheel where appropriate.
164
+
165
+ ### Imperative Experiences
166
+
167
+ PyTorch is designed to be intuitive, linear in thought, and easy to use.
168
+ When you execute a line of code, it gets executed. There isn't an asynchronous view of the world.
169
+ When you drop into a debugger or receive error messages and stack traces, understanding them is straightforward.
170
+ The stack trace points to exactly where your code was defined.
171
+ We hope you never spend hours debugging your code because of bad stack traces or asynchronous and opaque execution engines.
172
+
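+ For example, a shape mismatch surfaces immediately, at the line that caused it, as an ordinary Python exception (the error text hinted at below is indicative, not verbatim):
+
+ ```python
+ import torch
+
+ a = torch.randn(2, 3)
+ b = torch.randn(4, 5)
+ try:
+     c = a @ b                    # executed eagerly; the error is raised right here
+ except RuntimeError as err:
+     print(err)                   # e.g. a message about incompatible matrix shapes
+ ```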
173
+ ### Fast and Lean
174
+
175
+ PyTorch has minimal framework overhead. We integrate acceleration libraries
176
+ such as [Intel MKL](https://software.intel.com/mkl) and NVIDIA ([cuDNN](https://developer.nvidia.com/cudnn), [NCCL](https://developer.nvidia.com/nccl)) to maximize speed.
177
+ At the core, its CPU and GPU Tensor and neural network backends
178
+ are mature and have been tested for years.
179
+
180
+ Hence, PyTorch is quite fast — whether you run small or large neural networks.
181
+
182
+ The memory usage in PyTorch is extremely efficient compared to Torch or some of the alternatives.
183
+ We've written custom memory allocators for the GPU to make sure that
184
+ your deep learning models are maximally memory efficient.
185
+ This enables you to train bigger deep learning models than before.
186
+
187
+ ### Extensions Without Pain
188
+
189
+ Writing new neural network modules, or interfacing with PyTorch's Tensor API, is designed to be straightforward,
190
+ with minimal abstractions.
191
+
192
+ You can write new neural network layers in Python using the torch API
193
+ [or your favorite NumPy-based libraries such as SciPy](https://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html).
194
+
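+ A minimal sketch of such a layer written directly in Python with the torch API (`ScaledLinear` is a hypothetical example, not an existing module):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class ScaledLinear(nn.Module):
+     """Hypothetical example layer: a linear map followed by a learned per-feature scale."""
+     def __init__(self, in_features, out_features):
+         super().__init__()
+         self.linear = nn.Linear(in_features, out_features)
+         self.scale = nn.Parameter(torch.ones(out_features))
+
+     def forward(self, x):
+         return self.linear(x) * self.scale
+
+ layer = ScaledLinear(8, 4)
+ out = layer(torch.randn(2, 8))   # autograd tracks the custom layer automatically
+ out.sum().backward()
+ ```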
195
+ If you want to write your layers in C/C++, we provide a convenient extension API that is efficient and with minimal boilerplate.
196
+ No wrapper code needs to be written. You can see [a tutorial here](https://pytorch.org/tutorials/advanced/cpp_extension.html) and [an example here](https://github.com/pytorch/extension-cpp).
197
+
198
+
199
+ ## Installation
200
+
201
+ ### Binaries
202
+ Commands to install binaries via Conda or pip wheels are on our website: [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)
203
+
204
+
205
+ #### NVIDIA Jetson Platforms
206
+
207
+ Python wheels for NVIDIA's Jetson Nano, Jetson TX1/TX2, Jetson Xavier NX/AGX, and Jetson AGX Orin are provided [here](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048) and the L4T container is published [here](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch).
208
+
209
+ They require JetPack 4.2 and above, and [@dusty-nv](https://github.com/dusty-nv) and [@ptrblck](https://github.com/ptrblck) are maintaining them.
210
+
211
+
212
+ ### From Source
213
+
214
+ #### Prerequisites
215
+ If you are installing from source, you will need:
216
+ - Python 3.8 or later (for Linux, Python 3.8.1+ is needed)
217
+ - A compiler that fully supports C++17, such as clang or gcc (gcc 9.4.0 or newer is required)
218
+
219
+ We highly recommend installing an [Anaconda](https://www.anaconda.com/download) environment. You will get a high-quality BLAS library (MKL) and you get controlled dependency versions regardless of your Linux distro.
220
+
221
+ If you want to compile with CUDA support, [select a supported version of CUDA from our support matrix](https://pytorch.org/get-started/locally/), then install the following:
222
+ - [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads)
223
+ - [NVIDIA cuDNN](https://developer.nvidia.com/cudnn) v8.5 or above
224
+ - [Compiler](https://gist.github.com/ax3l/9489132) compatible with CUDA
225
+
226
+ Note: You can refer to the [cuDNN Support Matrix](https://docs.nvidia.com/deeplearning/cudnn/reference/support-matrix.html) for the cuDNN versions supported by the various CUDA versions, CUDA drivers, and NVIDIA hardware.
227
+
228
+ If you want to disable CUDA support, export the environment variable `USE_CUDA=0`.
229
+ Other potentially useful environment variables may be found in `setup.py`.
230
+
231
+ If you are building for NVIDIA's Jetson platforms (Jetson Nano, TX1, TX2, AGX Xavier), instructions to install PyTorch for Jetson Nano are [available here](https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/).
232
+
233
+ If you want to compile with ROCm support, install
234
+ - [AMD ROCm](https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html) 4.0 or above
235
+ - ROCm is currently supported only for Linux systems.
236
+
237
+ If you want to disable ROCm support, export the environment variable `USE_ROCM=0`.
238
+ Other potentially useful environment variables may be found in `setup.py`.
239
+
240
+ #### Install Dependencies
241
+
242
+ **Common**
243
+
244
+ ```bash
245
+ conda install cmake ninja
246
+ # Run this command from the PyTorch directory after cloning the source code using the "Get the PyTorch Source" section below
247
+ pip install -r requirements.txt
248
+ ```
249
+
250
+ **On Linux**
251
+
252
+ ```bash
253
+ conda install intel::mkl-static intel::mkl-include
254
+ # CUDA only: Add LAPACK support for the GPU if needed
255
+ conda install -c pytorch magma-cuda110 # or the magma-cuda* that matches your CUDA version from https://anaconda.org/pytorch/repo
256
+
257
+ # (optional) If using torch.compile with inductor/triton, install the matching version of triton
258
+ # Run from the pytorch directory after cloning
259
+ make triton
260
+ ```
261
+
262
+ **On MacOS**
263
+
264
+ ```bash
265
+ # Add this package on intel x86 processor machines only
266
+ conda install intel::mkl-static intel::mkl-include
267
+ # Add these packages if torch.distributed is needed
268
+ conda install pkg-config libuv
269
+ ```
270
+
271
+ **On Windows**
272
+
273
+ ```bash
274
+ conda install intel::mkl-static intel::mkl-include
275
+ # Add these packages if torch.distributed is needed.
276
+ # Distributed package support on Windows is a prototype feature and is subject to changes.
277
+ conda install -c conda-forge libuv=1.39
278
+ ```
279
+
280
+ #### Get the PyTorch Source
281
+ ```bash
282
+ git clone --recursive https://github.com/pytorch/pytorch
283
+ cd pytorch
284
+ # if you are updating an existing checkout
285
+ git submodule sync
286
+ git submodule update --init --recursive
287
+ ```
288
+
289
+ #### Install PyTorch
290
+ **On Linux**
291
+
292
+ If you would like to compile PyTorch with [new C++ ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html) enabled, then first run this command:
293
+ ```bash
294
+ export _GLIBCXX_USE_CXX11_ABI=1
295
+ ```
296
+
297
+ If you're compiling for AMD ROCm then first run this command:
298
+ ```bash
299
+ # Only run this if you're compiling for ROCm
300
+ python tools/amd_build/build_amd.py
301
+ ```
302
+
303
+ Install PyTorch
304
+ ```bash
305
+ export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
306
+ python setup.py develop
307
+ ```
308
+
309
+ > _Aside:_ If you are using [Anaconda](https://www.anaconda.com/distribution/#download-section), you may experience an error caused by the linker:
310
+ >
311
+ > ```plaintext
312
+ > build/temp.linux-x86_64-3.7/torch/csrc/stub.o: file not recognized: file format not recognized
313
+ > collect2: error: ld returned 1 exit status
314
+ > error: command 'g++' failed with exit status 1
315
+ > ```
316
+ >
317
+ > This is caused by `ld` from the Conda environment shadowing the system `ld`. You should use a newer version of Python that fixes this issue. The recommended Python version is 3.8.1+.
318
+
319
+ **On macOS**
320
+
321
+ ```bash
322
+ python3 setup.py develop
323
+ ```
324
+
325
+ **On Windows**
326
+
327
+ Choose the correct Visual Studio version.
328
+
329
+ PyTorch CI uses Visual C++ BuildTools, which come with Visual Studio Enterprise,
330
+ Professional, or Community Editions. You can also install the build tools from
331
+ https://visualstudio.microsoft.com/visual-cpp-build-tools/. The build tools *do not*
332
+ come with Visual Studio Code by default.
333
+
334
+ If you want to build legacy Python code, please refer to [Building on legacy code and CUDA](https://github.com/pytorch/pytorch/blob/main/CONTRIBUTING.md#building-on-legacy-code-and-cuda).
335
+
336
+ **CPU-only builds**
337
+
338
+ In this mode, PyTorch computations will run on your CPU, not your GPU.
339
+
340
+ ```cmd
341
+ conda activate
342
+ python setup.py develop
343
+ ```
344
+
345
+ Note on OpenMP: The desired OpenMP implementation is Intel OpenMP (iomp). In order to link against iomp, you'll need to manually download the library and set up the building environment by tweaking `CMAKE_INCLUDE_PATH` and `LIB`. The instruction [here](https://github.com/pytorch/pytorch/blob/main/docs/source/notes/windows.rst#building-from-source) is an example for setting up both MKL and Intel OpenMP. Without these configurations for CMake, Microsoft Visual C OpenMP runtime (vcomp) will be used.
346
+
347
+ **CUDA based build**
348
+
349
+ In this mode, PyTorch computations will leverage your GPU via CUDA for faster number crunching.
350
+
351
+ [NVTX](https://docs.nvidia.com/gameworks/content/gameworkslibrary/nvtx/nvidia_tools_extension_library_nvtx.htm) is needed to build PyTorch with CUDA.
352
+ NVTX is part of the CUDA distribution, where it is called "Nsight Compute". To add it to an existing CUDA installation, run the CUDA installer again and check the corresponding checkbox.
353
+ Make sure that CUDA with Nsight Compute is installed after Visual Studio.
354
+
355
+ Currently, VS 2017 / 2019 and Ninja are supported as CMake generators. If `ninja.exe` is detected in `PATH`, Ninja will be used as the default generator; otherwise, VS 2017 / 2019 will be used.
356
+ <br/> If Ninja is selected as the generator, the latest MSVC will get selected as the underlying toolchain.
357
+
358
+ Additional libraries such as
359
+ [Magma](https://developer.nvidia.com/magma), [oneDNN, a.k.a. MKLDNN or DNNL](https://github.com/oneapi-src/oneDNN), and [Sccache](https://github.com/mozilla/sccache) are often needed. Please refer to the [installation-helper](https://github.com/pytorch/pytorch/tree/main/.ci/pytorch/win-test-helpers/installation-helpers) to install them.
360
+
361
+ You can refer to the [build_pytorch.bat](https://github.com/pytorch/pytorch/blob/main/.ci/pytorch/win-test-helpers/build_pytorch.bat) script for some other environment variables configurations
362
+
363
+
364
+ ```cmd
365
+ cmd
366
+
367
+ :: Set the environment variables after you have downloaded and unzipped the mkl package,
368
+ :: else CMake would throw an error as `Could NOT find OpenMP`.
369
+ set CMAKE_INCLUDE_PATH={Your directory}\mkl\include
370
+ set LIB={Your directory}\mkl\lib;%LIB%
371
+
372
+ :: Read the content in the previous section carefully before you proceed.
373
+ :: [Optional] If you want to override the underlying toolset used by Ninja and Visual Studio with CUDA, please run the following script block.
374
+ :: "Visual Studio 2019 Developer Command Prompt" will be run automatically.
375
+ :: Make sure you have CMake >= 3.12 before you do this when you use the Visual Studio generator.
376
+ set CMAKE_GENERATOR_TOOLSET_VERSION=14.27
377
+ set DISTUTILS_USE_SDK=1
378
+ for /f "usebackq tokens=*" %i in (`"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -version [15^,17^) -products * -latest -property installationPath`) do call "%i\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=%CMAKE_GENERATOR_TOOLSET_VERSION%
379
+
380
+ :: [Optional] If you want to override the CUDA host compiler
381
+ set CUDAHOSTCXX=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\HostX64\x64\cl.exe
382
+
383
+ python setup.py develop
384
+
385
+ ```
386
+
387
+ ##### Adjust Build Options (Optional)
388
+
389
+ You can optionally adjust the configuration of CMake variables (without building first) by doing
390
+ the following. For example, adjusting the pre-detected directories for cuDNN or BLAS can be done
391
+ with such a step.
392
+
393
+ On Linux
394
+ ```bash
395
+ export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
396
+ python setup.py build --cmake-only
397
+ ccmake build # or cmake-gui build
398
+ ```
399
+
400
+ On macOS
401
+ ```bash
402
+ export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
403
+ MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build --cmake-only
404
+ ccmake build # or cmake-gui build
405
+ ```
406
+
407
+ ### Docker Image
408
+
409
+ #### Using pre-built images
410
+
411
+ You can also pull a pre-built docker image from Docker Hub and run with docker v19.03+
412
+
413
+ ```bash
414
+ docker run --gpus all --rm -ti --ipc=host pytorch/pytorch:latest
415
+ ```
416
+
417
+ Please note that PyTorch uses shared memory to share data between processes, so if torch multiprocessing is used (e.g.
418
+ for multithreaded data loaders) the default shared memory segment size that the container runs with is not enough, and you
419
+ should increase the shared memory size either with the `--ipc=host` or `--shm-size` command line options to `nvidia-docker run`.
420
+
421
+ #### Building the image yourself
422
+
423
+ **NOTE:** Must be built with a docker version > 18.06
424
+
425
+ The `Dockerfile` is supplied to build images with CUDA 11.1 support and cuDNN v8.
426
+ You can pass the `PYTHON_VERSION=x.y` make variable to specify which Python version is to be used by Miniconda, or leave it
427
+ unset to use the default.
428
+
429
+ ```bash
430
+ make -f docker.Makefile
431
+ # images are tagged as docker.io/${your_docker_username}/pytorch
432
+ ```
433
+
434
+ You can also pass the `CMAKE_VARS="..."` environment variable to specify additional CMake variables to be passed to CMake during the build.
435
+ See [setup.py](./setup.py) for the list of available variables.
436
+
437
+ ```bash
438
+ CMAKE_VARS="BUILD_CAFFE2=ON BUILD_CAFFE2_OPS=ON" make -f docker.Makefile
439
+ ```
440
+
441
+ ### Building the Documentation
442
+
443
+ To build documentation in various formats, you will need [Sphinx](http://www.sphinx-doc.org) and the
444
+ readthedocs theme.
445
+
446
+ ```bash
447
+ cd docs/
448
+ pip install -r requirements.txt
449
+ ```
450
+ You can then build the documentation by running `make <format>` from the
451
+ `docs/` folder. Run `make` to get a list of all available output formats.
452
+
453
+ If you get a katex error, run `npm install katex`. If it persists, try
454
+ `npm install -g katex`
455
+
456
+ > Note: if you installed `nodejs` with a different package manager (e.g.,
457
+ `conda`) then `npm` will probably install a version of `katex` that is not
458
+ compatible with your version of `nodejs` and doc builds will fail.
459
+ A combination of versions that is known to work is `node@6.13.1` and
460
+ `katex@0.13.18`. To install the latter with `npm` you can run
461
+ ```npm install -g katex@0.13.18```
462
+
463
+ ### Previous Versions
464
+
465
+ Installation instructions and binaries for previous PyTorch versions may be found
466
+ on [our website](https://pytorch.org/previous-versions).
467
+
468
+
469
+ ## Getting Started
470
+
471
+ Three-pointers to get you started:
472
+ - [Tutorials: get you started with understanding and using PyTorch](https://pytorch.org/tutorials/)
473
+ - [Examples: easy to understand PyTorch code across all domains](https://github.com/pytorch/examples)
474
+ - [The API Reference](https://pytorch.org/docs/)
475
+ - [Glossary](https://github.com/pytorch/pytorch/blob/main/GLOSSARY.md)
476
+
477
+ ## Resources
478
+
479
+ * [PyTorch.org](https://pytorch.org/)
480
+ * [PyTorch Tutorials](https://pytorch.org/tutorials/)
481
+ * [PyTorch Examples](https://github.com/pytorch/examples)
482
+ * [PyTorch Models](https://pytorch.org/hub/)
483
+ * [Intro to Deep Learning with PyTorch from Udacity](https://www.udacity.com/course/deep-learning-pytorch--ud188)
484
+ * [Intro to Machine Learning with PyTorch from Udacity](https://www.udacity.com/course/intro-to-machine-learning-nanodegree--nd229)
485
+ * [Deep Neural Networks with PyTorch from Coursera](https://www.coursera.org/learn/deep-neural-networks-with-pytorch)
486
+ * [PyTorch Twitter](https://twitter.com/PyTorch)
487
+ * [PyTorch Blog](https://pytorch.org/blog/)
488
+ * [PyTorch YouTube](https://www.youtube.com/channel/UCWXI5YeOsh03QvJ59PMaXFw)
489
+
490
+ ## Communication
491
+ * Forums: Discuss implementations, research, etc. https://discuss.pytorch.org
492
+ * GitHub Issues: Bug reports, feature requests, install issues, RFCs, thoughts, etc.
493
+ * Slack: The [PyTorch Slack](https://pytorch.slack.com/) hosts a primary audience of moderate to experienced PyTorch users and developers for general chat, online discussions, collaboration, etc. If you are a beginner looking for help, the primary medium is [PyTorch Forums](https://discuss.pytorch.org). If you need a slack invite, please fill this form: https://goo.gl/forms/PP1AGvNHpSaJP8to1
494
+ * Newsletter: No-noise, a one-way email newsletter with important announcements about PyTorch. You can sign-up here: https://eepurl.com/cbG0rv
495
+ * Facebook Page: Important announcements about PyTorch. https://www.facebook.com/pytorch
496
+ * For brand guidelines, please visit our website at [pytorch.org](https://pytorch.org/)
497
+
498
+ ## Releases and Contributing
499
+
500
+ Typically, PyTorch has three minor releases a year. Please let us know if you encounter a bug by [filing an issue](https://github.com/pytorch/pytorch/issues).
501
+
502
+ We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.
503
+
504
+ If you plan to contribute new features, utility functions, or extensions to the core, please first open an issue and discuss the feature with us.
505
+ Sending a PR without discussion might end up resulting in a rejected PR because we might be taking the core in a different direction than you might be aware of.
506
+
507
+ To learn more about making a contribution to PyTorch, please see our [Contribution page](CONTRIBUTING.md). For more information about PyTorch releases, see the [Release page](RELEASE.md).
508
+
509
+ ## The Team
510
+
511
+ PyTorch is a community-driven project with several skillful engineers and researchers contributing to it.
512
+
513
+ PyTorch is currently maintained by [Soumith Chintala](http://soumith.ch), [Gregory Chanan](https://github.com/gchanan), [Dmytro Dzhulgakov](https://github.com/dzhulgakov), [Edward Yang](https://github.com/ezyang), and [Nikita Shulga](https://github.com/malfet) with major contributions coming from hundreds of talented individuals in various forms and means.
514
+ A non-exhaustive but growing list needs to mention: Trevor Killeen, Sasank Chilamkurthy, Sergey Zagoruyko, Adam Lerer, Francisco Massa, Alykhan Tejani, Luca Antiga, Alban Desmaison, Andreas Koepf, James Bradbury, Zeming Lin, Yuandong Tian, Guillaume Lample, Marat Dukhan, Natalia Gimelshein, Christian Sarofeen, Martin Raison, Edward Yang, Zachary Devito.
515
+
516
+ Note: This project is unrelated to [hughperkins/pytorch](https://github.com/hughperkins/pytorch) with the same name. Hugh is a valuable contributor to the Torch community and has helped with many things Torch and PyTorch.
517
+
518
+ ## License
519
+
520
+ PyTorch has a BSD-style license, as found in the [LICENSE](LICENSE) file.
MLPY/Lib/site-packages/torch-2.3.1.dist-info/NOTICE ADDED
@@ -0,0 +1,456 @@
1
+ =======================================================================
2
+ Software under third_party
3
+ =======================================================================
4
+ Software libraries under third_party are provided as github submodule
5
+ links, and their content is not part of the Caffe2 codebase. Their
6
+ licences can be found under the respective software repositories.
7
+
8
+ =======================================================================
9
+ Earlier BSD License
10
+ =======================================================================
11
+ Early development of Caffe2 in 2015 and early 2016 is licensed under the
12
+ BSD license. The license is attached below:
13
+
14
+ All contributions by Facebook:
15
+ Copyright (c) 2016 Facebook Inc.
16
+
17
+ All contributions by Google:
18
+ Copyright (c) 2015 Google Inc.
19
+ All rights reserved.
20
+
21
+ All contributions by Yangqing Jia:
22
+ Copyright (c) 2015 Yangqing Jia
23
+ All rights reserved.
24
+
25
+ All contributions by Kakao Brain:
26
+ Copyright 2019-2020 Kakao Brain
27
+
28
+ All other contributions:
29
+ Copyright(c) 2015, 2016 the respective contributors
30
+ All rights reserved.
31
+
32
+ Redistribution and use in source and binary forms, with or without
33
+ modification, are permitted provided that the following conditions are met:
34
+
35
+ 1. Redistributions of source code must retain the above copyright notice, this
36
+ list of conditions and the following disclaimer.
37
+ 2. Redistributions in binary form must reproduce the above copyright notice,
38
+ this list of conditions and the following disclaimer in the documentation
39
+ and/or other materials provided with the distribution.
40
+
41
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
42
+ ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
43
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
44
+ DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
45
+ ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
46
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
47
+ LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
48
+ ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
49
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
50
+ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
51
+
52
+
53
+ =======================================================================
54
+ Caffe's BSD License
55
+ =======================================================================
56
+ Some parts of the caffe2 code is derived from the original Caffe code, which is
57
+ created by Yangqing Jia and is now a BSD-licensed open-source project. The Caffe
58
+ license is as follows:
59
+
60
+ COPYRIGHT
61
+
62
+ All contributions by the University of California:
63
+ Copyright (c) 2014, The Regents of the University of California (Regents)
64
+ All rights reserved.
65
+
66
+ All other contributions:
67
+ Copyright (c) 2014, the respective contributors
68
+ All rights reserved.
69
+
70
+ Caffe uses a shared copyright model: each contributor holds copyright over
71
+ their contributions to Caffe. The project versioning records all such
72
+ contribution and copyright details. If a contributor wants to further mark
73
+ their specific copyright on a particular contribution, they should indicate
74
+ their copyright solely in the commit message of the change when it is
75
+ committed.
76
+
77
+ LICENSE
78
+
79
+ Redistribution and use in source and binary forms, with or without
80
+ modification, are permitted provided that the following conditions are met:
81
+
82
+ 1. Redistributions of source code must retain the above copyright notice, this
83
+ list of conditions and the following disclaimer.
84
+ 2. Redistributions in binary form must reproduce the above copyright notice,
85
+ this list of conditions and the following disclaimer in the documentation
86
+ and/or other materials provided with the distribution.
87
+
88
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
89
+ ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
90
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
91
+ DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
92
+ ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
93
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
94
+ LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
95
+ ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
96
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
97
+ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
98
+
99
+ CONTRIBUTION AGREEMENT
100
+
101
+ By contributing to the BVLC/caffe repository through pull-request, comment,
102
+ or otherwise, the contributor releases their content to the
103
+ license and copyright terms herein.
104
+
105
+ =======================================================================
106
+ Caffe2's Apache License
107
+ =======================================================================
108
+
109
+ This repo contains Caffe2 code, which was previously licensed under
110
+ Apache License Version 2.0:
111
+
112
+ Apache License
113
+ Version 2.0, January 2004
114
+ http://www.apache.org/licenses/
115
+
116
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
117
+
118
+ 1. Definitions.
119
+
120
+ "License" shall mean the terms and conditions for use, reproduction,
121
+ and distribution as defined by Sections 1 through 9 of this document.
122
+
123
+ "Licensor" shall mean the copyright owner or entity authorized by
124
+ the copyright owner that is granting the License.
125
+
126
+ "Legal Entity" shall mean the union of the acting entity and all
127
+ other entities that control, are controlled by, or are under common
128
+ control with that entity. For the purposes of this definition,
129
+ "control" means (i) the power, direct or indirect, to cause the
130
+ direction or management of such entity, whether by contract or
131
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
132
+ outstanding shares, or (iii) beneficial ownership of such entity.
133
+
134
+ "You" (or "Your") shall mean an individual or Legal Entity
135
+ exercising permissions granted by this License.
136
+
137
+ "Source" form shall mean the preferred form for making modifications,
138
+ including but not limited to software source code, documentation
139
+ source, and configuration files.
140
+
141
+ "Object" form shall mean any form resulting from mechanical
142
+ transformation or translation of a Source form, including but
143
+ not limited to compiled object code, generated documentation,
144
+ and conversions to other media types.
145
+
146
+ "Work" shall mean the work of authorship, whether in Source or
147
+ Object form, made available under the License, as indicated by a
148
+ copyright notice that is included in or attached to the work
149
+ (an example is provided in the Appendix below).
150
+
151
+ "Derivative Works" shall mean any work, whether in Source or Object
152
+ form, that is based on (or derived from) the Work and for which the
153
+ editorial revisions, annotations, elaborations, or other modifications
154
+ represent, as a whole, an original work of authorship. For the purposes
155
+ of this License, Derivative Works shall not include works that remain
156
+ separable from, or merely link (or bind by name) to the interfaces of,
157
+ the Work and Derivative Works thereof.
158
+
159
+ "Contribution" shall mean any work of authorship, including
160
+ the original version of the Work and any modifications or additions
161
+ to that Work or Derivative Works thereof, that is intentionally
162
+ submitted to Licensor for inclusion in the Work by the copyright owner
163
+ or by an individual or Legal Entity authorized to submit on behalf of
164
+ the copyright owner. For the purposes of this definition, "submitted"
165
+ means any form of electronic, verbal, or written communication sent
166
+ to the Licensor or its representatives, including but not limited to
167
+ communication on electronic mailing lists, source code control systems,
168
+ and issue tracking systems that are managed by, or on behalf of, the
169
+ Licensor for the purpose of discussing and improving the Work, but
170
+ excluding communication that is conspicuously marked or otherwise
171
+ designated in writing by the copyright owner as "Not a Contribution."
172
+
173
+ "Contributor" shall mean Licensor and any individual or Legal Entity
174
+ on behalf of whom a Contribution has been received by Licensor and
175
+ subsequently incorporated within the Work.
176
+
177
+ 2. Grant of Copyright License. Subject to the terms and conditions of
178
+ this License, each Contributor hereby grants to You a perpetual,
179
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
180
+ copyright license to reproduce, prepare Derivative Works of,
181
+ publicly display, publicly perform, sublicense, and distribute the
182
+ Work and such Derivative Works in Source or Object form.
183
+
184
+ 3. Grant of Patent License. Subject to the terms and conditions of
185
+ this License, each Contributor hereby grants to You a perpetual,
186
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
187
+ (except as stated in this section) patent license to make, have made,
188
+ use, offer to sell, sell, import, and otherwise transfer the Work,
189
+ where such license applies only to those patent claims licensable
190
+ by such Contributor that are necessarily infringed by their
191
+ Contribution(s) alone or by combination of their Contribution(s)
192
+ with the Work to which such Contribution(s) was submitted. If You
193
+ institute patent litigation against any entity (including a
194
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
195
+ or a Contribution incorporated within the Work constitutes direct
196
+ or contributory patent infringement, then any patent licenses
197
+ granted to You under this License for that Work shall terminate
198
+ as of the date such litigation is filed.
199
+
200
+ 4. Redistribution. You may reproduce and distribute copies of the
201
+ Work or Derivative Works thereof in any medium, with or without
202
+ modifications, and in Source or Object form, provided that You
203
+ meet the following conditions:
204
+
205
+ (a) You must give any other recipients of the Work or
206
+ Derivative Works a copy of this License; and
207
+
208
+ (b) You must cause any modified files to carry prominent notices
209
+ stating that You changed the files; and
210
+
211
+ (c) You must retain, in the Source form of any Derivative Works
212
+ that You distribute, all copyright, patent, trademark, and
213
+ attribution notices from the Source form of the Work,
214
+ excluding those notices that do not pertain to any part of
215
+ the Derivative Works; and
216
+
217
+ (d) If the Work includes a "NOTICE" text file as part of its
218
+ distribution, then any Derivative Works that You distribute must
219
+ include a readable copy of the attribution notices contained
220
+ within such NOTICE file, excluding those notices that do not
221
+ pertain to any part of the Derivative Works, in at least one
222
+ of the following places: within a NOTICE text file distributed
223
+ as part of the Derivative Works; within the Source form or
224
+ documentation, if provided along with the Derivative Works; or,
225
+ within a display generated by the Derivative Works, if and
226
+ wherever such third-party notices normally appear. The contents
227
+ of the NOTICE file are for informational purposes only and
228
+ do not modify the License. You may add Your own attribution
229
+ notices within Derivative Works that You distribute, alongside
230
+ or as an addendum to the NOTICE text from the Work, provided
231
+ that such additional attribution notices cannot be construed
232
+ as modifying the License.
233
+
234
+ You may add Your own copyright statement to Your modifications and
235
+ may provide additional or different license terms and conditions
236
+ for use, reproduction, or distribution of Your modifications, or
237
+ for any such Derivative Works as a whole, provided Your use,
238
+ reproduction, and distribution of the Work otherwise complies with
239
+ the conditions stated in this License.
240
+
241
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
242
+ any Contribution intentionally submitted for inclusion in the Work
243
+ by You to the Licensor shall be under the terms and conditions of
244
+ this License, without any additional terms or conditions.
245
+ Notwithstanding the above, nothing herein shall supersede or modify
246
+ the terms of any separate license agreement you may have executed
247
+ with Licensor regarding such Contributions.
248
+
249
+ 6. Trademarks. This License does not grant permission to use the trade
250
+ names, trademarks, service marks, or product names of the Licensor,
251
+ except as required for reasonable and customary use in describing the
252
+ origin of the Work and reproducing the content of the NOTICE file.
253
+
254
+ 7. Disclaimer of Warranty. Unless required by applicable law or
255
+ agreed to in writing, Licensor provides the Work (and each
256
+ Contributor provides its Contributions) on an "AS IS" BASIS,
257
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
258
+ implied, including, without limitation, any warranties or conditions
259
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
260
+ PARTICULAR PURPOSE. You are solely responsible for determining the
261
+ appropriateness of using or redistributing the Work and assume any
262
+ risks associated with Your exercise of permissions under this License.
263
+
264
+ 8. Limitation of Liability. In no event and under no legal theory,
265
+ whether in tort (including negligence), contract, or otherwise,
266
+ unless required by applicable law (such as deliberate and grossly
267
+ negligent acts) or agreed to in writing, shall any Contributor be
268
+ liable to You for damages, including any direct, indirect, special,
269
+ incidental, or consequential damages of any character arising as a
270
+ result of this License or out of the use or inability to use the
271
+ Work (including but not limited to damages for loss of goodwill,
272
+ work stoppage, computer failure or malfunction, or any and all
273
+ other commercial damages or losses), even if such Contributor
274
+ has been advised of the possibility of such damages.
275
+
276
+ 9. Accepting Warranty or Additional Liability. While redistributing
277
+ the Work or Derivative Works thereof, You may choose to offer,
278
+ and charge a fee for, acceptance of support, warranty, indemnity,
279
+ or other liability obligations and/or rights consistent with this
280
+ License. However, in accepting such obligations, You may act only
281
+ on Your own behalf and on Your sole responsibility, not on behalf
282
+ of any other Contributor, and only if You agree to indemnify,
283
+ defend, and hold each Contributor harmless for any liability
284
+ incurred by, or claims asserted against, such Contributor by reason
285
+ of your accepting any such warranty or additional liability.
286
+
287
+ =======================================================================
288
+ Cephes's 3-Clause BSD License
289
+ =======================================================================
290
+
291
+ Code derived from implementations in the Cephes Math Library should mention
292
+ its derivation and reference the following license:
293
+
294
+ 3-Clause BSD License for the Cephes Math Library
295
+ Copyright (c) 2018, Steven Moshier
296
+ All rights reserved.
297
+
298
+ Redistribution and use in source and binary forms, with or without
299
+ modification, are permitted provided that the following conditions are met:
300
+
301
+ * Redistributions of source code must retain the above copyright
302
+ notice, this list of conditions and the following disclaimer.
303
+
304
+ * Redistributions in binary form must reproduce the above copyright
305
+ notice, this list of conditions and the following disclaimer in the
306
+ documentation and/or other materials provided with the distribution.
307
+
308
+ * Neither the name of the nor the
309
+ names of its contributors may be used to endorse or promote products
310
+ derived from this software without specific prior written permission.
311
+
312
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
313
+ ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
314
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
315
+ DISCLAIMED. IN NO EVENT SHALL Steven Moshier BE LIABLE FOR ANY
316
+ DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
317
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
318
+ LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
319
+ ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
320
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
321
+ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
322
+
323
+
324
+ =======================================================================
325
+ SciPy's 3-Clause BSD License
326
+ =======================================================================
327
+
328
+ Code derived from implementations in SciPy should mention its derivation
329
+ and reference the following license:
330
+
331
+ Copyright (c) 2001-2002 Enthought, Inc. 2003-2019, SciPy Developers.
332
+ All rights reserved.
333
+
334
+ Redistribution and use in source and binary forms, with or without
335
+ modification, are permitted provided that the following conditions
336
+ are met:
337
+
338
+ 1. Redistributions of source code must retain the above copyright
339
+ notice, this list of conditions and the following disclaimer.
340
+
341
+ 2. Redistributions in binary form must reproduce the above
342
+ copyright notice, this list of conditions and the following
343
+ disclaimer in the documentation and/or other materials provided
344
+ with the distribution.
345
+
346
+ 3. Neither the name of the copyright holder nor the names of its
347
+ contributors may be used to endorse or promote products derived
348
+ from this software without specific prior written permission.
349
+
350
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
351
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
352
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
353
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
354
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
355
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
356
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
357
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
358
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
359
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
360
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
361
+
362
+ =======================================================================
363
+ Boost's 1.0 Software License
364
+ =======================================================================
365
+
366
+ Code derived from implementations in Boost 1.0 should mention its
367
+ derivation and reference the following license:
368
+
369
+ Boost Software License - Version 1.0 - August 17th, 2003
370
+
371
+ Permission is hereby granted, free of charge, to any person or organization
372
+ obtaining a copy of the software and accompanying documentation covered by
373
+ this license (the "Software") to use, reproduce, display, distribute,
374
+ execute, and transmit the Software, and to prepare derivative works of the
375
+ Software, and to permit third-parties to whom the Software is furnished to
376
+ do so, all subject to the following:
377
+
378
+ The copyright notices in the Software and this entire statement, including
379
+ the above license grant, this restriction and the following disclaimer,
380
+ must be included in all copies of the Software, in whole or in part, and
381
+ all derivative works of the Software, unless such copies or derivative
382
+ works are solely in the form of machine-executable object code generated by
383
+ a source language processor.
384
+
385
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
386
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
387
+ FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
388
+ SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
389
+ FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
390
+ ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
391
+ DEALINGS IN THE SOFTWARE.
392
+
393
+ END OF TERMS AND CONDITIONS
394
+
395
+ APPENDIX: How to apply the Apache License to your work.
396
+
397
+ To apply the Apache License to your work, attach the following
398
+ boilerplate notice, with the fields enclosed by brackets "[]"
399
+ replaced with your own identifying information. (Don't include
400
+ the brackets!) The text should be enclosed in the appropriate
401
+ comment syntax for the file format. We also recommend that a
402
+ file or class name and description of purpose be included on the
403
+ same "printed page" as the copyright notice for easier
404
+ identification within third-party archives.
405
+
406
+ Copyright [yyyy] [name of copyright owner]
407
+
408
+ Licensed under the Apache License, Version 2.0 (the "License");
409
+ you may not use this file except in compliance with the License.
410
+ You may obtain a copy of the License at
411
+
412
+ http://www.apache.org/licenses/LICENSE-2.0
413
+
414
+ Unless required by applicable law or agreed to in writing, software
415
+ distributed under the License is distributed on an "AS IS" BASIS,
416
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
417
+ See the License for the specific language governing permissions and
418
+ limitations under the License.
419
+
420
+ =======================================================================
421
+ PILLOW-SIMD Software License
422
+ =======================================================================
423
+
424
+ Code derived from implementations in PILLOW-SIMD should mention its derivation
425
+ and reference the following license:
426
+
427
+ The Python Imaging Library (PIL) is
428
+
429
+ Copyright © 1997-2011 by Secret Labs AB
430
+ Copyright © 1995-2011 by Fredrik Lundh
431
+
432
+ Pillow is the friendly PIL fork. It is
433
+
434
+ Copyright © 2010-2022 by Alex Clark and contributors
435
+
436
+ Like PIL, Pillow is licensed under the open source HPND License:
437
+
438
+ By obtaining, using, and/or copying this software and/or its associated
439
+ documentation, you agree that you have read, understood, and will comply
440
+ with the following terms and conditions:
441
+
442
+ Permission to use, copy, modify, and distribute this software and its
443
+ associated documentation for any purpose and without fee is hereby granted,
444
+ provided that the above copyright notice appears in all copies, and that
445
+ both that copyright notice and this permission notice appear in supporting
446
+ documentation, and that the name of Secret Labs AB or the author not be
447
+ used in advertising or publicity pertaining to distribution of the software
448
+ without specific, written prior permission.
449
+
450
+ SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS
451
+ SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS.
452
+ IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR BE LIABLE FOR ANY SPECIAL,
453
+ INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
454
+ LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
455
+ OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
456
+ PERFORMANCE OF THIS SOFTWARE.
MLPY/Lib/site-packages/torch-2.3.1.dist-info/RECORD ADDED
The diff for this file is too large to render. See raw diff
 
MLPY/Lib/site-packages/torch-2.3.1.dist-info/REQUESTED ADDED
File without changes
MLPY/Lib/site-packages/torch-2.3.1.dist-info/WHEEL ADDED
@@ -0,0 +1,5 @@
1
+ Wheel-Version: 1.0
2
+ Generator: bdist_wheel (0.43.0)
3
+ Root-Is-Purelib: false
4
+ Tag: cp39-cp39-win_amd64
5
+
MLPY/Lib/site-packages/torch-2.3.1.dist-info/entry_points.txt ADDED
@@ -0,0 +1,7 @@
1
+ [console_scripts]
2
+ convert-caffe2-to-onnx = caffe2.python.onnx.bin.conversion:caffe2_to_onnx
3
+ convert-onnx-to-caffe2 = caffe2.python.onnx.bin.conversion:onnx_to_caffe2
4
+ torchrun = torch.distributed.run:main
5
+
6
+ [torchrun.logs_specs]
7
+ default = torch.distributed.elastic.multiprocessing:DefaultLogsSpecs
MLPY/Lib/site-packages/torch-2.3.1.dist-info/top_level.txt ADDED
@@ -0,0 +1,3 @@
1
+ functorch
2
+ torch
3
+ torchgen