"cal_data = {\n",
" \"input_data\": [torch.cat((x, torch.rand(10, *[3, 8, 8]))).flatten().tolist()],\n",
"}\n",
"\n",
"cal_path = os.path.join('val_data.json')\n",
"
"with open(cal_path, \"w\") as f:\n",
" json.dump(cal_data, f)\n",
"\n",
"res = ezkl.calibrate_settings(cal_path, model_path, settings_path, \"resources\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"assert res == True"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As we use Halo2 with KZG-commitments we need an SRS string from (preferably) a multi-party trusted setup ceremony. For an overview of the procedures for such a ceremony check out [this page](https:
"\n",
"These SRS were generated with [this](https:
]
},
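The structure `get_srs` fetches can be illustrated with toy modular arithmetic. This is a minimal sketch of a "powers of tau" SRS, not a real elliptic-curve setup; every parameter below (`p`, `g`, `tau`, `n`) is invented purely for illustration:

```python
# Toy "powers of tau" SRS: plain modular exponentiation stands in for
# elliptic-curve group elements. NOT a real SRS; illustration only.
p = 2**61 - 1      # stand-in prime modulus (toy parameter)
g = 3              # stand-in generator (toy parameter)
tau = 123456789    # the "toxic waste": a real ceremony ensures no one knows it
n = 8

# the SRS is the list g^(tau^i) for i = 0..n-1; tau itself is discarded
srs = [pow(g, pow(tau, i, p - 1), p) for i in range(n)]
del tau            # provers and verifiers only ever see `srs`, never tau

print(len(srs), srs[0])  # prints: 8 3
```

The security of the real ceremony rests entirely on at least one participant destroying their share of `tau`.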
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.get_srs( settings_path)\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Here we setup verifying and proving keys for the circuit. As the name suggests the proving key is needed for ... proving and the verifying key is needed for ... verifying. "
]
},
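The prover/verifier asymmetry can be illustrated with a toy Schnorr-style proof of knowledge. This is not Halo2 or KZG, and the parameters (`p`, `g`) are chosen only for the sketch; it shows that the secret `x` plays the "proving key" role while the public values alone suffice to verify:

```python
import hashlib
import secrets

# Toy Schnorr-style proof of knowledge of x such that y = g^x mod p.
# Illustrative only: NOT Halo2/KZG, and Z_p* with these parameters is
# not a production-grade group choice.
p = 2**521 - 1          # a Mersenne prime (toy parameter)
g = 5

x = secrets.randbelow(p - 1)   # secret: plays the "proving key" role
y = pow(g, x, p)               # public: part of the "verifying key"

def prove(x):
    r = secrets.randbelow(p - 1)
    t = pow(g, r, p)                          # commitment
    c = int.from_bytes(hashlib.sha256(f"{g}:{y}:{t}".encode()).digest(), "big")
    s = (r + c * x) % (p - 1)                 # response
    return t, s

def verify(t, s):
    # the verifier never sees x; it only needs (p, g, y)
    c = int.from_bytes(hashlib.sha256(f"{g}:{y}:{t}".encode()).digest(), "big")
    return pow(g, s, p) == (t * pow(y, c, p)) % p

t, s = prove(x)
assert verify(t, s)
assert not verify(t, (s + 1) % (p - 1))   # tampered proofs fail
```

The hash turns the interactive challenge into a non-interactive one (Fiat–Shamir), the same trick that lets SNARK proofs be verified offline or on chain.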
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
" |
"
"
"
"res = ezkl.setup(\n",
" compiled_model_path,\n",
" vk_path,\n",
" pk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"assert os.path.isfile(vk_path)\n",
"assert os.path.isfile(pk_path)\n",
"assert os.path.isfile(settings_path)\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now need to generate the (partial) circuit witness. These are the model outputs (and any hashes) that are generated when feeding the previously generated `input.json` through the circuit / model. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!export RUST_BACKTRACE=1\n",
"\n",
"witness_path = \"witness.json\"\n",
"\n",
"res = ezkl.gen_witness(data_path, compiled_model_path, witness_path, vk_path)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sanity check you can \"mock prove\" (i.e check that all the constraints of the circuit match without generate a full proof). "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"res = ezkl.mock(witness_path, compiled_model_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we generate a full proof. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"proof_path = os.path.join('test.pf')\n",
"\n",
"res = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \n",
" \"single\",\n",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we need to swap out the public commitments inside the corresponding proof bytes"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.swap_proof_commitments(proof_path, witness_path)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"And verify it as a sanity check. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"res \n",
"\n",
"\n",
"res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
" vk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"print(\"verified\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now create an EVM / `.sol` verifier that can be deployed on chain to verify submitted proofs using a view function."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"abi_path = 'test.abi'\n",
"sol_code_path = 'test.sol'\n",
"\n",
"res = ezkl.create_evm_verifier(\n",
" vk_path,\n",
" \n",
" settings_path,\n",
" sol_code_path,\n",
" abi_path,\n",
" )\n",
"assert res == True\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"
"
" |
import json\n",
"\n",
"address_path = os.path.join(\"address.json\")\n",
"\n",
"res = ezkl.deploy_evm(\n",
" address_path,\n",
" sol_code_path,\n",
" 'http:
")\n",
"\n",
"assert res == True\n",
"\n",
"with open(address_path, 'r') as file:\n",
" addr = file.read().rstrip()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"
"\n",
"res = ezkl.verify_evm(\n",
" addr,\n",
" proof_path,\n",
" \"http:
")\n",
"assert res == True"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "ezkl",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.13"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"attachments": {
"image-3.png": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAywAAAV+CAYAAACUGLpbAAAMPWlDQ1BJQ0MgUHJvZmlsZQAASImVVwdYU8kWnltSIbQAAlJCb4JIDSAlhBZAercRkgChhBgIKnZ0UcG1iwVs6KqIYqfZETuLYsO+WFBR1sWCXXmTArruK9873zf3/vefM/85c+7cMgCon+CKxbmoBgB5okJJbEgAIzkllUF6ChCgA8hgONDj8grErOjoCABt8Px3e3cDekO76iDT+mf/fzVNvqCABwASDXE6v4CXB/FBAPAqnlhSCABRxptPLhTLMGxAWwIThHiBDGcqcJUMpyvwXrlPfCwb4lYAyKpcriQTALXLkGcU8TKhhlofxE4ivlAEgDoDYt+8vHw+xGkQ20AfMcQyfWb6DzqZf9NMH9LkcjOHsGIuciMHCgvEudyp/2c5/rfl5UoHY1jBppolCY2VzRnW7WZOfrgMq0LcK0qPjIJYC+IPQr7cH2KUmiUNTVD4o4a8AjasGdCF2InPDQyH2BDiYFFuZISST88QBnMghisEnSIs5MRDrAfxAkFBUJzSZ5MkP1YZC63PkLBZSv4cVyKPK4t1X5qTwFLqv84ScJT6mFpxVnwSxFSILYqEiZEQq0HsWJATF670GV2cxY4c9JFIY2X5W0AcKxCFBCj0saIMSXCs0r8sr2BwvtimLCEnUon3F2bFhyrqg7XyuPL84VywywIRK2FQR1CQHDE4F74gMEgxd+yZQJQQp9T5IC4MiFWMxani3GilP24myA2R8WYQuxYUxSnH4omFcEEq9PEMcWF0vCJPvDibGxatyAdfCiIAGwQCBpDClg7yQTYQtvc29MIrRU8w4AIJyAQC4KBkBkckyXtE8BgHisGfEAlAwdC4AHmvABRB/usQqzg6gAx5b5F8RA54AnEeCAe58FoqHyUaipYIHkNG+I/oXNh4MN9c2GT9/54fZL8zLMhEKBnpYESG+qAnMYgYSAwlBhNtcQPcF/fGI+DRHzZnnIl7Ds7juz/hCaGD8JBwndBFuDVRWCL5KcsxoAvqBytrkf5jLXArqOmGB+A+UB0q47q4AXDAXWEcFu4HI7tBlq3MW1YVxk/af5vBD3dD6UdxoqCUYRR/is3PI9Xs1NyGVGS1/rE+ilzTh+rNHur5OT77h+rz4Tn8Z09sAXYAO4udxM5jR7AGwMCOY41YG3ZUhodW12P56hqMFivPJwfqCP8Rb/DOyipZ4FTr1OP0RdFXKJgie0cDdr54qkSYmVXIYMEvgoDBEfEcRzCcnZxdAJB9XxSvrzcx8u8Gotv2nZv7BwA+xwcGBg5/58KOA7DPAz7+Td85Gyb8dKgAcK6JJ5UUKThcdiDAt4Q6fNL0gTEwBzZwPs7AHXgDfxAEwkAUiAcpYALMPguucwmYDKaDOaAUlIOlYBVYBzaCLWAH2A32gwZwBJwEZ8BFcBlcB3fg6ukGL0AfeAc+IwhCQmgIHdFHTBBLxB5xRpiILxKERCCxSAqShmQiIkSKTEfmIuXIcmQdshmpQfYhTchJ5DzSgdxCHiA9yGvkE4qhqqg2aoRaoSNRJspCw9F4dDyaiU5Ci9F56GJ0DVqN7kLr0ZPoRfQ62oW+QPsxgKlgupgp5oAxMTYWhaViGZgEm4mVYRVYNVaHNcP7fBXrwnqxjzgRp+MM3AGu4FA8Aefhk/CZ+CJ8Hb4Dr8db8av4A7wP/0agEQwJ9gQvAoeQTMgkTCaUEioI2wiHCKfhs9RNeEckEnWJ1kQP+CymELOJ04iLiOuJe4gniB3ER8R+EomkT7In+ZCiSFxSIamUtJa0i3ScdIXUTfpAViGbkJ3JweRUsohcQq4g7yQfI18hPyV/pmhQLClelCgKnzKVsoSyldJMuUTppnymalKtqT7UeGo2dQ51DbWOepp6l/pGRUXFTMVTJ |
UZFqDJbZY3KXpVzKg9UPqpqqdqpslXHqUpVF6tuVz2hekv1DY1Gs6L501JphbTFtBraKdp92gc1upqjGkeNrzZLrVKtXu2K2kt1irqlOkt9gnqxeoX6AfVL6r0aFA0rDbYGV2OmRqVGk0anRr8mXXOUZpRmnuYizZ2a5zWfaZG0rLSCtPha87S2aJ3SekTH6OZ0Np1Hn0vfSj9N79Ymaltrc7Sztcu1d2u3a/fpaOm46iTqTNGp1Dmq06WL6VrpcnRzdZfo7te9oftpmNEw1jDBsIXD6oZdGfZeb7iev55Ar0xvj951vU/6DP0g/Rz9ZfoN+vcMcAM7gxiDyQYbDE4b9A7XHu49nDe8bPj+4bcNUUM7w1jDaYZbDNsM+42MjUKMxEZrjU4Z9RrrGvsbZxuvND5m3GNCN/E1EZqsNDlu8pyhw2AxchlrGK2MPlND01BTqelm03bTz2bWZglmJWZ7zO6ZU82Z5hnmK81bzPssTCzGWEy3qLW4bUmxZFpmWa62PGv53sraKslqvlWD1TNrPWuOdbF1rfVdG5qNn80km2qba7ZEW6Ztju1628t2qJ2bXZZdpd0le9Te3V5ov96+YwRhhOcI0YjqEZ0Oqg4shyKHWocHjrqOEY4ljg2OL0dajEwduWzk2ZHfnNyccp22Ot0ZpTUqbFTJqOZRr53tnHnOlc7XXGguwS6zXBpdXrnauwpcN7jedKO7jXGb79bi9tXdw13iXufe42HhkeZR5dHJ1GZGMxcxz3kSPAM8Z3ke8fzo5e5V6LXf6y9vB+8c753ez0ZbjxaM3jr6kY+ZD9dns0+XL8M3zXeTb5efqR/Xr9rvob+5P99/m/9Tli0rm7WL9TLAKUAScCjgPduLPYN9IhALDAksC2wP0gpKCFoXdD/YLDgzuDa4L8QtZFrIiVBCaHjostBOjhGHx6nh9IV5hM0Iaw1XDY8LXxf+MMIuQhLRPAYdEzZmxZi7kZaRosiGKBDFiVoRdS/aOnpS9OEYYkx0TGXMk9hRsdNjz8bR4ybG7Yx7Fx8QvyT+ToJNgjShJVE9cVxiTeL7pMCk5UldySOTZyRfTDFIEaY0ppJSE1O3pfaPDRq7amz3OLdxpeNujLceP2X8+QkGE3InHJ2oPpE78UAaIS0pbWfaF24Ut5rbn85Jr0rv47F5q3kv+P78lfwegY9gueBphk/G8oxnmT6ZKzJ7svyyKrJ6hWzhOuGr7NDsjdnvc6JytucM5Cbl7skj56XlNYm0RDmi1nzj/Cn5HWJ7cam4a5LXpFWT+iThkm0FSMH4gsZCbfgj3ya1kf4ifVDkW1RZ9GFy4uQDUzSniKa0TbWbunDq0+Lg4t+m4dN401qmm06fM/3BDNaMzTORmekzW2aZz5o3q3t2yOwdc6hzcub8XuJUsrzk7dykuc3zjObNnvfol5BfakvVSiWlnfO9529cgC8QLmhf6LJw7cJvZfyyC+VO5RXlXxbxFl34ddSva34dWJyxuH2J+5INS4lLRUtvLPNbtmO55vLi5Y9WjFlRv5Kxsmzl21UTV52vcK3YuJq6Wrq6a03Emsa1FmuXrv2yLmvd9cqAyj1VhlULq96v56+/ssF/Q91Go43lGz9tEm66uTlkc321VXXFFuKWoi1PtiZuPfsb87eabQbbyrd93S7a3rUjdkdrjUdNzU7DnUtq0Vppbc+ucbsu7w7c3VjnULd5j+6e8r1gr3Tv831p+27sD9/fcoB5oO6g5cGqQ/RDZfVI/dT6voashq7GlMaOprCmlmbv5kOHHQ9vP2J6pPKoztElx6jH5h0bOF58vP+E+ETvycyTj1omttw5lXzqWmtMa/vp8NPnzgSfOXWWdfb4OZ9zR857nW+6wLzQcNH9Yn2bW9uh391+P9Tu3l5/yeNS42XPy80dozuOXfG7cvJq4NUz1zjXLl6PvN5xI+HGzc5xnV03+Tef3cq99ep20e3Pd2bfJdwtu6dxr+K+4f3qP2z/2NPl3nX0QeCDt
odxD+884j168bjg8ZfueU9oTyqemjyteeb87EhPcM/l52Ofd |
78Qv/jcW/qn5p9VL21eHvzL/6+2vuS+7leSVwOvF73Rf7P9revblv7o/vvv8t59fl/2Qf/Djo/Mj2c/JX16+nnyF9KXNV9tvzZ/C/92dyBvYEDMlXDlvwIYbGhGBgCvtwNASwGADvdn1LGK/Z/cEMWeVY7Af8KKPaLc3AGog
},
"image.png": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAUAAAAEbCAYAAACr2V2eAAABYmlDQ1BJQ0MgUHJvZmlsZQAAKJF1kDFLw1AUhU9stSAVHRwEHQKKUy01rdi1LSKCQxoVqlvyWlMlTR9JRNTFQRengi5uUhd/gS4OjoKDguAgIoKDP0DsoiXeNGpbxft43I/DvYfDBTrCKudGEEDJdCxlOi3mFpfE0Au60ENPQEBlNk/J8iyN4Lu3V+2O5qhuxzyvq5px+bw3PJi1N6Nscmv173xbdecLNqP+QT/BuOUAQoxYXne4x9vE/RaFIj7wWPf5xGPN5/PGzLySIb4h7mNFNU/8RBzRWnS9hUvGGvvK4KUPF8yFOeoD9IeQRgEmshAxhRzimEAM41D+2Uk0djIog2MDFlagowiHtlOkcBjkJmKGHBmiiBBL5Cch7t369w2bWrkKJN+AQKWpaYfA2S7FvG9qI0dA7w5wes1VS/25rFAL2stxyedwGuh8dN3XUSC0D9Qrrvtedd36Mfk/ABfmJ+uTZFvl1hD0AAAAVmVYSWZNTQAqAAAACAABh2kABAAAAAEAAAAaAAAAAAADkoYABwAAABIAAABEoAIABAAAAAEAAAFAoAMABAAAAAEAAAEbAAAAAEFTQ0lJAAAAU2NyZWVuc2hvdP5iyG4AAAHWaVRYdFhNTDpjb20uYWRvYmUueG1wAAAAAAA8eDp4bXBtZXRhIHhtbG5zOng9ImFkb2JlOm5zOm1ldGEvIiB4OnhtcHRrPSJYTVAgQ29yZSA2LjAuMCI+CiAgIDxyZGY6UkRGIHhtbG5zOnJkZj0iaHR0cDovL3d3dy53My5vcmcvMTk5OS8wMi8yMi1yZGYtc3ludGF4LW5zIyI+CiAgICAgIDxyZGY6RGVzY3JpcHRpb24gcmRmOmFib3V0PSIiCiAgICAgICAgICAgIHhtbG5zOmV4aWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20vZXhpZi8xLjAvIj4KICAgICAgICAgPGV4aWY6UGl4ZWxZRGltZW5zaW9uPjI4MzwvZXhpZjpQaXhlbFlEaW1lbnNpb24+CiAgICAgICAgIDxleGlmOlBpeGVsWERpbWVuc2lvbj4zMjA8L2V4aWY6UGl4ZWxYRGltZW5zaW9uPgogICAgICAgICA8ZXhpZjpVc2VyQ29tbWVudD5TY3JlZW5zaG90PC9leGlmOlVzZXJDb21tZW50PgogICAgICA8L3JkZjpEZXNjcmlwdGlvbj4KICAgPC9yZGY6UkRGPgo8L3g6eG1wbWV0YT4Kk67BXQAAJDZJREFUeAHtnQuwVVUZxxfKS+QlIKJg8pSXKJQ8VbxqqKBjJmlaUxra5KTWZPhoSoXR1DR10lIZHUsrRZMJy8hSlLhAoJiIFSqKegUUBHkICEgZ/2X7uO/hnHP3Pufsc/ZZ67dmzr377L3W2uv7ffv+73rvZqNGjfrYECAAAQh4SGAvD23GZAhAAAKWAALIgwABCHhLAAH01vUYDgEIIIA8AxCAgLcEEEBvXY/hEIAAAsgzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLAAH01vUYDgEIIIA8AxCAgLcEEEBvXY/hEIAAAsgzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLAAH01vU |
YDgEIIIA8AxCAgLcEEEBvXY/hEIAAAsgzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLAAH01vUYDgEIIIA8AxCAgLcEEEBvXY/hEIAAAsgzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLAAH01vUYDgEIIIA8AxCAgLcEEEBvXY/hEIAAAsgzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLoLm3lmN4LAIffvih0ee
}
},
"cell_type": "markdown",
"id": "cf69bb3f-94e6-4dba-92cd-ce08df117d67",
"metadata": {},
"source": [
"
"\n",
"\n",
"LightBGM based models are slightly finicky to get into a suitable onnx format. By default most tree based models will export into something that looks like this: \n",
"\n",
"\n",
"\n",
"\n",
"\n",
"Processing such nodes can be difficult and error prone. It would be much better if the operations of the tree were represented as a proper graph, possibly ... like this: \n",
"\n",
"\n",
"\n",
"\n",
"\n",
"This notebook showcases how to do that using the `hummingbird` python package ! "
]
},
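The idea hummingbird exploits — re-expressing tree traversal as dense tensor operations — can be sketched for a depth-1 tree (a stump). The names and functions below are hypothetical, not hummingbird's API; the real compilation handles full trees and ensembles:

```python
# A decision stump evaluated two ways: as an if/else, and as dense ops
# (dot product + comparison + table lookup), the shape of the "GEMM"
# tree-compilation strategy. Minimal hypothetical sketch.

def stump_if_else(x, feature, threshold, left_val, right_val):
    # classic tree traversal
    return left_val if x[feature] <= threshold else right_val

def stump_dense(x, selector, threshold, leaf_vals):
    # `selector` one-hot picks the split feature; the boolean comparison
    # becomes an index into the leaf-value table
    s = sum(xi * ai for xi, ai in zip(x, selector))
    return leaf_vals[1 if s > threshold else 0]

x = [0.3, 2.5]
a = stump_if_else(x, feature=1, threshold=1.0, left_val=-1.0, right_val=1.0)
b = stump_dense(x, selector=[0.0, 1.0], threshold=1.0, leaf_vals=[-1.0, 1.0])
assert a == b == 1.0
```

Once every split looks like a matmul plus a comparison, the whole model is an ordinary tensor graph that ONNX (and hence ezkl) can ingest.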
{
"cell_type": "code",
"execution_count": null,
"id": "a60b90d6",
"metadata": {},
"outputs": [],
"source": [
"!python -m pip install hummingbird_ml"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "95613ee9",
"metadata": {},
"outputs": [],
"source": [
"
"try:\n",
"
" |
import google.colab\n",
" |
import subprocess\n",
" |
import sys\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"ezkl\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"onnx\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"hummingbird-ml\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"lightgbm\"])\n",
"\n",
"
"except:\n",
" pass\n",
"\n",
"\n",
"
"\n",
"
" |
import json\n",
" |
import numpy as np\n",
"from sklearn.datasets |
import load_iris\n",
"from sklearn.model_selection |
import train_test_split\n",
"from lightgbm |
import LGBMClassifier as Gbc\n",
" |
import torch\n",
" |
import ezkl\n",
" |
import os\n",
"from torch |
import nn\n",
"from hummingbird.ml |
import convert\n",
"\n",
"NUM_CLASSES = 3\n",
"\n",
"iris = load_iris()\n",
"X, y = iris.data, iris.target\n",
"X = X.astype(np.float32)\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y)\n",
"clr = Gbc(n_estimators=12)\n",
"clr.fit(X_train, y_train)\n",
"\n",
"
"\n",
"\n",
"torch_gbt = convert(clr, 'torch', X_test[:1])\n",
"\n",
"print(torch_gbt)\n",
"
"diffs = []\n",
"\n",
"for i in range(len(X_test)):\n",
" torch_pred = torch_gbt.predict(torch.tensor(X_test[i].reshape(1, -1)))\n",
" sk_pred = clr.predict(X_test[i].reshape(1, -1))\n",
" diffs.append(torch_pred != sk_pred[0])\n",
"\n",
"print(\"num diff: \", sum(diffs))\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b37637c4",
"metadata": {},
"outputs": [],
"source": [
"model_path = os.path.join('network.onnx')\n",
"compiled_model_path = os.path.join('network.compiled')\n",
"pk_path = os.path.join('test.pk')\n",
"vk_path = os.path.join('test.vk')\n",
"settings_path = os.path.join('settings.json')\n",
"\n",
"witness_path = os.path.join('witness.json')\n",
"data_path = os.path.join('input.json')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "82db373a",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"\n",
" |
"\n",
"\n",
"
"shape = X_train.shape[1:]\n",
"x = torch.rand(1, *shape, requires_grad=False)\n",
"torch_out = torch_gbt.predict(x)\n",
"
"torch.onnx.export(torch_gbt.model,
"
" x,\n",
"
" \"network.onnx\",\n",
" export_params=True,
" opset_version=18,
" input_names=['input'],
" output_names=['output'],
" dynamic_axes={'input': {0: 'batch_size'},
" 'output': {0: 'batch_size'}})\n",
"\n",
"d = ((x).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_shapes=[shape],\n",
" input_data=[d],\n",
" output_data=[(o).reshape([-1]).tolist() for o in torch_out])\n",
"\n",
"
"json.dump(data, open(\"input.json\", 'w'))\n"
]
},
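For reference, the file written by the cell above has this shape: flattened, row-major lists of floats. The field names follow this notebook's cells; the exact schema may vary between ezkl versions:

```python
import json

# Minimal sketch of the input file ezkl consumes: each sample is a
# flattened (row-major) list of floats. Values here are placeholders.
shape = [4]
d = [0.1, 0.2, 0.3, 0.4]          # one flattened input sample
data = dict(input_shapes=[shape],
            input_data=[d],
            output_data=[[1.0]])  # placeholder model output

serialized = json.dumps(data)
roundtrip = json.loads(serialized)
assert roundtrip["input_data"][0] == d
print(sorted(roundtrip.keys()))   # prints: ['input_data', 'input_shapes', 'output_data']
```

Keeping inputs as flat lists (rather than nested arrays) is what lets the same file format serve tensors of any rank alongside the declared `input_shapes`.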
{
"cell_type": "code",
"execution_count": null,
"id": "d5e374a2",
"metadata": {},
"outputs": [],
"source": [
"run_args = ezkl.PyRunArgs()\n",
"run_args.variables = [(\"batch_size\", 1)]\n",
"\n",
"
"res = ezkl.gen_settings(model_path, settings_path, py_run_args=run_args)\n",
"assert res == True\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cal_path = os.path.join(\"calibratio |
n.json\")\n",
"\n",
"data_array = (torch.randn(20, *shape).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_data = [data_array])\n",
"\n",
"
"json.dump(data, open(cal_path, 'w'))\n",
"\n",
"\n",
"res = ezkl.calibrate_settings(data_path, model_path, settings_path, \"resources\")\n",
"assert res == True\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3aa4f090",
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b74dcee",
"metadata": {},
"outputs": [],
"source": [
"
"res = ezkl.get_srs( settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18c8b7c7",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.gen_witness(data_path, compiled_model_path, witness_path)\n",
"assert os.path.isfile(witness_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b1c561a8",
"metadata": {},
"outputs": [],
"source": [
"\n",
"
"
"
"
"\n",
"\n",
"\n",
"res = ezkl.setup(\n",
" compiled_model_path,\n",
" vk_path,\n", |
" pk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"assert os.path.isfile(vk_path)\n",
"assert os.path.isfile(pk_path)\n",
"assert os.path.isfile(settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c384cbc8",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"\n",
"proof_path = os.path.join('test.pf')\n",
"\n",
"res = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \n",
" \"single\",\n",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "76f00d41",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
" vk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"print(\"verified\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
}
},
"nbformat": 4,
"nbformat_minor": 5
} |
{
"cells": [
{
"cell_type": "markdown",
"id": "cf69bb3f-94e6-4dba-92cd-ce08df117d67",
"metadata": {},
"source": [
"
"\n",
"\n",
"Sklearn based models are slightly finicky to get into a suitable onnx format. \n",
"This notebook showcases how to do so using the `hummingbird-ml` python package ! "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "95613ee9",
"metadata": {},
"outputs": [],
"source": [
"
"try:\n",
"
" |
import google.colab\n",
" |
import subprocess\n",
" |
import sys\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"ezkl\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"onnx\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"hummingbird-ml\"])\n",
"\n",
"
"except:\n",
" pass\n",
"\n",
" |
import os\n",
" |
import torch\n",
" |
import ezkl\n",
" |
import json\n",
"from hummingbird.ml |
import convert\n",
"\n",
"\n",
"
"\n",
"
" |
import numpy as np\n",
"from sklearn.linear_model |
import LinearRegression\n",
"X = np.array([[1, 1], [1, 2], [2, 2], [2, 3]])\n",
"
"y = np.dot(X, np.array([1, 2])) + 3\n",
"reg = LinearRegression().fit(X, y)\n",
"reg.score(X, y)\n",
"\n",
"circuit = convert(reg, \"torch\", X[:1]).model\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b37637c4",
"metadata": {},
"outputs": [],
"source": [
"model_path = os.path.join('network.onnx')\n",
"compiled_model_path = os.path.join('network.compiled')\n",
"pk_path = os.path.join('test.pk')\n",
"vk_path = os.path.join('test.vk')\n",
"settings_path = os.path.join('settings.json')\n",
"\n",
"witness_path = os.path.join('witness.json')\n",
"data_path = os.path.join('input.json')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "82db373a",
"metadata": {},
"outputs": [],
"source": [
"\n",
"\n",
"
"
"\n",
"
"shape = X.shape[1:]\n",
"x = torch.rand(1, *shape, requires_grad=True)\n",
"torch_out = circuit(x)\n",
"
"torch.onnx.export(circuit,
"
" x,\n",
"
" \"network.onnx\",\n",
" export_params=True,
" opset_version=10,
" do_constant_folding=True,
" |
input_names=['input'],
" output_names=['output'],
" dynamic_axes={'input': {0: 'batch_size'},
" 'output': {0: 'batch_size'}})\n",
"\n",
"d = ((x).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_shapes=[shape],\n",
" input_data=[d],\n",
" output_data=[((o).detach().numpy()).reshape([-1]).tolist() for o in torch_out])\n",
"\n",
"
"json.dump(data, open(\"input.json\", 'w'))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5e374a2",
"metadata": {},
"outputs": [],
"source": [
"!RUST_LOG=trace\n",
"
"res = ezkl.gen_settings(model_path, settings_path)\n",
"assert res == True\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cal_path = os.path.join(\"calibration.json\")\n",
"\n",
"data_array = (torch.randn(20, *shape).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_data = [data_array])\n",
"\n",
"
"json.dump(data, open(cal_path, 'w'))\n",
"\n",
"res = ezkl.calibrate_settings(data_path, model_path, settings_path, \"resources\")\n",
"assert res == True\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3aa4f090",
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.compile_circuit(mo |
del_path, compiled_model_path, settings_path)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b74dcee",
"metadata": {},
"outputs": [],
"source": [
"
"res = ezkl.get_srs( settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18c8b7c7",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.gen_witness(data_path, compiled_model_path, witness_path)\n",
"assert os.path.isfile(witness_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b1c561a8",
"metadata": {},
"outputs": [],
"source": [
"\n",
"
"
"
"
"\n",
"\n",
"\n",
"res = ezkl.setup(\n",
" compiled_model_path,\n",
" vk_path,\n",
" pk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"assert os.path.isfile(vk_path)\n",
"assert os.path.isfile(pk_path)\n",
"assert os.path.isfile(settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c384cbc8",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"\n",
"proof_path = os.path.join('test.pf')\n",
"\n",
"res = ezkl.prove(\n",
" witness_path,\n",
" |
compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \n",
" \"single\",\n",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "76f00d41",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
" vk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"print(\"verified\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
}
},
"nbformat": 4,
"nbformat_minor": 5
} |
{
"cells": [
{
"cell_type": "markdown",
"id": "d0a82619",
"metadata": {},
"source": [
"Credits to [geohot](https:
"\n",
"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c22afe46",
"metadata": {},
"outputs": [],
"source": [
"%pip install pytorch_lightning\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "12fb79a8",
"metadata": {},
"outputs": [],
"source": [
" |
import random\n",
" |
import math\n",
" |
import numpy as np\n",
"\n",
" |
import torch\n",
"from torch |
import nn\n",
" |
import torch.nn.functional as F\n",
"\n",
" |
import pytorch_lightning as pl\n",
"\n",
"
"try:\n",
"
" |
import google.colab\n",
" |
import subprocess\n",
" |
import sys\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"ezkl\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"onnx\"])\n",
"\n",
"
"except:\n",
" pass\n",
"\n",
"\n",
"
"
"
"
"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8638e94e",
"metadata": {},
"outputs": [],
"source": [
" |
class BaseDataModule(pl.LightningDataModule):\n",
" def __init__(self, batch_size=32, split=0.8, *args, **kwargs):\n",
" super().__init__()\n",
" self.ds_X, self.ds_Y = self.get_dataset(*args, **kwargs)\n",
" self.split = int(self.ds_X.shape[0]*split)\n",
" self.batch_size = batch_size\n",
"\n",
" def train_dataloader(self):\n",
" ds_X_train, ds_Y_train = self.ds_X[0:self.split], self.ds_Y[0:self.split]\n",
" return torch.utils.data.DataLoader(list(zip(ds_X_train, ds_Y_train)), batch_size=self.batch_size)\n",
"\n",
" def val_dataloader(self):\n",
" ds_X_test, ds_Y_test = self.ds_X[self.split:], self.ds_Y[self.split:]\n",
" return torch.utils.data.DataLoader(list(zip(ds_X_test, ds_Y_test)), batch_size=self.batch_size)\n",
"\n",
" |
class ReverseDataModule(BaseDataModule):\n",
" def get_dataset(self, cnt=10000, seq_len=6):\n",
" ds = np.random.randint(0, 10, size=(cnt, seq_len))\n",
" return ds, ds[:, ::-1].ravel().reshape(cnt, seq_len)\n",
" \n",
"
" |
class AdditionDataModule(BaseDataModule):\n",
" def get_dataset(self):\n",
" ret = []\n",
" for i in range(100):\n",
" for j in range(100):\n",
" s = i+j\n",
" ret.append([i
" ds = np.array(ret)\n",
" return ds[:, 0:6], np.copy(ds[:, 1:]) \n",
"\n",
"
" |
class ParityDataModule(BaseDataModule):\n",
" def get_dataset(self, seq_len=10):\n",
" ds_X, ds_Y = [], []\n",
" for i in range(2**seq_len):\n",
" x = [int(x) for x in list(bin(i)[2:].rjust(seq_len, '0'))]\n",
" ds_X.append(x)\n",
" ds_Y.append((np.cumsum(x)%2).tolist())\n",
" return np.array(ds_X), np.array(ds_Y)\n",
" \n",
" |
class WikipediaDataModule(BaseDataModule):\n",
" def get_dataset(self, seq_len=50):\n",
" global enwik8\n",
" if 'enwik8' not in globals():\n",
" |
import requests\n",
" enwik8_zipped = requests.get(\"https:
" from zipfile |
import ZipFile\n",
" |
import io\n",
" enwik8 = ZipFile(io.BytesIO(enwik8_zipped)).read('enwik8')\n",
" en = np.frombuffer(enwik8, dtype=np.uint8).astype(np.int)\n",
" en = en[0:-seq_len+1]\n",
" en[en>127] = 127\n",
" return en[0:-1].reshape(-1, seq_len), en[1:].reshape(-1, seq_len)"
]
},
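The parity targets built by `ParityDataModule` can be checked in plain Python. This is a re-implementation of its cumulative-sum-mod-2 labels without numpy, to make the label definition concrete:

```python
# Running-parity targets, mirroring ParityDataModule.get_dataset above:
# Y[t] is the parity of the first t+1 bits of X.
def parity_dataset(seq_len):
    X, Y = [], []
    for i in range(2 ** seq_len):
        x = [int(b) for b in bin(i)[2:].rjust(seq_len, "0")]
        running, total = [], 0
        for b in x:
            total += b
            running.append(total % 2)   # same as np.cumsum(x) % 2
        X.append(x)
        Y.append(running)
    return X, Y

X, Y = parity_dataset(3)
assert len(X) == 2 ** 3
assert X[5] == [1, 0, 1]      # 5 = 0b101
assert Y[5] == [1, 1, 0]      # parities of the prefixes 1, 10, 101
```

Because the target at each position depends on the whole prefix, parity is a classic stress test of whether a sequence model actually propagates state.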
{
"cell_type": "code",
"execution_count": null,
"id": "323554ca",
"metadata": {},
"outputs": [],
"source": [
"def attention(queries, keys, values):\n",
" d = queries.shape[-1]\n",
" scores = torch.matmul(queries, keys.transpose(-2,-1))/math.sqrt(d)\n",
" attention_weights = F.softmax(scores, dim=-1)\n",
" return torch.matmul(attention_weights, values)\n",
"\n",
" |
class MultiHeadAttention(nn.Module):\n",
" def __init__(self, embed_dim, num_heads):\n",
" super(MultiHeadAttention, self).__init__()\n",
" self.embed_dim, self.num_heads = embed_dim, num_heads\n",
" assert embed_dim % num_heads == 0\n",
" self.projection_dim = embed_dim
" \n",
" self.W_q = nn.Linear(embed_dim, embed_dim)\n",
" self.W_k = nn.Linear(embed_dim, embed_dim)\n",
" self.W_v = nn.Linear(embed_dim, embed_dim)\n",
" self.W_o = nn.Linear(embed_dim, embed_dim)\n",
"\n",
" def transpose(self, x):\n",
" x = x.reshape(x.shape[0], x.shape[1], self.num_heads, self.projection_dim)\n",
" return x.permute(0, 2, 1, 3)\n",
" \n",
" def transpose_output(self, x):\n",
" x = x.permute(0, 2, 1, 3)\n",
" return x.reshape(x.shape[0], x.shape[1], self.embed_dim)\n",
" \n",
" def forward(self, q, k, v):\n",
" q = self.transpose(self.W_q(q))\n",
" k = self.transpose(self.W_k(k))\n",
" v = self.transpose(self.W_v(v))\n",
" output = attention(q, k, v)\n",
" return self.W_o(self.transpose_output(output))\n",
" \n",
" |
class TransformerBlock(nn.Module):\n",
" def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1):\n",
" super(TransformerBlock, self).__init__()\n",
" self.att = MultiHeadAttention(embed_dim, num_heads)\n",
" self.ffn = nn.Sequential(\n",
" nn.Linear(embed_dim, ff_dim), nn.ReLU(), nn.Linear(ff_dim, embed_dim)\n",
" )\n",
" self.layernorm1 = nn.LayerNorm(embed_dim)\n",
" self.layernorm2 = nn.LayerNorm(embed_dim)\n",
" self.dropout = nn.Dropout(rate)\n",
" \n",
" def forward(self, x):\n",
" x = self.layernorm1(x + self.dropout(self.att(x, x, x)))\n",
" x = self.layernorm2(x + self.dropout(self.ffn(x)))\n",
" return x\n",
" \n",
" |
class TokenAndPositionEmbedding(nn.Module):\n",
" def __init__(self, maxlen, vocab_size, embed_dim):\n",
" super(TokenAndPositionEmbedding, self).__init__()\n",
" self.token_emb = nn.Embedding(vocab_size, embed_dim)\n",
" self.pos_emb = nn.Embedding(maxlen, embed_dim)\n",
" def forward(self, x):\n",
" pos = torch.arange(0, x.size(1), dtype=torch.int32, device=x.device)\n",
" return self.token_emb(x) + self.pos_emb(pos).view(1, x.size(1), -1)"
]
},
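The `attention` function above can be sanity-checked with a plain-Python single-head version: when all keys are identical, the softmax weights are uniform, so the output is just the mean of the values.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of floats
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    # list-of-lists version of the scaled dot-product attention above
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

# identical keys -> uniform weights -> output is the mean of the values
out = attention([[1.0, 0.0]], [[1.0, 1.0], [1.0, 1.0]], [[0.0, 2.0], [4.0, 6.0]])
print(out)  # [[2.0, 4.0]]
```

The `1/sqrt(d)` scaling keeps the dot products from saturating the softmax as the head dimension grows.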
{
"cell_type": "code",
"execution_count": null,
"id": "167e42e3",
"metadata": {},
"outputs": [],
"source": [
" |
class LittleTransformer(pl.LightningModule):\n",
" def __init__(self, seq_len=6, max_value=10, layer_count=2, embed_dim=128, num_heads=4, ff_dim=32):\n",
" super().__init__()\n",
" self.max_value = max_value\n",
" self.model = nn.Sequential(\n",
" TokenAndPositionEmbedding(seq_len, max_value, embed_dim),\n",
" *[TransformerBlock(embed_dim, num_heads, ff_dim) for x in range(layer_count)],\n",
" nn.Linear(embed_dim, max_value),\n",
" nn.LogSoftmax(dim=-1))\n",
" \n",
" def forward(self, x):\n",
" return self.model(x)\n",
" \n",
" def training_step(self, batch, batch_idx):\n",
" x, y = batch\n",
" output = self.model(x)\n",
" loss = F.nll_loss(output.view(-1, self.max_value), y.view(-1))\n",
" self.log(\"train_loss\", loss)\n",
" return loss\n",
" \n",
" def validation_step(self, val_batch, batch_idx):\n",
" x, y = val_batch\n",
" pred = self.model(x).argmax(dim=2)\n",
" val_accuracy = (pred == y).type(torch.float).mean()\n",
" self.log(\"val_accuracy\", val_accuracy, prog_bar=True)\n",
" \n",
" def configure_optimizers(self):\n",
" if self.device.type == 'cuda':\n",
" |
import apex\n",
" return apex.optimizers.FusedAdam(self.parameters(), lr=3e-4)\n",
" else:\n",
" return torch.optim.Adam(self.parameters(), lr=3e-4)"
]
},
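Since the model ends in `LogSoftmax` and trains with `nll_loss`, the combination is exactly cross-entropy. A scalar check in plain Python (logits and target here are made-up example values):

```python
import math

# log-softmax + negative log-likelihood == cross-entropy, for one sample
logits = [2.0, 0.5, -1.0]
target = 0

# log of the softmax normalizer, computed stably
m = max(logits)
log_z = m + math.log(sum(math.exp(l - m) for l in logits))
log_probs = [l - log_z for l in logits]   # what LogSoftmax outputs

nll = -log_probs[target]                  # what nll_loss computes
ce = log_z - logits[target]               # cross-entropy, computed directly
assert abs(nll - ce) < 1e-12
```

This is why the pairing is standard: applying `nll_loss` to log-probabilities avoids ever exponentiating and re-taking logs, which would lose precision.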
{
"cell_type": "code",
"execution_count": null,
"id": "a2f48c98",
"metadata": {},
"outputs": [],
"source": [
"model = LittleTransformer(seq_len=6)\n",
"trainer = pl.Trainer(enable_progress_bar=True, max_epochs=0)\n",
"data = AdditionDataModule(batch_size=64)\n",
"
"
"trainer.fit(model, data)"
]
},
{
"cell_type": "markdown",
"id": "fa7d277e",
"metadata": {},
"source": [
"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6f339a28",
"metadata": {},
"outputs": [],
"source": [
"\n",
" |
import os \n",
"\n",
"model_path = os.path.join('network.onnx')\n",
"compiled_model_path = os.path.join('network.compiled')\n",
"pk_path = os.path.join('test.pk')\n",
"vk_path = os.path.join('test.vk')\n",
"settings_path = os.path.join('settings.json')\n",
"\n",
"witness_path = os.path.join('witness.json')\n",
"data_path = os.path.join('input.json')\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "27ce542b",
"metadata": {},
"outputs": [],
"source": [
"\n",
" |
import json\n",
"\n",
"\n",
"shape = [1, 6]\n",
"
"x = torch.zeros(shape, dtype=torch.long)\n",
"x = x.reshape(shape)\n",
"\n",
"print(x)\n",
"\n",
"
"model.eval()\n",
"model.to('cpu')\n",
"\n",
"
"torch.onnx.export(model,
" x,
" model_path,
" export_params=True,
" opset_version=10,
" do_constant_folding=True,
" input_names = ['input'],
" output_names = ['output'],
" dynamic_axes={'input' : {0 : 'batch_size'},
" 'output' : {0 : 'batch_size'}})\n",
"\n",
"data_array = ((x).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data_json = dict(input_data = [data_array])\n",
"\n",
"print(data_json)\n",
"\n",
"
"json.dump( data_json, open(data_path, 'w' ))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "36ddc6f9",
"metadata": {},
"outputs": [],
"source": [
"import ezkl\n",
"\n",
"!RUST_LOG=trace\n",
"# generate the settings file\n",
"res = ezkl.gen_settings(model_path, settings_path)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2fe6d972",
"metadata": {},
"outputs": [],
"source": [
"cal_path = os.path.join(\"calibration.json\")\n",
"\n",
"data_array = (torch.randn(20, *shape).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_data = [data_array])\n",
"\n",
"# serialize calibration data into file\n",
"json.dump(data, open(cal_path, 'w'))\n",
"\n",
"res = ezkl.calibrate_settings(cal_path, model_path, settings_path, \"resources\")\n",
"assert res == True\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0990f5a8",
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1b80dc01",
"metadata": {},
"outputs": [],
"source": [
"# fetch the structured reference string (SRS) needed for KZG commitments\n",
"res = ezkl.get_srs(settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "54cbde29",
"metadata": {},
"outputs": [],
"source": [
"# now generate the witness file\n",
"witness_path = \"gan_witness.json\"\n",
"\n",
"res = ezkl.gen_witness(data_path, compiled_model_path, witness_path)\n",
"assert os.path.isfile(witness_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "28760638",
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.mock(witness_path, compiled_model_path)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5e595112",
"metadata": {},
"outputs": [],
"source": [
"\n",
"# setup the circuit: generate the proving and verifying keys\n",
"\n",
"res = ezkl.setup(\n",
" compiled_model_path,\n",
" vk_path,\n",
" pk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"assert os.path.isfile(vk_path)\n",
"assert os.path.isfile(pk_path)\n",
"assert os.path.isfile(settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d37adaef",
"metadata": {},
"outputs": [],
"source": [
"# generate a proof\n",
"\n",
"\n",
"proof_path = os.path.join('test.pf')\n",
"\n",
"res = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \n",
" \"single\",\n",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5b58acd5",
"metadata": {},
"outputs": [],
"source": [
"# verify the proof\n",
"res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
" vk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"print(\"verified\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
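The notebook above repeatedly flattens its input tensor into a single list and writes it as `input.json` before handing it to ezkl. Below is a stdlib-only sketch of that serialization step, with plain Python lists standing in for the torch tensor; the `flatten` helper is illustrative and not part of the ezkl API:

```python
import json

def flatten(nested):
    """Recursively flatten a nested list into a flat list of scalars,
    mirroring tensor.reshape([-1]).tolist() in the notebook cells."""
    out = []
    for item in nested:
        if isinstance(item, list):
            out.extend(flatten(item))
        else:
            out.append(item)
    return out

# stand-in for the [1, 6] input tensor used for the addition model
x = [[0, 0, 0, 0, 0, 0]]
data_json = {"input_data": [flatten(x)]}

# serialize the input data, as the notebook does before calling ezkl
with open("input.json", "w") as f:
    json.dump(data_json, f)
```

The key point is that ezkl expects `input_data` to be a list containing one flat list of values, regardless of the original tensor shape; the shape is recovered from the ONNX model at witness generation time.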
{
"cells": [
{
"cell_type": "markdown",
"id": "d0a82619",
"metadata": {},
"source": [
"\n",
"This notebook exports a small LSTM to ONNX and generates a proof of inference with ezkl."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "12fb79a8",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"import torch\n",
"from torch import nn\n",
"import torch.nn.functional as F\n",
"\n",
"# check if notebook is in colab\n",
"try:\n",
"    # install ezkl and onnx if in colab\n",
"    import google.colab\n",
"    import subprocess\n",
"    import sys\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"ezkl\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"onnx\"])\n",
"\n",
"# rely on a local installation of ezkl if the notebook is not in colab\n",
"except:\n",
" pass"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a2f48c98",
"metadata": {},
"outputs": [],
"source": [
"model = nn.LSTM(3, 3)  # input dim is 3, hidden dim is 3\n",
"x = torch.randn(1, 3)\n",
"\n",
"# the model is exported to ONNX in a later cell"
]
},
{
"cell_type": "markdown",
"id": "fa7d277e",
"metadata": {},
"source": [
"Here we export the model to ONNX and set up the file paths used in the rest of the notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6f339a28",
"metadata": {},
"outputs": [],
"source": [
"\n",
"import os\n",
"import ezkl\n",
"\n",
"\n",
"model_path = os.path.join('network.onnx')\n",
"compiled_model_path = os.path.join('network.compiled')\n",
"pk_path = os.path.join('test.pk')\n",
"vk_path = os.path.join('test.vk')\n",
"settings_path = os.path.join('settings.json')\n",
"\n",
"witness_path = os.path.join('witness.json')\n",
"data_path = os.path.join('input.json')\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "27ce542b",
"metadata": {},
"outputs": [],
"source": [
"\n",
"import json\n",
"\n",
"\n",
"# flip the model into inference mode\n",
"model.eval()\n",
"model.to('cpu')\n",
"\n",
"# export the model to ONNX\n",
"torch.onnx.export(model,               # model being run\n",
"                  x,                   # model input (or a tuple for multiple inputs)\n",
"                  model_path,          # where to save the model\n",
"                  export_params=True,  # store the trained parameter weights inside the model file\n",
"                  opset_version=10,    # the ONNX version to export the model to\n",
"                  do_constant_folding=True,  # whether to execute constant folding for optimization\n",
"                  input_names = ['input'],   # the model's input names\n",
"                  output_names = ['output'], # the model's output names\n",
"                  dynamic_axes={'input' : {0 : 'batch_size'},    # variable length axes\n",
"                                'output' : {0 : 'batch_size'}})\n",
"\n",
"\n",
"SEQ_LEN = 10\n",
"shape = (SEQ_LEN, 3)\n",
"# generate a random input of the declared shape\n",
"x = torch.randn(*shape)\n",
"\n",
"data_array = ((x).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data_json = dict(input_data = [data_array])\n",
"\n",
"print(data_json)\n",
"\n",
"# serialize data into file\n",
"json.dump( data_json, open(data_path, 'w' ))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2fe6d972",
"metadata": {},
"outputs": [],
"source": [
"\n",
"\n",
"run_args = ezkl.PyRunArgs()\n",
"run_args.variables = [(\"batch_size\", SEQ_LEN)]\n",
"\n",
"# generate the settings file\n",
"res = ezkl.gen_settings(model_path, settings_path, py_run_args=run_args)\n",
"assert res == True\n",
"\n",
"res = ezkl.calibrate_settings(data_path, model_path, settings_path, \"resources\")\n",
"assert res == True\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cal_path = os.path.join(\"calibration.json\")\n",
"\n",
"data_array = (torch.randn(10, *shape).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_data = [data_array])\n",
"\n",
"# serialize calibration data into file\n",
"json.dump(data, open(cal_path, 'w'))\n",
"\n",
"ezkl.calibrate_settings(cal_path, model_path, settings_path, \"resources\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0990f5a8",
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1b80dc01",
"metadata": {},
"outputs": [],
"source": [
"# fetch the structured reference string (SRS) needed for KZG commitments\n",
"res = ezkl.get_srs(settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "54cbde29",
"metadata": {},
"outputs": [],
"source": [
"# now generate the witness file\n",
"witness_path = \"lstmwitness.json\"\n",
"\n",
"res = ezkl.gen_witness(data_path, compiled_model_path, witness_path)\n",
"assert os.path.isfile(witness_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "28760638",
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.mock(witness_path, compiled_model_path)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5e595112",
"metadata": {},
"outputs": [],
"source": [
"\n",
"# setup the circuit: generate the proving and verifying keys\n",
"\n",
"res = ezkl.setup(\n",
" compiled_model_path,\n",
" vk_path,\n",
" pk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"assert os.path.isfile(vk_path)\n",
"assert os.path.isfile(pk_path)\n",
"assert os.path.isfile(settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d37adaef",
"metadata": {},
"outputs": [],
"source": [
"# generate a proof\n",
"\n",
"\n",
"proof_path = os.path.join('test.pf')\n",
"\n",
"res = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \n",
" \"single\",\n",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5b58acd5",
"metadata": {},
"outputs": [],
"source": [
"# verify the proof\n",
"res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
"        vk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"print(\"verified\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
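The next notebook builds a shovel `config.json` whose `pg_url` field uses the `$PG_URL` environment-variable placeholder, then writes it to disk before launching the indexer. A stdlib-only sketch of composing a cut-down config of the same shape and round-tripping it to confirm it serializes as valid JSON (field names mirror the notebook's config; the RPC source URLs are omitted, as they are notebook-specific):

```python
import json

# minimal shovel-style config, mirroring the structure written in the notebook;
# "$PG_URL" is left as a placeholder to be resolved from the environment
config = {
    "pg_url": "$PG_URL",
    "eth_sources": [
        {"name": "mainnet", "chain_id": 1},
    ],
    "integrations": [{
        "name": "usdc_transfer",
        "enabled": True,
        "sources": [{"name": "mainnet"}],
        "table": {
            "name": "usdc",
            "columns": [
                {"name": "log_addr", "type": "bytea"},
                {"name": "block_num", "type": "numeric"},
            ],
        },
    }],
}

# write the config to disk, as the notebook does before starting shovel
with open("config.json", "w") as f:
    json.dump(config, f)

# round-trip to confirm the file parses back to the same structure
with open("config.json") as f:
    loaded = json.load(f)
```

Round-tripping catches serialization mistakes (e.g. non-JSON-serializable values) before handing the file to the indexer process.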
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Mean of ERC20 transfer amounts\n",
"\n",
"This notebook shows how to calculate the mean of ERC20 transfer amounts, pulling data in from a Postgres database. First we install and get the necessary libraries running. \n",
"The first of which is [shovel](https:
"\n",
"Make sure you install postgres if needed https:
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import getpass\n",
"import json\n",
"import time\n",
"import subprocess\n",
"\n",
"# download the shovel binary and make it executable\n",
"os.system(\"curl -LO https:
"os.system(\"chmod +x shovel\")\n",
"\n",
"\n",
"os.environ[\"PG_URL\"] = \"postgres:
"\n",
"# shovel config: index USDC Transfer events on mainnet and base\n",
"config = {\n",
" \"pg_url\": \"$PG_URL\",\n",
" \"eth_sources\": [\n",
" {\"name\": \"mainnet\", \"chain_id\": 1, \"url\": \"https:
" {\"name\": \"base\", \"chain_id\": 8453, \"url\": \"https:
" ],\n",
" \"integrations\": [{\n",
" \"name\": \"usdc_transfer\",\n",
" \"enabled\": True,\n",
" \"sources\": [{\"name\": \"mainnet\"}, {\"name\": \"base\"}],\n",
" \"table\": {\n",
" \"name\": \"usdc\",\n",
" \"columns\": [\n",
" {\"name\": \"log_addr\", \"type\": \"bytea\"},\n",
" {\"name\": \"block_num\", \"type\": \"numeric\"},\n",
" {\"name\": \"f\", \"type\": \"bytea\"},\n",
" {\"name\": \"t\", \"type\": \"bytea\"},\n",
" {\"name\": \"v\", \"type\": \"numeric\"}\n",
" ]\n",
" },\n",
" \"block\": [\n",
" {\"name\": \"block_num\", \"column\": \"block_num\"},\n",
" {\n",
" \"name\": \"log_addr\",\n",
" \"column\": \"log_addr\",\n",
" \"filter_op\": \"contains\",\n",
" \"filter_arg\": [\n",
" \"a0b86991c6218b36c1d19d4a2e9eb0ce3606eb48\",\n",
" \"833589fCD6eDb6E08f4c7C32D4f71b54bdA02913\"\n",
" ]\n",
" }\n",
" ],\n",
" \"event\": {\n",
" \"name\": \"Transfer\",\n",
" \"type\": \"event\",\n",
" \"anonymous\": False,\n",
" \"inputs\": [\n",
" {\"indexed\": True, \"name\": \"from\", \"type\": \"address\", \"column\": \"f\"},\n",
"            {\"indexed\": True, \"name\": \"to\", \"type\": \"address\", \"column\": \"t\"},\n",
" {\"indexed\": False, \"name\": \"value\", \"type\": \"uint256\", \"column\": \"v\"}\n",
" ]\n",
" }\n",
" }]\n",
"}\n",
"\n",
"# write the shovel config to disk\n",
"with open(\"config.json\", \"w\") as f:\n",
" f.write(json.dumps(config))\n",
"\n",
"\n",
"# create the database and start shovel\n",
"os.system(\"echo $PG_URL\")\n",
"\n",
"os.system(\"createdb -h localhost -p 5432 shovel\")\n",
"\n",
"os.system(\"echo shovel is now installed. starting:\")\n",
"\n",
"command = [\"./shovel\", \"-config\", \"config.json\"]\n",
"subprocess.Popen(command)\n",
"\n",
"os.system(\"echo shovel started.\")\n",
"\n",
"time.sleep(5)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2wIAHwqH2_mo"
},
"source": [
"**Import Dependencies**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "9Byiv2Nc2MsK"
},
"outputs": [],
"source": [
"# check if notebook is in colab\n",
"try:\n",
"    # install ezkl and onnx if in colab\n",
"    import google.colab\n",
"    import subprocess\n",
"    import sys\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"ezkl\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"onnx\"])\n",
"\n",
"# rely on a local installation of ezkl if the notebook is not in colab\n",
"except:\n",
" pass\n",
"\n",
"import ezkl\n",
"import torch\n",
"import datetime\n",
"import pandas as pd\n",
"import requests\n",
"import json\n",
"import os\n",
"\n",
" |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.