"try:\n",
"    # install dependencies when running in colab\n",
"    import google.colab\n",
"    import subprocess\n",
"    import sys\n",
"    subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"ezkl\"])\n",
"    subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"onnx\"])\n",
"\n",
"except:\n",
"    pass\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from torch |
import nn\n",
" |
import ezkl\n",
" |
import os\n",
" |
import json\n",
" |
import torch\n",
" |
import math\n",
"\n",
"
"phi = torch.tensor(5 * math.pi / 180)\n",
"s = torch.sin(phi)\n",
"c = torch.cos(phi)\n",
"\n",
"\n",
" |
class RotateStuff(nn.Module):\n",
" def __init__(self):\n",
" super(RotateStuff, self).__init__()\n",
"\n",
"
" self.rot = torch.stack([torch.stack([c, -s]),\n",
" torch.stack([s, c])]).t()\n",
"\n",
" def forward(self, x):\n",
" x_rot = x @ self.rot
" return x_rot\n",
"\n",
"\n",
"circuit = RotateStuff()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This will showcase the principle directions of rotation by plotting the rotation of a single unit vector."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from matplotlib |
import pyplot\n",
"pyplot.figure(figsize=(3, 3))\n",
"pyplot.arrow(0, 0, 1, 0, width=0.02, alpha=0.5)\n",
"pyplot.arrow(0, 0, 0, 1, width=0.02, alpha=0.5)\n",
"pyplot.arrow(0, 0, circuit.rot[0, 0].item(), circuit.rot[0, 1].item(), width=0.02)\n",
"pyplot.arrow(0, 0, circuit.rot[1, 0].item(), circuit.rot[1, 1].item(), width=0.02)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b37637c4",
"metadata": {},
"outputs": [],
"source": [
"model_path = os.path.join('network.onnx')\n",
"compiled_model_path = os.path.join('network.compiled')\n",
"pk_path = os.path.join('test.pk')\n",
"vk_path = os.path.join('test.vk')\n",
"settings_path = os.path.join('settings.json')\n",
"srs_path = os.path.join('kzg.srs')\n",
"witness_path = os.path.join('witness.json')\n",
"data_path = os.path.join('input.json')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "82db373a",
"metadata": {},
"outputs": [],
"source": [
"\n",
"\n",
"
"x = torch.tensor([[1, 0], [0, 1]], dtype=torch.float32)\n",
"\n",
"
"circuit.eval()\n",
"\n",
"
"torch.onnx.export(circuit,
" x,
" model_path,
" export_params=True,
" opset_version=10,
" do_constant_folding=True,
" input_names = ['input'],
" output_names = ['output'],
" )\n",
"\n",
"data_array = ((x).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_data = [data_array])\n",
"\n",
"
"json.dump( data, open(data_path, 'w' ))\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For demo purposes we deploy these coordinates |
to a contract running locally using Anvil. This creates our on-chain world. We then rotate the world using the EZKL package and submit the proof to the contract. The contract then updates the world rotation. For demo purposes we do this repeatedly, rotating the world by 1 transform each time."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
" |
import subprocess\n",
" |
import time\n",
" |
import threading\n",
"\n",
"
"
"\n",
"RPC_URL = \"http:
"\n",
"
"anvil_process = None\n",
"\n",
"def start_anvil():\n",
" global anvil_process\n",
" if anvil_process is None:\n",
" anvil_process = subprocess.Popen([\"anvil\", \"-p\", \"3030\", \"--code-size-limit=41943040\"])\n",
" if anvil_process.returncode is not None:\n",
" raise Exception(\"failed to start anvil process\")\n",
" time.sleep(3)\n",
"\n",
"def stop_anvil():\n",
" global anvil_process\n",
" if anvil_process is not None:\n",
" anvil_process.terminate()\n",
" anvil_process = None\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We define our `PyRunArgs` objects which contains the visibility parameters for out model. \n",
"- `input_visibility` defines the visibility of the model inputs\n",
"- `param_visibility` defines the visibility of the model weights and constants and parameters \n",
"- `output_visibility` defines the visibility of the model outputs\n",
"\n",
"Here we create the following setup:\n",
"- `input_visibility`: \"public\"\n",
"- `param_visibility`: \"fixed\"\n",
"- `output_visibility`: public"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5e374a2",
"metadata": {},
"outputs": [],
"source": [
"py_run_args = ezkl.PyRunArgs()\n",
"py_run_args.input_visibility = \"public\"\n",
"py_run_args.output_visibility = \"public\"\n",
"py_run_args.param_visibility = \"private\"
"py_run_args.scale_rebase_multiplier = 10\n",
"\n",
"res = ezkl.gen_settings(model_path, settings_path, py_run_args=py_run_args)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3aa4f090",
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"assert res == True"
]
},
{
"cell_t |
ype": "markdown",
"metadata": {},
"source": [
"We also define a contract that holds out test data. This contract will contain in its storage the data that we will read from and attest to. In production you would not need to set up a local anvil instance. Instead you would replace RPC_URL with the actual RPC endpoint of the chain you are deploying your verifiers too, reading from the data on said chain."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ezkl.setup_test_evm_witness(\n",
" data_path,\n",
" compiled_model_path,\n",
"
" data_path,\n",
" input_source=ezkl.PyTestDataSource.OnChain,\n",
" output_source=ezkl.PyTestDataSource.File,\n",
" rpc_url=RPC_URL)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we use Halo2 with KZG-commitments we need an SRS string from (preferably) a multi-party trusted setup ceremony. For an overview of the procedures for such a ceremony check out [this page](https:
"\n",
"These SRS were generated with [this](https:
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b74dcee",
"metadata": {},
"outputs": [],
"source": [
"
"res = ezkl.get_srs( settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18c8b7c7",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"witness = ezkl.gen_witness(data_path, compiled_model_path, witness_path)\n",
"assert os.path.isfile(witness_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here we setup verifying and proving keys for the circuit. As the name suggests the proving key is needed for ... proving and the verifying key is needed for ... verifying. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b1c561a8",
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.setup(\n",
" compile |
d_model_path,\n",
" vk_path,\n",
" pk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"assert os.path.isfile(vk_path)\n",
"assert os.path.isfile(pk_path)\n",
"assert os.path.isfile(settings_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now create an EVM verifier contract from our circuit. This contract will be deployed to the chain we are using. In this case we are using a local anvil instance."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"abi_path = 'test.abi'\n",
"sol_code_path = 'test.sol'\n",
"\n",
"res = ezkl.create_evm_verifier(\n",
" vk_path,\n",
" \n",
" settings_path,\n",
" sol_code_path,\n",
" abi_path,\n",
" )\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
" |
import json\n",
"\n",
"addr_path_verifier = \"addr_verifier.txt\"\n",
"\n",
"res = ezkl.deploy_evm(\n",
" addr_path_verifier,\n",
" sol_code_path,\n",
" 'http:
")\n",
"\n",
"assert res == True"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With the vanilla verifier deployed, we can now create the data attestation contract, which will read in the instances from the calldata to the verifier, attest to them, call the verifier and then return the result. \n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"abi_path = 'test.abi'\n",
"sol_code_path = 'test.sol'\n",
"input_path = 'input.json'\n",
"\n",
"res = ezkl.create_evm_data_attestation(\n",
" input_path,\n",
" settings_path,\n",
" sol_code_path,\n",
" abi_path,\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"addr_path_da = \"addr_da.txt\"\n",
"\n",
"res = ezkl.deploy_da_evm(\n",
" addr_path_da,\n",
" input_path,\n",
" settings_path,\n",
" sol_code_path,\n",
" RPC_URL,\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can pull in the data from the contract and calculate a new set of coordinates. We then rotate the world by 1 transform and submit the proof to the contract. The contract could then update the world rotation (logic not inserted here). For demo purposes we do this repeatedly, rotating the world by 1 transform. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c384cbc8",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"\n",
"proof_path = os.path.join('test.pf' |
)\n",
"\n",
"res = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \n",
" \"single\",\n",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the view only verify method on the contract to verify the proof. Since it is a view function this is safe to use in production since you don't have to pass your private key."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "76f00d41",
"metadata": {},
"outputs": [],
"source": [
"
"addr_verifier = None\n",
"with open(addr_path_verifier, 'r') as f:\n",
" addr = f.read()\n",
"
"addr_da = None\n",
"with open(addr_path_da, 'r') as f:\n",
" addr_da = f.read()\n",
"\n",
"res = ezkl.verify_evm(\n",
" addr,\n",
" proof_path,\n",
" RPC_URL,\n",
" addr_da,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sanity check lets plot the rotations of the unit vectors. We can see that the unit vectors rotate as expected by the output of the circuit. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"witness['outputs'][0][0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"settings = json.load(open(settings_path, 'r'))\n",
"out_scale = settings[\"model_output_scales\"][0]\n",
"\n",
"from matplotlib |
import pyplot\n",
"pyplot.figure(figsize=(3, 3))\n",
"pyplot.arrow(0, 0, 1, 0, width=0.02, alpha=0.5)\n",
"pyplot.arrow(0, 0, 0, 1, width=0.02, alpha=0.5)\n",
"\n",
"arrow_x = ezkl.felt_to_float(witness['outputs'][0][0], out_scale)\n",
"arrow_y = ezkl.felt_to_float(witness['outputs'][0][1], out_scale)\n",
"pyplot.arrow(0, 0, arrow_x, arrow_y, width=0.02)\n",
"arrow_x = ezkl.felt_to_float(witness['outputs'][0][2], out_scale)\n",
"arrow_y = ezkl.felt_to_float(witness['outputs'][0][3], out_scale)\n",
"pyplot.arrow(0, 0, arrow_x, arrow_y, width=0.02)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
|
{
"cells": [
{
"attachments": {
"image-3.png": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAywAAAV+CAYAAACUGLpbAAAMPWlDQ1BJQ0MgUHJvZmlsZQAASImVVwdYU8kWnltSIbQAAlJCb4JIDSAlhBZAercRkgChhBgIKnZ0UcG1iwVs6KqIYqfZETuLYsO+WFBR1sWCXXmTArruK9873zf3/vefM/85c+7cMgCon+CKxbmoBgB5okJJbEgAIzkllUF6ChCgA8hgONDj8grErOjoCABt8Px3e3cDekO76iDT+mf/fzVNvqCABwASDXE6v4CXB/FBAPAqnlhSCABRxptPLhTLMGxAWwIThHiBDGcqcJUMpyvwXrlPfCwb4lYAyKpcriQTALXLkGcU8TKhhlofxE4ivlAEgDoDYt+8vHw+xGkQ20AfMcQyfWb6DzqZf9NMH9LkcjOHsGIuciMHCgvEudyp/2c5/rfl5UoHY1jBppolCY2VzRnW7WZOfrgMq0LcK0qPjIJYC+IPQr7cH2KUmiUNTVD4o4a8AjasGdCF2InPDQyH2BDiYFFuZISST88QBnMghisEnSIs5MRDrAfxAkFBUJzSZ5MkP1YZC63PkLBZSv4cVyKPK4t1X5qTwFLqv84ScJT6mFpxVnwSxFSILYqEiZEQq0HsWJATF670GV2cxY4c9JFIY2X5W0AcKxCFBCj0saIMSXCs0r8sr2BwvtimLCEnUon3F2bFhyrqg7XyuPL84VywywIRK2FQR1CQHDE4F74gMEgxd+yZQJQQp9T5IC4MiFWMxani3GilP24myA2R8WYQuxYUxSnH4omFcEEq9PEMcWF0vCJPvDibGxatyAdfCiIAGwQCBpDClg7yQTYQtvc29MIrRU8w4AIJyAQC4KBkBkckyXtE8BgHisGfEAlAwdC4AHmvABRB/usQqzg6gAx5b5F8RA54AnEeCAe58FoqHyUaipYIHkNG+I/oXNh4MN9c2GT9/54fZL8zLMhEKBnpYESG+qAnMYgYSAwlBhNtcQPcF/fGI+DRHzZnnIl7Ds7juz/hCaGD8JBwndBFuDVRWCL5KcsxoAvqBytrkf5jLXArqOmGB+A+UB0q47q4AXDAXWEcFu4HI7tBlq3MW1YVxk/af5vBD3dD6UdxoqCUYRR/is3PI9Xs1NyGVGS1/rE+ilzTh+rNHur5OT77h+rz4Tn8Z09sAXYAO4udxM5jR7AGwMCOY41YG3ZUhodW12P56hqMFivPJwfqCP8Rb/DOyipZ4FTr1OP0RdFXKJgie0cDdr54qkSYmVXIYMEvgoDBEfEcRzCcnZxdAJB9XxSvrzcx8u8Gotv2nZv7BwA+xwcGBg5/58KOA7DPAz7+Td85Gyb8dKgAcK6JJ5UUKThcdiDAt4Q6fNL0gTEwBzZwPs7AHXgDfxAEwkAUiAcpYALMPguucwmYDKaDOaAUlIOlYBVYBzaCLWAH2A32gwZwBJwEZ8BFcBlcB3fg6ukGL0AfeAc+IwhCQmgIHdFHTBBLxB5xRpiILxKERCCxSAqShmQiIkSKTEfmIuXIcmQdshmpQfYhTchJ5DzSgdxCHiA9yGvkE4qhqqg2aoRaoSNRJspCw9F4dDyaiU5Ci9F56GJ0DVqN7kLr0ZPoRfQ62oW+QPsxgKlgupgp5oAxMTYWhaViGZgEm4mVYRVYNVaHNcP7fBXrwnqxjzgRp+MM3AGu4FA8Aefhk/CZ+CJ8Hb4Dr8db8av4A7wP/0agEQwJ9gQvAoeQTMgkTCaUEioI2wiHCKfhs9RNeEckEnWJ1kQP+CymELOJ04iLiOuJe4gniB3ER8R+EomkT7In+ZCiSFxSIamUtJa0i3ScdIXUTfpAViGbkJ3JweRUsohcQq4g7yQfI18hPyV/pmhQLClelCgKnzKVsoSyldJMuUTppnymalKtqT7UeGo2dQ51DbWOepp6l/pGRUXFTMVTJ |
UZFqDJbZY3KXpVzKg9UPqpqqdqpslXHqUpVF6tuVz2hekv1DY1Gs6L501JphbTFtBraKdp92gc1upqjGkeNrzZLrVKtXu2K2kt1irqlOkt9gnqxeoX6AfVL6r0aFA0rDbYGV2OmRqVGk0anRr8mXXOUZpRmnuYizZ2a5zWfaZG0rLSCtPha87S2aJ3SekTH6OZ0Np1Hn0vfSj9N79Ymaltrc7Sztcu1d2u3a/fpaOm46iTqTNGp1Dmq06WL6VrpcnRzdZfo7te9oftpmNEw1jDBsIXD6oZdGfZeb7iev55Ar0xvj951vU/6DP0g/Rz9ZfoN+vcMcAM7gxiDyQYbDE4b9A7XHu49nDe8bPj+4bcNUUM7w1jDaYZbDNsM+42MjUKMxEZrjU4Z9RrrGvsbZxuvND5m3GNCN/E1EZqsNDlu8pyhw2AxchlrGK2MPlND01BTqelm03bTz2bWZglmJWZ7zO6ZU82Z5hnmK81bzPssTCzGWEy3qLW4bUmxZFpmWa62PGv53sraKslqvlWD1TNrPWuOdbF1rfVdG5qNn80km2qba7ZEW6Ztju1628t2qJ2bXZZdpd0le9Te3V5ov96+YwRhhOcI0YjqEZ0Oqg4shyKHWocHjrqOEY4ljg2OL0dajEwduWzk2ZHfnNyccp22Ot0ZpTUqbFTJqOZRr53tnHnOlc7XXGguwS6zXBpdXrnauwpcN7jedKO7jXGb79bi9tXdw13iXufe42HhkeZR5dHJ1GZGMxcxz3kSPAM8Z3ke8fzo5e5V6LXf6y9vB+8c753ez0ZbjxaM3jr6kY+ZD9dns0+XL8M3zXeTb5efqR/Xr9rvob+5P99/m/9Tli0rm7WL9TLAKUAScCjgPduLPYN9IhALDAksC2wP0gpKCFoXdD/YLDgzuDa4L8QtZFrIiVBCaHjostBOjhGHx6nh9IV5hM0Iaw1XDY8LXxf+MMIuQhLRPAYdEzZmxZi7kZaRosiGKBDFiVoRdS/aOnpS9OEYYkx0TGXMk9hRsdNjz8bR4ybG7Yx7Fx8QvyT+ToJNgjShJVE9cVxiTeL7pMCk5UldySOTZyRfTDFIEaY0ppJSE1O3pfaPDRq7amz3OLdxpeNujLceP2X8+QkGE3InHJ2oPpE78UAaIS0pbWfaF24Ut5rbn85Jr0rv47F5q3kv+P78lfwegY9gueBphk/G8oxnmT6ZKzJ7svyyKrJ6hWzhOuGr7NDsjdnvc6JytucM5Cbl7skj56XlNYm0RDmi1nzj/Cn5HWJ7cam4a5LXpFWT+iThkm0FSMH4gsZCbfgj3ya1kf4ifVDkW1RZ9GFy4uQDUzSniKa0TbWbunDq0+Lg4t+m4dN401qmm06fM/3BDNaMzTORmekzW2aZz5o3q3t2yOwdc6hzcub8XuJUsrzk7dykuc3zjObNnvfol5BfakvVSiWlnfO9529cgC8QLmhf6LJw7cJvZfyyC+VO5RXlXxbxFl34ddSva34dWJyxuH2J+5INS4lLRUtvLPNbtmO55vLi5Y9WjFlRv5Kxsmzl21UTV52vcK3YuJq6Wrq6a03Emsa1FmuXrv2yLmvd9cqAyj1VhlULq96v56+/ssF/Q91Go43lGz9tEm66uTlkc321VXXFFuKWoi1PtiZuPfsb87eabQbbyrd93S7a3rUjdkdrjUdNzU7DnUtq0Vppbc+ucbsu7w7c3VjnULd5j+6e8r1gr3Tv831p+27sD9/fcoB5oO6g5cGqQ/RDZfVI/dT6voashq7GlMaOprCmlmbv5kOHHQ9vP2J6pPKoztElx6jH5h0bOF58vP+E+ETvycyTj1omttw5lXzqWmtMa/vp8NPnzgSfOXWWdfb4OZ9zR857nW+6wLzQcNH9Yn2bW9uh391+P9Tu3l5/yeNS42XPy80dozuOXfG7cvJq4NUz1zjXLl6PvN5xI+HGzc5xnV03+Tef3cq99ep20e3Pd2bfJdwtu6dxr+K+4f3qP2z/2NPl3nX0QeCDt
odxD+884j168bjg8ZfueU9oTyqemjyteeb87EhPcM/l52Ofd |
78Qv/jcW/qn5p9VL21eHvzL/6+2vuS+7leSVwOvF73Rf7P9revblv7o/vvv8t59fl/2Qf/Djo/Mj2c/JX16+nnyF9KXNV9tvzZ/C/92dyBvYEDMlXDlvwIYbGhGBgCvtwNASwGADvdn1LGK/Z/cEMWeVY7Af8KKPaLc3AGog
},
"image.png": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAUAAAAEbCAYAAACr2V2eAAABYmlDQ1BJQ0MgUHJvZmlsZQAAKJF1kDFLw1AUhU9stSAVHRwEHQKKUy01rdi1LSKCQxoVqlvyWlMlTR9JRNTFQRengi5uUhd/gS4OjoKDguAgIoKDP0DsoiXeNGpbxft43I/DvYfDBTrCKudGEEDJdCxlOi3mFpfE0Au60ENPQEBlNk/J8iyN4Lu3V+2O5qhuxzyvq5px+bw3PJi1N6Nscmv173xbdecLNqP+QT/BuOUAQoxYXne4x9vE/RaFIj7wWPf5xGPN5/PGzLySIb4h7mNFNU/8RBzRWnS9hUvGGvvK4KUPF8yFOeoD9IeQRgEmshAxhRzimEAM41D+2Uk0djIog2MDFlagowiHtlOkcBjkJmKGHBmiiBBL5Cch7t369w2bWrkKJN+AQKWpaYfA2S7FvG9qI0dA7w5wes1VS/25rFAL2stxyedwGuh8dN3XUSC0D9Qrrvtedd36Mfk/ABfmJ+uTZFvl1hD0AAAAVmVYSWZNTQAqAAAACAABh2kABAAAAAEAAAAaAAAAAAADkoYABwAAABIAAABEoAIABAAAAAEAAAFAoAMABAAAAAEAAAEbAAAAAEFTQ0lJAAAAU2NyZWVuc2hvdP5iyG4AAAHWaVRYdFhNTDpjb20uYWRvYmUueG1wAAAAAAA8eDp4bXBtZXRhIHhtbG5zOng9ImFkb2JlOm5zOm1ldGEvIiB4OnhtcHRrPSJYTVAgQ29yZSA2LjAuMCI+CiAgIDxyZGY6UkRGIHhtbG5zOnJkZj0iaHR0cDovL3d3dy53My5vcmcvMTk5OS8wMi8yMi1yZGYtc3ludGF4LW5zIyI+CiAgICAgIDxyZGY6RGVzY3JpcHRpb24gcmRmOmFib3V0PSIiCiAgICAgICAgICAgIHhtbG5zOmV4aWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20vZXhpZi8xLjAvIj4KICAgICAgICAgPGV4aWY6UGl4ZWxZRGltZW5zaW9uPjI4MzwvZXhpZjpQaXhlbFlEaW1lbnNpb24+CiAgICAgICAgIDxleGlmOlBpeGVsWERpbWVuc2lvbj4zMjA8L2V4aWY6UGl4ZWxYRGltZW5zaW9uPgogICAgICAgICA8ZXhpZjpVc2VyQ29tbWVudD5TY3JlZW5zaG90PC9leGlmOlVzZXJDb21tZW50PgogICAgICA8L3JkZjpEZXNjcmlwdGlvbj4KICAgPC9yZGY6UkRGPgo8L3g6eG1wbWV0YT4Kk67BXQAAJDZJREFUeAHtnQuwVVUZxxfKS+QlIKJg8pSXKJQ8VbxqqKBjJmlaUxra5KTWZPhoSoXR1DR10lIZHUsrRZMJy8hSlLhAoJiIFSqKegUUBHkICEgZ/2X7uO/hnHP3Pufsc/ZZ67dmzr377L3W2uv7ffv+73rvZqNGjfrYECAAAQh4SGAvD23GZAhAAAKWAALIgwABCHhLAAH01vUYDgEIIIA8AxCAgLcEEEBvXY/hEIAAAsgzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLAAH01vUYDgEIIIA8AxCAgLcEEEBvXY/hEIAAAsgzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLAAH01vU |
YDgEIIIA8AxCAgLcEEEBvXY/hEIAAAsgzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLAAH01vUYDgEIIIA8AxCAgLcEEEBvXY/hEIAAAsgzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLAAH01vUYDgEIIIA8AxCAgLcEEEBvXY/hEIAAAsgzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLoLm3lmN4LAIffvih0ee
}
},
"cell_type": "markdown",
"id": "cf69bb3f-94e6-4dba-92cd-ce08df117d67",
"metadata": {},
"source": [
"
"\n",
"\n",
"XGBoost based models are slightly finicky to get into a suitable onnx format. By default most tree based models will export into something that looks like this: \n",
"\n",
"\n",
"\n",
"\n",
"\n",
"Processing such nodes can be difficult and error prone. It would be much better if the operations of the tree were represented as a proper graph, possibly ... like this: \n",
"\n",
"\n",
"\n",
"\n",
"\n",
"This notebook showcases how to do that using the `hummingbird` python package ! "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a60b90d6",
"metadata": {},
"outputs": [],
"source": [
"!python -m pip install hummingbird_ml"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "95613ee9",
"metadata": {},
"outputs": [],
"source": [
"
"try:\n",
"
" |
import google.colab\n",
" |
import subprocess\n",
" |
import sys\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"ezkl\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"onnx\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"hummingbird-ml\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"xgboost\"])\n",
"\n",
"
"except:\n",
" pass\n",
"\n",
"\n",
"
"\n",
"
" |
import json\n",
" |
import numpy as np\n",
"from sklearn.datasets |
import load_iris\n",
"from sklearn.model_selection |
import train_test_split\n",
"from xgboost |
import XGBClassifier as Gbc\n",
" |
import torch\n",
" |
import ezkl\n",
" |
import os\n",
"from torch |
import nn\n",
" |
import xgboost as xgb\n",
"from hummingbird.ml |
import convert\n",
"\n",
"NUM_CLASSES = 3\n",
"\n",
"iris = load_iris()\n",
"X, y = iris.data, iris.target\n",
"X = X.astype(np.float32)\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y)\n",
"clr = Gbc(n_estimators=12)\n",
"clr.fit(X_train, y_train)\n",
"\n",
"
"\n",
"\n",
"torch_gbt = convert(clr, 'torch')\n",
"\n",
"print(torch_gbt)\n",
"
"diffs = []\n",
"\n",
"for i in range(len(X_test)):\n",
" torch_pred = torch_gbt.predict(torch.tensor(X_test[i].reshape(1, -1)))\n",
" sk_pred = clr.predict(X_test[i].reshape(1, -1))\n",
" diffs.append(torch_pred != sk_pred[0])\n",
"\n",
"print(\"num diff: \", sum(diffs))\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b37637c4",
"metadata": {},
"outputs": [],
"source": [
"model_path = os.path.join('network.onnx')\n",
"compiled_model_path = os.path.join('network.compiled')\n",
"pk_path = os.path.join('test.pk')\n",
"vk_path = os.path.join('test.vk')\n",
"settings_path = os.path.join('settings.json')\n",
"\n",
"witness_path = os.path.join('witness.json')\n",
"data_path = os.path.join('input.json')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "82db373a",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"\n",
"
" |
\n",
"\n",
"
"shape = X_train.shape[1:]\n",
"x = torch.rand(1, *shape, requires_grad=False)\n",
"torch_out = torch_gbt.predict(x)\n",
"
"torch.onnx.export(torch_gbt.model,
"
" x,\n",
"
" \"network.onnx\",\n",
" export_params=True,
" opset_version=18,
" input_names=['input'],
" output_names=['output'],
" dynamic_axes={'input': {0: 'batch_size'},
" 'output': {0: 'batch_size'}})\n",
"\n",
"d = ((x).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_shapes=[shape],\n",
" input_data=[d],\n",
" output_data=[(o).reshape([-1]).tolist() for o in torch_out])\n",
"\n",
"
"json.dump(data, open(\"input.json\", 'w'))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run_args = ezkl.PyRunArgs()\n",
"run_args.variables = [(\"batch_size\", 1)]\n",
"\n",
"
"res = ezkl.gen_settings(model_path, settings_path, py_run_args=run_args)\n",
"assert res == True\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"cal_data = {\n",
" \"input_data\": [(torc |
h.rand(20, *shape)).flatten().tolist()],\n",
"}\n",
"\n",
"cal_path = os.path.join('calibration.json')\n",
"
"with open(cal_path, \"w\") as f:\n",
" json.dump(cal_data, f)\n",
"\n",
"res = ezkl.calibrate_settings(cal_path, model_path, settings_path, \"resources\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3aa4f090",
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b74dcee",
"metadata": {},
"outputs": [],
"source": [
"
"res = ezkl.get_srs( settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18c8b7c7",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.gen_witness(data_path, compiled_model_path, witness_path)\n",
"assert os.path.isfile(witness_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b1c561a8",
"metadata": {},
"outputs": [],
"source": [
"\n",
"
"
"
"
"\n",
"\n",
"\n",
"res = ezkl.setup(\n",
" compiled_model_path,\n",
" vk_path,\n",
" pk_path,\n",
" \n",
" )\n",
"\n", |
"assert res == True\n",
"assert os.path.isfile(vk_path)\n",
"assert os.path.isfile(pk_path)\n",
"assert os.path.isfile(settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c384cbc8",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"\n",
"proof_path = os.path.join('test.pf')\n",
"\n",
"res = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \n",
" \"single\",\n",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "76f00d41",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
" vk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"print(\"verified\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
|
from torch import nn
from ezkl import export
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.layer = nn.AvgPool2d(2, 1, (1, 1))
def forward(self, x):
return self.layer(x)[0]
circuit = Model()
export(circuit, input_shape=[3, 2, 2])
|
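The `AvgPool2d(2, 1, (1, 1))` snippet above grows each spatial plane before the `[0]` index is applied. As a sanity check, the standard pooling output-size formula can be evaluated with a small stdlib-only sketch (separate from the snippet; the helper name is ours):

```python
import math

def pool_out_size(size, kernel, stride, padding):
    # one spatial dimension of an AvgPool2d/Conv2d output:
    # floor((size + 2*padding - kernel) / stride) + 1
    return math.floor((size + 2 * padding - kernel) / stride) + 1

# AvgPool2d(kernel_size=2, stride=1, padding=(1, 1)) on a 2x2 plane:
h_out = pool_out_size(2, kernel=2, stride=1, padding=1)
print(h_out)  # 3, so a [3, 2, 2] input yields [3, 3, 3] before indexing
```

The same formula covers the strided `Conv2d(3, 1, (1, 1), 2, 1)` snippet further down: a 6x6 input with a 1x1 kernel, stride 2, and padding 1 gives floor((6 + 2 - 1) / 2) + 1 = 4.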
from torch import nn
from ezkl import export
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.layer = nn.BatchNorm2d(3)
def forward(self, x):
return self.layer(x)
circuit = Model()
export(circuit, input_shape=[3, 2, 2])
|
import torch
from torch import nn
from ezkl import export
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, x):
return torch.concat([x, x], 2)
circuit = MyModel()
export(circuit, input_shape=[3, 2, 3, 2, 2])
|
from torch import nn
from ezkl import export
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.layer = nn.Conv2d(3, 1, (1, 1), 1, 1)
def forward(self, x):
return self.layer(x)
circuit = Model()
export(circuit, input_shape = [3, 1, 1])
|
import io
import numpy as np
from torch import nn
import torch.onnx
import torch.nn as nn
import torch.nn.init as init
import json
class Circuit(nn.Module):
def __init__(self, inplace=False):
super(Circuit, self).__init__()
self.convtranspose = nn.ConvTranspose2d(3, 3, (5, 5), stride=2, padding=2, output_padding=1)
def forward(self, x):
y = self.convtranspose(x)
return y
def main():
torch_model = Circuit()
# Input to the model
shape = [3, 5, 5]
x = 0.1*torch.rand(1,*shape, requires_grad=True)
torch_out = torch_model(x)
# Export the model
torch.onnx.export(torch_model, # model being run
(x), # model input (or a tuple for multiple inputs)
"network.onnx", # where to save the model (can be a file or file-like object)
export_params=True, # store the trained parameter weights inside the model file
opset_version=10, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names = ['input'], # the model's input names
output_names = ['output'], # the model's output names
dynamic_axes={'input' : {0 : 'batch_size'}, # variable length axes
'output' : {0 : 'batch_size'}})
d = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(input_shapes = [shape],
input_data = [d],
output_data = [((o).detach().numpy()).reshape([-1]).tolist() for o in torch_out])
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
if __name__ == "__main__":
    main()
|
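The `ConvTranspose2d` export above produces a larger spatial output than its input. A quick way to see why, as a pure-Python sketch of the standard output-size formula (dilation assumed to be 1; the helper name is ours):

```python
def convtranspose_out_size(size, kernel, stride, padding, output_padding):
    # one spatial dimension of a ConvTranspose2d output (dilation = 1):
    # (size - 1) * stride - 2*padding + kernel + output_padding
    return (size - 1) * stride - 2 * padding + kernel + output_padding

# ConvTranspose2d(3, 3, (5, 5), stride=2, padding=2, output_padding=1) on 5x5:
print(convtranspose_out_size(5, kernel=5, stride=2, padding=2, output_padding=1))  # 10
```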
from torch import nn
from ezkl import export
class Circuit(nn.Module):
def __init__(self, inplace=False):
super(Circuit, self).__init__()
def forward(self, x):
return x/ 10
circuit = Circuit()
export(circuit, input_shape = [1])
|
from torch import nn
from ezkl import export
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.layer = nn.Conv2d(3, 1, (1, 1), 2, 1)
def forward(self, x):
return self.layer(x)
circuit = Model()
export(circuit, input_shape=[3, 6, 6])
|
from torch import nn
from ezkl import export
class Circuit(nn.Module):
def __init__(self, inplace=False):
super(Circuit, self).__init__()
def forward(self, x):
return x/(2*x)
circuit = Circuit()
export(circuit, input_shape=[1])
|
from torch import nn
import torch
import json
class Circuit(nn.Module):
def __init__(self):
super(Circuit, self).__init__()
self.layer = nn.ELU()
def forward(self, x):
return self.layer(x)
def main():
torch_model = Circuit()
# Input to the model
shape = [3, 2, 3]
x = 0.1*torch.rand(1, *shape, requires_grad=True)
torch_out = torch_model(x)
# Export the model
torch.onnx.export(torch_model, # model being run
# model input (or a tuple for multiple inputs)
x,
# where to save the model (can be a file or file-like object)
"network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=10, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input'], # the model's input names
output_names=['output'], # the model's output names
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'output': {0: 'batch_size'}})
d = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(input_shapes=[shape],
input_data=[d],
output_data=[((o).detach().numpy()).reshape([-1]).tolist() for o in torch_out])
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
if __name__ == "__main__":
main()
|
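The `nn.ELU` layer exported above computes a simple piecewise function, so its exported values are easy to spot-check. A minimal pure-Python sketch (with `alpha` defaulting to 1.0, as in `nn.ELU()`):

```python
import math

def elu(x, alpha=1.0):
    # ELU: identity for positive inputs, alpha * (exp(x) - 1) otherwise
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

print(elu(1.0))   # 1.0
print(elu(0.0))   # 0.0
print(elu(-1.0))  # exp(-1) - 1, roughly -0.6321
```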
import json
import torch
from torch import nn
class Circuit(nn.Module):
def __init__(self):
super(Circuit, self).__init__()
def forward(self, x):
return torch.special.erf(x)
def main():
torch_model = Circuit()
# Input to the model
shape = [3]
x = torch.rand(1,*shape, requires_grad=True)
torch_out = torch_model(x)
# Export the model
torch.onnx.export(torch_model, # model being run
x, # model input (or a tuple for multiple inputs)
"network.onnx", # where to save the model (can be a file or file-like object)
export_params=True, # store the trained parameter weights inside the model file
opset_version=10, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names = ['input'], # the model's input names
output_names = ['output'], # the model's output names
dynamic_axes={'input' : {0 : 'batch_size'}, # variable length axes
'output' : {0 : 'batch_size'}})
d = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(input_shapes = [shape],
input_data = [d],
output_data = [((o).detach().numpy()).reshape([-1]).tolist() for o in torch_out])
# Serialize data into file:
json.dump( data, open( "input.json", 'w' ) )
if __name__ == "__main__":
main()
|
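`torch.special.erf` in the snippet above is the standard Gauss error function; the stdlib `math.erf` gives a quick reference point when spot-checking exported outputs:

```python
import math

# the error function is odd, bounded in (-1, 1), with erf(0) = 0
print(math.erf(0.0))                     # 0.0
print(round(math.erf(1.0), 6))           # 0.842701
print(math.erf(-2.0) == -math.erf(2.0))  # True
```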
import io
import numpy as np
from torch import nn
import torch.onnx
import torch.nn as nn
import torch.nn.init as init
import json
class Circuit(nn.Module):
def __init__(self):
super(Circuit, self).__init__()
self.flatten = nn.Flatten()
def forward(self, x):
return self.flatten(x)
def main():
torch_model = Circuit()
# Input to the model
shape = [3, 2, 3]
x = 0.1*torch.rand(1,*shape, requires_grad=True)
torch_out = torch_model(x)
# Export the model
torch.onnx.export(torch_model, # model being run
x, # model input (or a tuple for multiple inputs)
"network.onnx", # where to save the model (can be a file or file-like object)
export_params=True, # store the trained parameter weights inside the model file
opset_version=10, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names = ['input'], # the model's input names
output_names = ['output'], # the model's output names
dynamic_axes={'input' : {0 : 'batch_size'}, # variable length axes
'output' : {0 : 'batch_size'}})
d = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(input_shapes = [shape],
input_data = [d],
output_data = [((o).detach().numpy()).reshape([-1]).tolist() for o in torch_out])
# Serialize data into file:
json.dump( data, open( "input.json", 'w' ) )
if __name__ == "__main__":
main()
|
import json
import torch
from torch import nn
class Circuit(nn.Module):
def __init__(self):
super(Circuit, self).__init__()
self.layer = nn.GELU() # approximation = false in our case
def forward(self, x):
return self.layer(x)
def main():
torch_model = Circuit()
# Input to the model
shape = [3]
x = torch.rand(1,*shape, requires_grad=True)
torch_out = torch_model(x)
# Export the model
torch.onnx.export(torch_model, # model being run
x, # model input (or a tuple for multiple inputs)
"network.onnx", # where to save the model (can be a file or file-like object)
export_params=True, # store the trained parameter weights inside the model file
opset_version=10, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names = ['input'], # the model's input names
output_names = ['output'], # the model's output names
dynamic_axes={'input' : {0 : 'batch_size'}, # variable length axes
'output' : {0 : 'batch_size'}})
d = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(input_shapes = [shape],
input_data = [d],
output_data = [((o).detach().numpy()).reshape([-1]).tolist() for o in torch_out])
# Serialize data into file:
json.dump( data, open( "input.json", 'w' ) )
if __name__ == "__main__":
main()
|
import json
import torch
from torch import nn
class Circuit(nn.Module):
def __init__(self):
super(Circuit, self).__init__()
        self.layer = nn.GELU('tanh') # approximate='tanh' in this case
def forward(self, x):
return self.layer(x)
def main():
torch_model = Circuit()
# Input to the model
shape = [3]
x = torch.rand(1,*shape, requires_grad=True)
torch_out = torch_model(x)
# Export the model
torch.onnx.export(torch_model, # model being run
x, # model input (or a tuple for multiple inputs)
"network.onnx", # where to save the model (can be a file or file-like object)
export_params=True, # store the trained parameter weights inside the model file
opset_version=10, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names = ['input'], # the model's input names
output_names = ['output'], # the model's output names
dynamic_axes={'input' : {0 : 'batch_size'}, # variable length axes
'output' : {0 : 'batch_size'}})
d = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(input_shapes = [shape],
input_data = [d],
output_data = [((o).detach().numpy()).reshape([-1]).tolist() for o in torch_out])
# Serialize data into file:
json.dump( data, open( "input.json", 'w' ) )
if __name__ == "__main__":
main() |
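The two GELU snippets above differ only in the `approximate` argument; a small sketch of how close the `'tanh'` variant stays to the exact erf-based default:

```python
import torch
from torch import nn

# The 'tanh' flavor is a fast approximation of the exact erf-based GELU;
# over a typical input range the two agree to well under 1e-2.
x = torch.linspace(-3, 3, 61)
exact = nn.GELU()(x)                    # approximate='none' (default)
approx = nn.GELU(approximate='tanh')(x)
assert torch.allclose(exact, approx, atol=1e-2)
```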
import torch
from torch import nn
import json
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, x):
return x
circuit = MyModel()
x = 0.1*torch.rand(1, *[2], requires_grad=True)
# Flips the neural net into inference mode
circuit.eval()
# Export the model
torch.onnx.export(circuit, # model being run
# model input (or a tuple for multiple inputs)
x,
# where to save the model (can be a file or file-like object)
"network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=10, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input'], # the model's input names
output_names=['output'], # the model's output names
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'output': {0: 'batch_size'}})
data_array = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(input_data=[data_array])
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
|
from torch import nn
from ezkl import export
import torch
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.layer = nn.InstanceNorm2d(3).eval()
def forward(self, x):
return [self.layer(x)]
circuit = MyModel()
export(circuit, input_shape=[3, 2, 2])
|
from torch import nn
from ezkl import export
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.layer = nn.LeakyReLU(negative_slope=0.05)
def forward(self, x):
return self.layer(x)
circuit = Model()
export(circuit, input_shape = [3])
|
import random
import math
import numpy as np
import torch
from torch import nn
import torch.nn.functional as F
import json
model = nn.Linear(1, 1)
x = torch.randn(1, 1)
print(x)
# Flips the neural net into inference mode
model.eval()
model.to('cpu')
# Export the model
torch.onnx.export(model, # model being run
# model input (or a tuple for multiple inputs)
x,
# where to save the model (can be a file or file-like object)
"network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=10, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input'], # the model's input names
output_names=['output'], # the model's output names
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'output': {0: 'batch_size'}})
data_array = ((x).detach().numpy()).reshape([-1]).tolist()
data_json = dict(input_data=[data_array])
print(data_json)
# Serialize data into file:
json.dump(data_json, open("input.json", 'w'))
|
from torch import nn
import torch
import json
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.layer = nn.LPPool2d(2, 1, (1, 1))
def forward(self, x):
return self.layer(x)[0]
circuit = Model()
x = torch.empty(1, 3, 2, 2).uniform_(0, 1)
out = circuit(x)
print(out)
torch.onnx.export(circuit, x, "network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=17, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input'], # the model's input names
output_names=['output'], # the model's output names
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'output': {0: 'batch_size'}})
d1 = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(
input_data=[d1],
)
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
|
from torch import nn
from ezkl import export
import torch
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.layer = nn.MaxPool2d(2, 1, (1, 1), 1, 1)
def forward(self, x):
return self.layer(x)[0]
circuit = Model()
# Input to the model
shape = [3, 2, 2]
x = torch.rand(1, *shape, requires_grad=False)
torch_out = circuit(x)
# Export the model
torch.onnx.export(circuit, # model being run
# model input (or a tuple for multiple inputs)
x,
# where to save the model (can be a file or file-like object)
"network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=14, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input'], # the model's input names
output_names=['output'], # the model's output names
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'output': {0: 'batch_size'}})
d = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(input_shapes=[shape],
            input_data=[d],
            output_data=[((o).detach().numpy()).reshape([-1]).tolist() for o in torch_out])
# Serialize data into file:
import json
json.dump(data, open("input.json", 'w'))
|
from torch import nn
from ezkl import export
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
def forward(self, x):
return nn.functional.pad(x, (1,1))
circuit = Model()
export(circuit, input_shape = [3, 2, 2])
|
from torch import nn
import torch
import json
import numpy as np
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, x):
return torch.pow(x, -0.1)
circuit = MyModel()
x = torch.rand(1, 4)
torch.onnx.export(circuit, (x), "network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=15, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input'], # the model's input names
output_names=['output'], # the model's output names
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'output': {0: 'batch_size'}})
d = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(
input_data=[d],
)
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
|
from torch import nn
from ezkl import export
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.layer = nn.PReLU(num_parameters=3, init=0.25)
def forward(self, x):
return self.layer(x)
circuit = Model()
export(circuit, input_shape = [3, 2, 2])
|
from torch import nn
from ezkl import export
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.layer = nn.ReLU()
def forward(self, x):
return self.layer(x)
circuit = MyModel()
export(circuit, input_shape = [3])
|
import torch
from torch import nn
from ezkl import export
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, x):
x = x.reshape([-1, 6])
return x
circuit = MyModel()
export(circuit, input_shape=[1, 3, 2])
|
from torch import nn
from ezkl import export
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.layer = nn.Sigmoid()
def forward(self, x):
return self.layer(x)
circuit = MyModel()
export(circuit, input_shape = [3])
|
import torch
from torch import nn
from ezkl import export
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, x):
return x[:,1:2]
circuit = MyModel()
export(circuit, input_shape=[3, 2])
|
import torch
from torch import nn
from ezkl import export
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.layer = nn.Softmax(dim=1)
def forward(self, x):
return self.layer(x)
circuit = MyModel()
export(circuit, input_shape=[3])
|
import torch
from torch import nn
from ezkl import export
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, x):
return torch.sqrt(x)
circuit = MyModel()
export(circuit, input_shape = [3])
|
from torch import nn
import torch
import json
class Circuit(nn.Module):
def __init__(self, inplace=False):
super(Circuit, self).__init__()
def forward(self, x):
return x/ 10000
circuit = Circuit()
x = torch.empty(1, 8).random_(0, 2)
out = circuit(x)
print(out)
torch.onnx.export(circuit, x, "network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=17, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input'], # the model's input names
output_names=['output'], # the model's output names
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'output': {0: 'batch_size'}})
d1 = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(
input_data=[d1],
)
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
|
from torch import nn
import torch
import json
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, x):
topk_largest = torch.topk(x, 4)
topk_smallest = torch.topk(x, 4, largest=False)
print(topk_largest)
print(topk_smallest)
return topk_largest.values + topk_smallest.values
circuit = MyModel()
x = torch.randint(10, (1, 6))
y = circuit(x)
torch.onnx.export(circuit, x, "network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=14, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input'], # the model's input names
output_names=['output'], # the model's output names
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'output': {0: 'batch_size'}})
d = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(
input_data=[d],
output_data=[((o).detach().numpy()).reshape([-1]).tolist() for o in y]
)
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
|
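A quick sketch of what `torch.topk` returns in the snippet above (both values and indices come back, and `largest=False` flips the selection):

```python
import torch

# topk returns a named tuple of (values, indices); largest=False
# selects the k smallest entries instead of the k largest.
x = torch.tensor([[3., 1., 4., 1., 5., 9.]])
top = torch.topk(x, 2)
bot = torch.topk(x, 2, largest=False)
assert top.values.tolist() == [[9., 5.]]
assert bot.values.tolist() == [[1., 1.]]
```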
import io
import numpy as np
from torch import nn
import torch.onnx
import torch.nn as nn
import torch.nn.init as init
import json
class Circuit(nn.Module):
def __init__(self, inplace=False):
super(Circuit, self).__init__()
self.upsample = nn.Upsample(scale_factor=2, mode='nearest')
def forward(self, x):
y = self.upsample(x)
return y
def main():
torch_model = Circuit()
# Input to the model
shape = [3, 5, 5]
x = 0.1*torch.rand(1,*shape, requires_grad=True)
torch_out = torch_model(x)
# Export the model
torch.onnx.export(torch_model, # model being run
(x), # model input (or a tuple for multiple inputs)
"network.onnx", # where to save the model (can be a file or file-like object)
export_params=True, # store the trained parameter weights inside the model file
opset_version=11, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names = ['input'], # the model's input names
output_names = ['output'], # the model's output names
dynamic_axes={'input' : {0 : 'batch_size'}, # variable length axes
'output' : {0 : 'batch_size'}})
d = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(input_shapes = [shape],
input_data = [d],
output_data = [((o).detach().numpy()).reshape([-1]).tolist() for o in torch_out])
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
if __name__ == "__main__":
main() |
from torch import nn
from ezkl import export
import torch
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, x):
return [torch.var(x, unbiased=False, dim=[1,2])]
circuit = MyModel()
export(circuit, input_shape = [1,3,3])
|
from torch import nn
import torch
import json
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, x):
return [torch.where(x >= 0.0, 3.0, 5.0)]
circuit = MyModel()
x = torch.randint(1, (1, 64))
torch.onnx.export(circuit, x, "network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=10, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input'], # the model's input names
output_names=['output'], # the model's output names
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'output': {0: 'batch_size'}})
d = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(
input_data=[d],
)
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
|
from torch import nn
import torch.nn.init as init
from ezkl import export
class Model(nn.Module):
def __init__(self, inplace=False):
super(Model, self).__init__()
self.aff1 = nn.Linear(3,1)
self.relu = nn.ReLU()
self._initialize_weights()
def forward(self, x):
x = self.aff1(x)
x = self.relu(x)
return (x)
def _initialize_weights(self):
init.orthogonal_(self.aff1.weight)
circuit = Model()
export(circuit, input_shape = [3]) |
from torch import nn
from ezkl import export
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.relu = nn.ReLU()
self.sigmoid = nn.Sigmoid()
self.fc = nn.Linear(3, 2)
def forward(self, x):
x = self.sigmoid(x)
x = self.sigmoid(x)
return x
circuit = MyModel()
export(circuit, input_shape = [1])
|
from torch import nn
from ezkl import export
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.conv1 = nn.Conv2d(1,4, kernel_size=5, stride=2)
self.conv2 = nn.Conv2d(4,4, kernel_size=5, stride=2)
self.relu = nn.ReLU()
self.fc = nn.Linear(4*4*4, 10)
def forward(self, x):
x = x.view(-1,1,28,28)
x = self.relu(self.conv1(x))
x = self.relu(self.conv2(x))
x = x.view(-1,4*4*4)
x = self.fc(x)
return x
circuit = MyModel()
export(circuit, input_shape = [1,28,28])
|
from torch import nn
from ezkl import export
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.conv1 = nn.Conv2d(in_channels=1, out_channels=2, kernel_size=5, stride=2)
self.conv2 = nn.Conv2d(in_channels=2, out_channels=3, kernel_size=5, stride=2)
self.relu = nn.ReLU()
self.d1 = nn.Linear(48, 48)
self.d2 = nn.Linear(48, 10)
    def forward(self, x):
        # 1x28x28 => 2x12x12
        x = self.conv1(x)
        x = self.relu(x)
        # 2x12x12 => 3x4x4
        x = self.conv2(x)
        x = self.relu(x)
        # flatten => batch x (3*4*4) = batch x 48
        x = x.flatten(start_dim = 1)
        # 48 => 48
        x = self.d1(x)
        x = self.relu(x)
        # logits => batch x 10
        logits = self.d2(x)
        return logits
circuit = MyModel()
export(circuit, input_shape = [1,28,28])
|
from torch import nn
import torch
import json
import numpy as np
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self):
return torch.arange(0, 10, 2)
circuit = MyModel()
torch.onnx.export(circuit, (), "network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=15, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
output_names=['output'], # the model's output names
)
data = dict(
input_data=[[]],
)
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
|
from torch import nn
import torch
import json
import numpy as np
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, w, x, y, z):
return x << 2, y >> 3, z << 1, w >> 4
circuit = MyModel()
# random integers between 0 and 100
x = torch.empty(1, 3).uniform_(0, 100).to(torch.int32)
y = torch.empty(1, 3).uniform_(0, 100).to(torch.int32)
z = torch.empty(1, 3).uniform_(0, 100).to(torch.int32)
w = torch.empty(1, 3).uniform_(0, 100).to(torch.int32)
torch.onnx.export(circuit, (w, x, y, z), "network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=16, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input', 'input1', 'input2',
'input3'], # the model's input names
output_names=['output'], # the model's output names
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'input1': {0: 'batch_size'},
'input2': {0: 'batch_size'},
'input3': {0: 'batch_size'},
'output': {0: 'batch_size'},
'output1': {0: 'batch_size'},
'output2': {0: 'batch_size'},
'output3': {0: 'batch_size'}})
d = ((w).detach().numpy()).reshape([-1]).tolist()
d1 = ((x).detach().numpy()).reshape([-1]).tolist()
d2 = ((y).detach().numpy()).reshape([-1]).tolist()
d3 = ((z).detach().numpy()).reshape([-1]).tolist()
data = dict(
input_data=[d, d1, d2, d3],
)
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
|
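The shift operators exported above act on integer tensors as multiplication or floor division by powers of two; a minimal sketch:

```python
import torch

# Left shift multiplies by 2**k; right shift floor-divides by 2**k.
x = torch.tensor([5, 12], dtype=torch.int32)
assert (x << 2).tolist() == [20, 48]
assert (x >> 1).tolist() == [2, 6]
```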
from torch import nn
import torch
import json
import numpy as np
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, w, x, y, z):
# bitwise and
and_xy = torch.bitwise_and(x, y)
# bitwise or
or_yz = torch.bitwise_or(y, z)
# bitwise not
not_wyz = torch.bitwise_not(or_yz)
return not_wyz
circuit = MyModel()
a = torch.empty(1, 3).uniform_(0, 1)
w = torch.bernoulli(a).to(torch.bool)
x = torch.bernoulli(a).to(torch.bool)
y = torch.bernoulli(a).to(torch.bool)
z = torch.bernoulli(a).to(torch.bool)
torch.onnx.export(circuit, (w, x, y, z), "network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=16, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input', 'input1', 'input2',
'input3'], # the model's input names
output_names=['output'], # the model's output names
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'input1': {0: 'batch_size'},
'input2': {0: 'batch_size'},
'input3': {0: 'batch_size'},
'output': {0: 'batch_size'}})
d = ((w).detach().numpy()).reshape([-1]).tolist()
d1 = ((x).detach().numpy()).reshape([-1]).tolist()
d2 = ((y).detach().numpy()).reshape([-1]).tolist()
d3 = ((z).detach().numpy()).reshape([-1]).tolist()
data = dict(
input_data=[d, d1, d2, d3],
)
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
|
from torch import nn
import torch
import json
import numpy as np
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
    def forward(self, x):
        # add a Blackman window to the input
        bmw = torch.blackman_window(8) + x
        return bmw
circuit = MyModel()
x = torch.empty(1, 8).uniform_(0, 1)
torch.onnx.export(circuit, x, "network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=16, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input'], # the model's input names
output_names=['output'], # the model's output names
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'output': {0: 'batch_size'}})
d1 = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(
input_data=[d1],
)
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
|
from torch import nn
import torch
import json
import numpy as np
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, w, x, y, z):
return [((x & y)) == (x & (y | (z ^ w)))]
circuit = MyModel()
a = torch.empty(1, 3).uniform_(0, 1)
w = torch.bernoulli(a).to(torch.bool)
x = torch.bernoulli(a).to(torch.bool)
y = torch.bernoulli(a).to(torch.bool)
z = torch.bernoulli(a).to(torch.bool)
torch.onnx.export(circuit, (w, x, y, z), "network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=15, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input', 'input1', 'input2',
'input3'], # the model's input names
output_names=['output'], # the model's output names
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'input1': {0: 'batch_size'},
'input2': {0: 'batch_size'},
'input3': {0: 'batch_size'},
'output': {0: 'batch_size'}})
d = ((w).detach().numpy()).reshape([-1]).tolist()
d1 = ((x).detach().numpy()).reshape([-1]).tolist()
d2 = ((y).detach().numpy()).reshape([-1]).tolist()
d3 = ((z).detach().numpy()).reshape([-1]).tolist()
data = dict(
input_data=[d, d1, d2, d3],
)
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
|
from torch import nn
import torch
import json
import numpy as np
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, x):
return [x]
circuit = MyModel()
a = torch.empty(1, 3).uniform_(0, 1)
x = torch.bernoulli(a).to(torch.bool)
torch.onnx.export(circuit, x, "network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=15, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input'], # the model's input names
output_names=['output'], # the model's output names
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'output': {0: 'batch_size'}})
d1 = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(
input_data=[d1],
)
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
|
from torch import nn
import torch
import json
import numpy as np
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, x):
m = nn.CELU()(x)
return m
circuit = MyModel()
x = torch.empty(1, 8).uniform_(0, 1)
out = circuit(x)
print(out)
torch.onnx.export(circuit, x, "network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=17, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input'], # the model's input names
output_names=['output'], # the model's output names
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'output': {0: 'batch_size'}})
d1 = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(
input_data=[d1],
)
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
|
from torch import nn
import torch
import json
import numpy as np
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, x):
m = torch.clamp(x, min=0.4, max=0.8)
return m
circuit = MyModel()
x = torch.empty(1, 2, 2, 8).uniform_(0, 1)
out = circuit(x)
print(out)
torch.onnx.export(circuit, x, "network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=17, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input'], # the model's input names
output_names=['output'], # the model's output names
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'output': {0: 'batch_size'}})
d1 = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(
input_data=[d1],
)
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
|
# Train a model.
import json
import onnxruntime as rt
from skl2onnx import to_onnx
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier as De
import sk2torch
import torch
iris = load_iris()
X, y = iris.data, iris.target
X = X.astype(np.float32)
X_train, X_test, y_train, y_test = train_test_split(X, y)
clr = De()
clr.fit(X_train, y_train)
torch_model = sk2torch.wrap(clr)
# Convert into ONNX format
# Input to the model
shape = X_train.shape[1:]
x = torch.rand(1, *shape, requires_grad=True)
torch_out = torch_model(x)
# Export the model
torch.onnx.export(torch_model, # model being run
# model input (or a tuple for multiple inputs)
x,
# where to save the model (can be a file or file-like object)
"network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=10, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input'], # the model's input names
output_names=['output'], # the model's output names
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'output': {0: 'batch_size'}})
d = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(input_shapes=[shape],
input_data=[d],
output_data=[((o).detach().numpy()).reshape([-1]).tolist() for o in torch_out])
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
|
from torch import nn
from ezkl import export
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, x):
return x
circuit = MyModel()
export(circuit, input_shape=[1, 64, 64])
|
from torch import nn
import torch
import json
import numpy as np
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, x):
m = x @ torch.eye(8)
return m
circuit = MyModel()
x = torch.empty(1, 8).uniform_(0, 1)
out = circuit(x)
print(out)
torch.onnx.export(circuit, x, "network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=17, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input'], # the model's input names
output_names=['output'], # the model's output names
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'output': {0: 'batch_size'}})
d1 = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(
input_data=[d1],
)
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
|
from torch import nn
import torch
import json
import numpy as np
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, w, x):
return torch.gather(w, 1, x)
circuit = MyModel()
w = torch.rand(1, 15, 18)
x = torch.randint(0, 15, (1, 15, 2))
torch.onnx.export(circuit, (w, x), "network.onnx",
export_params=True, # store the trained parameter weights inside the model file
opset_version=15, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input', 'input1'], # the model's input names
output_names=['output'], # the model's output names
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'input1': {0: 'batch_size'},
'output': {0: 'batch_size'}})
d = ((w).detach().numpy()).reshape([-1]).tolist()
d1 = ((x).detach().numpy()).reshape([-1]).tolist()
data = dict(
input_data=[d, d1],
)
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
|
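`torch.gather`, as used above, indexes along one dimension with an index tensor of the same rank (for `dim=1`, `out[i][j] = src[i][index[i][j]]`); a small sketch:

```python
import torch

# gather picks elements along dim=1 according to the index tensor.
src = torch.tensor([[10., 20., 30.],
                    [40., 50., 60.]])
idx = torch.tensor([[2, 0],
                    [1, 1]])
out = torch.gather(src, 1, idx)
assert out.tolist() == [[30., 10.], [50., 50.]]
```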
from torch import nn
import json
import numpy as np
import tf2onnx
import tensorflow as tf
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model
# gather_nd in tf then export to onnx
x = in1 = Input((15, 18,))
w = in2 = Input((15, 1), dtype=tf.int32)
x = tf.gather_nd(x, w, batch_dims=1)
tm = Model((in1, in2), x )
tm.summary()
tm.compile(optimizer='adam', loss='mse')
shape = [1, 15, 18]
index_shape = [1, 15, 1]
# After training, export to onnx (network.onnx) and create a data file (input.json)
x = 0.1*np.random.rand(1,*shape)
# w = random int tensor
w = np.random.randint(0, 10, index_shape)
spec = tf.TensorSpec(shape, tf.float32, name='input_0')
index_spec = tf.TensorSpec(index_shape, tf.int32, name='input_1')
model_path = "network.onnx"
tf2onnx.convert.from_keras(tm, input_signature=[spec, index_spec], inputs_as_nchw=['input_0', 'input_1'], opset=12, output_path=model_path)
d = x.reshape([-1]).tolist()
d1 = w.reshape([-1]).tolist()
data = dict(
input_data=[d, d1],
)
# Serialize data into file:
json.dump(data, open("input.json", 'w'))
|
import json
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier as Gbc
import sk2torch
import torch
import ezkl
import os
from torch