"from torch import nn\n",
"import ezkl\n",
"import os\n",
"import json\n",
"import logging\n",
"\n",
"
"
"
"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we define our model. It is a humble model with but a conv layer and a $ReLU$ non-linearity, but it is a model nonetheless"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
" |
import torch\n",
"
"
"
"\n",
" |
class MyModel(nn.Module):\n",
" def __init__(self):\n",
" super(MyModel, self).__init__()\n",
"\n",
" self.conv1 = nn.Conv2d(in_channels=3, out_channels=1, kernel_size=2, stride=4)\n",
" self.conv2 = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=2, stride=4)\n",
" self.relu = nn.ReLU()\n",
"\n",
" def forward(self, x):\n",
" x = self.conv1(x)\n",
" x = self.relu(x)\n",
" x = self.conv2(x)\n",
" x = self.relu(x)\n",
"\n",
" return x\n",
" \n",
" def split_1(self, x):\n",
" x = self.conv1(x)\n",
" x = self.relu(x)\n",
" return x\n",
"\n",
"\n",
"circuit = MyModel()\n",
"\n",
"
"\n",
"\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We omit training for purposes of this demonstration. We've marked where training would happen in the cell above. \n",
"Now we export the model to onnx and create a corresponding (randomly generated) input file.\n",
"\n",
"You can replace the random `x` with real data if you so wish. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"x = torch.rand(1,*[3, 8, 8], requires_grad=True)\n",
"\n",
"
"circuit.eval()\n",
"\n",
"
"torch.onnx.export(circuit, |
" x,
" \"network.onnx\",
" export_params=True,
" opset_version=10,
" do_constant_folding=True,
" input_names = ['input'],
" output_names = ['output'],
" dynamic_axes={'input' : {0 : 'batch_size'},
" 'output' : {0 : 'batch_size'}})\n",
"\n",
"\n",
"data_path = os.path.join(os.getcwd(), \"input_0.json\")\n",
"data = dict(input_data = [((x).detach().numpy()).reshape([-1]).tolist()])\n",
"json.dump( data, open(data_path, 'w' ))\n",
"\n",
"inter_1 = circuit.split_1(x)\n",
"data_path = os.path.join(os.getcwd(), \"input_1.json\")\n",
"data = dict(input_data = [((inter_1).detach().numpy()).reshape([-1]).tolist()])\n",
"json.dump( data, open(data_path, 'w' ))\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we split the model into two parts. The first part is the first conv layer and the second part is the rest of the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
" |
import onnx\n",
"\n",
"input_path = \"network.onnx\"\n",
"output_path = \"network_split_0.onnx\"\n",
"input_names = [\"input\"]\n",
"output_names = [\"/relu/Relu_output_0\"]\n",
"
"onnx.utils.extract_model(input_path, output_path, input_names, output_names)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
" |
import onnx\n",
"\n",
"input_path = \"network.onnx\"\n",
"output_path = \"network_split_1.onnx\"\n",
"input_names = [\"/relu/Relu_output_0\"]\n",
"output_names = [\"output\"]\n",
"
"onnx.utils.extract_model(input_path, output_path, input_names, output_names)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"This is where the magic happens. We define our `PyRunArgs` objects which contains the visibility parameters for out model. \n",
"- `input_visibility` defines the visibility of the model inputs\n",
"- `param_visibility` defines the visibility of the model weights and constants and parameters \n",
"- `output_visibility` defines the visibility of the model outputs\n",
"\n",
"There are currently 5 visibility settings:\n",
"- `public`: known to both the verifier and prover (a subtle nuance is that this may not be the case for model parameters but until we have more rigorous theoretical results we don't want to make strong claims as to this). \n",
"- `private`: known only to the prover\n",
"- `hashed`: the hash pre-image is known to the prover, the prover and verifier know the hash. The prover proves that the they know the pre-image to the hash. \n",
"- `encrypted`: the non-encrypted element and the secret key used for decryption are known to the prover. The prover and the verifier know the encrypted element, the public key used to encrypt, and the hash of the decryption hey. The prover proves that they know the pre-image of the hashed decryption key and that this key can in fact decrypt the encrypted mes |
sage.\n",
"- `polycommit`: unblinded advice column which generates a kzg commitment. This doesn't appear in the instances of the circuit and must instead be modified directly within the proof bytes. \n",
"\n",
"Here we create the following setup:\n",
"- `input_visibility`: \"public\"\n",
"- `param_visibility`: \"public\"\n",
"- `output_visibility`: public\n"
]
},
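{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick aside (not used in the rest of this notebook), the other visibility settings are selected the same way on `PyRunArgs`. The sketch below is illustrative only: the variable name `alt_args` is made up, and the string values are copied from other notebooks in this repo (the set-membership notebook uses `hashed/private/0` for its input, and the second half of this notebook uses `polycommit`)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch only: alternative visibility configurations for PyRunArgs.\n",
"# These string values mirror ones used elsewhere in these notebooks; this cell\n",
"# is not required for the split-proof flow below.\n",
"alt_args = ezkl.PyRunArgs()\n",
"alt_args.input_visibility = \"hashed/private/0\"  # prover knows the pre-image, verifier sees the hash\n",
"alt_args.param_visibility = \"fixed\"  # parameters are baked into the circuit as constants\n",
"alt_args.output_visibility = \"polycommit\"  # output exposed as a polynomial (KZG) commitment\n"
]
},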
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
" |
import ezkl\n",
"\n",
"\n",
"data_path = os.path.join('input.json')\n",
"\n",
"run_args = ezkl.PyRunArgs()\n",
"run_args.input_visibility = \"public\"\n",
"run_args.param_visibility = \"fixed\"\n",
"run_args.output_visibility = \"public\"\n",
"run_args.input_scale = 2\n",
"run_args.logrows = 8\n",
"\n",
"ezkl.get_srs(logrows=run_args.logrows, commitment=ezkl.PyCommitments.KZG)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we generate a settings file. This file basically instantiates a bunch of parameters that determine their circuit shape, size etc... Because of the way we represent nonlinearities in the circuit (using Halo2's [lookup tables](https:
"\n",
"You can pass a dataset for calibration that will be representative of real inputs you might find if and when you deploy the prover. Here we create a dummy calibration dataset for demonstration purposes. \n",
"\n",
"As we use Halo2 with KZG-commitments we need an SRS string from (preferably) a multi-party trusted setup ceremony. For an overview of the procedures for such a ceremony check out [this page](https:
"\n",
"These SRS were generated with [this](https:
"\n",
"We also need to generate the (partial) circuit witness. These are the model outputs (and any hashes) that are generated when feeding the previously generated `input.json` through the circuit / model. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"def setup(i):\n", |
"
" model_path = os.path.join('network_split_'+str(i)+'.onnx')\n",
" settings_path = os.path.join('settings_split_'+str(i)+'.json')\n",
" data_path = os.path.join('input_'+str(i)+'.json')\n",
" compiled_model_path = os.path.join('network_split_'+str(i)+'.compiled')\n",
" pk_path = os.path.join('test_split_'+str(i)+'.pk')\n",
" vk_path = os.path.join('test_split_'+str(i)+'.vk')\n",
" witness_path = os.path.join('witness_split_'+str(i)+'.json')\n",
"\n",
" if i > 0:\n",
" prev_witness_path = os.path.join('witness_split_'+str(i-1)+'.json')\n",
" witness = json.load(open(prev_witness_path, 'r'))\n",
" data = dict(input_data = witness['outputs'])\n",
"
" json.dump(data, open(data_path, 'w' ))\n",
" else:\n",
" data_path = os.path.join('input_0.json')\n",
"\n",
"
" res = ezkl.gen_settings(model_path, settings_path, py_run_args=run_args)\n",
" res = ezkl.calibrate_settings(data_path, model_path, settings_path, \"resources\", scales=[run_args.input_scale], max_logrows=run_args.logrows)\n",
" assert res == True\n",
"\n",
"
" settings = json.load(open(settings_path, 'r'))\n",
" settings['run_args']['logrows'] = run_args.logrows\n",
" json.dump(settings, open(settings_path, 'w' ))\n",
"\n",
" res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"\n",
"\n",
" res = ezkl.setup(\n",
" compiled_model_path,\n",
" vk_path,\n",
" |
pk_path,\n",
" )\n",
"\n",
" assert res == True\n",
" assert os.path.isfile(vk_path)\n",
" assert os.path.isfile(pk_path)\n",
"\n",
" res = ezkl.gen_witness(data_path, compiled_model_path, witness_path, vk_path)\n",
" run_args.input_scale = settings[\"model_output_scales\"][0]\n",
"\n",
"for i in range(2):\n",
" setup(i)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"def prove_model(i):\n",
" proof_path = os.path.join('proof_split_'+str(i)+'.json')\n",
" witness_path = os.path.join('witness_split_'+str(i)+'.json')\n",
" compiled_model_path = os.path.join('network_split_'+str(i)+'.compiled')\n",
" pk_path = os.path.join('test_split_'+str(i)+'.pk')\n",
" vk_path = os.path.join('test_split_'+str(i)+'.vk')\n",
" settings_path = os.path.join('settings_split_'+str(i)+'.json')\n",
"\n",
" res = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \"for-aggr\",\n",
" )\n",
"\n",
" print(res)\n",
" res_1_proof = res[\"proof\"]\n",
" assert os.path.isfile(proof_path)\n",
"\n",
"
" if i > 0:\n",
" print(\"swapping commitments\")\n",
"
" prev_witness_path = os.path.join('witness_split_'+str(i-1)+'.json')\n", |
" prev_witness = json.load(open(prev_witness_path, 'r'))\n",
"\n",
" witness = json.load(open(witness_path, 'r'))\n",
"\n",
" print(prev_witness[\"processed_outputs\"])\n",
" print(witness[\"processed_inputs\"])\n",
" witness[\"processed_inputs\"] = prev_witness[\"processed_outputs\"]\n",
"\n",
"
" with open(witness_path, \"w\") as f:\n",
" json.dump(witness, f)\n",
"\n",
" res = ezkl.swap_proof_commitments(proof_path, witness_path)\n",
" print(res)\n",
" \n",
"
" proof = json.load(open(proof_path, 'r'))\n",
" res_2_proof = proof[\"hex_proof\"]\n",
"
" print(res_1_proof)\n",
" print(res_2_proof)\n",
" assert res_1_proof == res_2_proof\n",
"\n",
" res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
" vk_path,\n",
" )\n",
"\n",
" assert res == True\n",
" print(\"verified\")\n",
"\n",
"for i in range(2):\n",
" prove_model(i)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"
"\n",
"This time the visibility parameters are:\n",
"- `input_visibility`: \"polycommit\"\n",
"- `param_visibility`: \"public\"\n",
"- `output_visibility`: polycommit"
]
},
{
"cell_type": "code",
"exec |
ution_count": null,
"metadata": {},
"outputs": [],
"source": [
" |
import ezkl\n",
"\n",
"run_args = ezkl.PyRunArgs()\n",
"run_args.input_visibility = \"polycommit\"\n",
"run_args.param_visibility = \"fixed\"\n",
"run_args.output_visibility = \"polycommit\"\n",
"run_args.variables = [(\"batch_size\", 1)]\n",
"run_args.input_scale = 2\n",
"run_args.logrows = 8\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for i in range(2):\n",
" setup(i)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for i in range(2):\n",
" prove_model(i)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also mock aggregate the split proofs into a single proof. This is useful if you want to verify the proof on chain at a lower cost. Here we mock aggregate the proofs to save time. You can use other notebooks to see how to aggregate in full ! "
]
},
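{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, before the mock aggregation in the last cell, here is a rough sketch of what a full (non-mock) aggregation of these split proofs could look like, mirroring the `setup_aggregate` / `aggregate` / `verify_aggr` calls used in the dedicated aggregation notebook. The `RUN_FULL_AGGREGATION` flag and the `aggr_split.*` file names are made up for illustration, `logrows=22` simply matches the mock call below, and the exact arguments needed for split proofs (e.g. an equivalent of `split_proofs=True`) may differ, so treat this as an assumption-laden sketch rather than a drop-in recipe."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch of a full (non-mock) aggregation over the split proofs, mirroring the\n",
"# aggregation notebook's setup_aggregate / aggregate / verify_aggr calls.\n",
"# Guarded behind a flag because it is slow and assumes an SRS for 22 logrows has\n",
"# already been fetched (e.g. ezkl.get_srs(logrows=22, commitment=ezkl.PyCommitments.KZG)).\n",
"RUN_FULL_AGGREGATION = False\n",
"\n",
"if RUN_FULL_AGGREGATION:\n",
"    split_proofs = [os.path.join('proof_split_'+str(i)+'.json') for i in range(2)]\n",
"    # hypothetical output paths for the aggregated artifacts\n",
"    aggr_split_vk = os.path.join('aggr_split.vk')\n",
"    aggr_split_pk = os.path.join('aggr_split.pk')\n",
"    aggr_split_pf = os.path.join('aggr_split.pf')\n",
"    ezkl.setup_aggregate(split_proofs, aggr_split_vk, aggr_split_pk, 22)\n",
"    ezkl.aggregate(split_proofs, aggr_split_pf, aggr_split_pk, \"evm\", 22, \"safe\")\n",
"    assert ezkl.verify_aggr(aggr_split_pf, aggr_split_vk, 22) == True\n"
]
},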
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"proofs = []\n",
"for i in range(2):\n",
" proof_path = os.path.join('proof_split_'+str(i)+'.json')\n",
" proofs.append(proof_path)\n",
"\n",
"ezkl.mock_aggregate(proofs, logrows=22, split_proofs = True)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "ezkl",
"language": "python",
"name": "python3"
},
"language_inf |
o": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"attachments": {
"image-2.png": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAokAAARDCAYAAAAEdLvJAAABYmlDQ1BJQ0MgUHJvZmlsZQAAKJF1kDFLw1AUhU9stSAVHRwEHQKKUy01rdi1LSKCQxoVqlvyWlMlTR9JRNTFQRengi5uUhd/gS4OjoKDguAgIoKDP0DsoiXeNGpbxft43I/DvYfDBTrCKudGEEDJdCxlOi3mFpfE0Au60ENPQEBlNk/J8iyN4Lu3V+2O5qhuxzyvq5px+bw3PJi1N6Nscmv173xbdecLNqP+QT/BuOUAQoxYXne4x9vE/RaFIj7wWPf5xGPN5/PGzLySIb4h7mNFNU/8RBzRWnS9hUvGGvvK4KUPF8yFOeoD9IeQRgEmshAxhRzimEAM41D+2Uk0djIog2MDFlagowiHtlOkcBjkJmKGHBmiiBBL5Cch7t369w2bWrkKJN+AQKWpaYfA2S7FvG9qI0dA7w5wes1VS/25rFAL2stxyedwGuh8dN3XUSC0D9Qrrvtedd36Mfk/ABfmJ+uTZFvl1hD0AAAAVmVYSWZNTQAqAAAACAABh2kABAAAAAEAAAAaAAAAAAADkoYABwAAABIAAABEoAIABAAAAAEAAAKJoAMABAAAAAEAAARDAAAAAEFTQ0lJAAAAU2NyZWVuc2hvdIWiHYkAAAHXaVRYdFhNTDpjb20uYWRvYmUueG1wAAAAAAA8eDp4bXBtZXRhIHhtbG5zOng9ImFkb2JlOm5zOm1ldGEvIiB4OnhtcHRrPSJYTVAgQ29yZSA2LjAuMCI+CiAgIDxyZGY6UkRGIHhtbG5zOnJkZj0iaHR0cDovL3d3dy53My5vcmcvMTk5OS8wMi8yMi1yZGYtc3ludGF4LW5zIyI+CiAgICAgIDxyZGY6RGVzY3JpcHRpb24gcmRmOmFib3V0PSIiCiAgICAgICAgICAgIHhtbG5zOmV4aWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20vZXhpZi8xLjAvIj4KICAgICAgICAgPGV4aWY6UGl4ZWxZRGltZW5zaW9uPjEwOTE8L2V4aWY6UGl4ZWxZRGltZW5zaW9uPgogICAgICAgICA8ZXhpZjpQaXhlbFhEaW1lbnNpb24+NjQ5PC9leGlmOlBpeGVsWERpbWVuc2lvbj4KICAgICAgICAgPGV4aWY6VXNlckNvbW1lbnQ+U2NyZWVuc2hvdDwvZXhpZjpVc2VyQ29tbWVudD4KICAgICAgPC9yZGY6RGVzY3JpcHRpb24+CiAgIDwvcmRmOlJERj4KPC94OnhtcG1ldGE+CvSCr3YAAEAASURBVHgB7N0HnJxVvf/x3yabZNOzm03vvYeQgksCgtwAERCu8kfEgqKgIIkXEBWuoOhFAa/8QSkq/lVQrBcvoFIEpBMCCSQB0sum97LZJJtNNtn88z34rJOdLdPnKZ/zeg0z+8xTznmfUX6cWlBWVnbUSAgggAACCCCAAAIIxAi0iPnMRwQQQAABBBBAAAEEnABBIj8EBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAgggg |
ABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggEAhBAgggAAC/hI4ePCgVVdXW21trbVs2TLnmTty5Ih7buvWra2oqCjnz+eBCCDgDwGCRH/UA7lAAAEErKamxnbv3u0kWrRokZcA0auGQ4cOuUB1z549VlxcbAoYSQggEC0BgsRo1TelRQABnwqo9VABmYKxLl26+CaXFRUVplfnzp2tTZs2vskXGUEAgewLMCYx+8Y8AQEEEGhWYO/evVZYWOirAFGZVsDaqlUrq6ysbLYMnIAAAuESIEgMV31SGgQQCKDA4cOHTS2JJSUlvsy9upvVFa4XCQEEoiNAkBiduqakCCDgUwEFiWqt83NS/pRPEgIIREeAIDE6dU1JEUDAxwIFBQU+zp2Z3/Pnazwyh0BABQgSA1pxZBsBBKIjMH78eDdeMTolpqQIIOAHAYJEP9QCeUAAAQSaEPjwhz+clQktah288MILm3gyXyGAQJQFCBKjXPuUHQEEAiFwxx132I4dOzKeVwWJZ599dsbvyw0RQCAcAi379u17SziKQikQQACBYApoQogWr27fvn2DBbj55pvtrbfeso4dO9qsWbNs+PDh9qlPfcqGDh1qixcvdrOO9feAAQPc8fPOO8/tlLJs2TJ3v9tvv92ee+459 |
7m0tNS+/OUv2+zZs+3rX/+69erVyyZMmGArV65scpmbqqoqt4aj3yfYNAjIQQQQSEmAlsSU2LgIAQQQyJ1Ap06d3MQRbdE3atQoe/bZZ+0b3/iG27Zv8uTJLiMKME844QS79dZb7T
},
"image.png": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAUAAAAEbCAYAAACr2V2eAAABYmlDQ1BJQ0MgUHJvZmlsZQAAKJF1kDFLw1AUhU9stSAVHRwEHQKKUy01rdi1LSKCQxoVqlvyWlMlTR9JRNTFQRengi5uUhd/gS4OjoKDguAgIoKDP0DsoiXeNGpbxft43I/DvYfDBTrCKudGEEDJdCxlOi3mFpfE0Au60ENPQEBlNk/J8iyN4Lu3V+2O5qhuxzyvq5px+bw3PJi1N6Nscmv173xbdecLNqP+QT/BuOUAQoxYXne4x9vE/RaFIj7wWPf5xGPN5/PGzLySIb4h7mNFNU/8RBzRWnS9hUvGGvvK4KUPF8yFOeoD9IeQRgEmshAxhRzimEAM41D+2Uk0djIog2MDFlagowiHtlOkcBjkJmKGHBmiiBBL5Cch7t369w2bWrkKJN+AQKWpaYfA2S7FvG9qI0dA7w5wes1VS/25rFAL2stxyedwGuh8dN3XUSC0D9Qrrvtedd36Mfk/ABfmJ+uTZFvl1hD0AAAAVmVYSWZNTQAqAAAACAABh2kABAAAAAEAAAAaAAAAAAADkoYABwAAABIAAABEoAIABAAAAAEAAAFAoAMABAAAAAEAAAEbAAAAAEFTQ0lJAAAAU2NyZWVuc2hvdP5iyG4AAAHWaVRYdFhNTDpjb20uYWRvYmUueG1wAAAAAAA8eDp4bXBtZXRhIHhtbG5zOng9ImFkb2JlOm5zOm1ldGEvIiB4OnhtcHRrPSJYTVAgQ29yZSA2LjAuMCI+CiAgIDxyZGY6UkRGIHhtbG5zOnJkZj0iaHR0cDovL3d3dy53My5vcmcvMTk5OS8wMi8yMi1yZGYtc3ludGF4LW5zIyI+CiAgICAgIDxyZGY6RGVzY3JpcHRpb24gcmRmOmFib3V0PSIiCiAgICAgICAgICAgIHhtbG5zOmV4aWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20vZXhpZi8xLjAvIj4KICAgICAgICAgPGV4aWY6UGl4ZWxZRGltZW5zaW9uPjI4MzwvZXhpZjpQaXhlbFlEaW1lbnNpb24+CiAgICAgICAgIDxleGlmOlBpeGVsWERpbWVuc2lvbj4zMjA8L2V4aWY6UGl4ZWxYRGltZW5zaW9uPgogICAgICAgICA8ZXhpZjpVc2VyQ29tbWVudD5TY3JlZW5zaG90PC9leGlmOlVzZXJDb21tZW50PgogICAgICA8L3JkZjpEZXNjcmlwdGlvbj4KICAgPC9yZGY6UkRGPgo8L3g6eG1wbWV0YT4Kk67BXQAAJDZJREFUeAHtnQuwVVUZxxfKS+QlIKJg8pSXKJQ8VbxqqKBjJmlaUxra5KTWZPhoSoXR1DR10lIZHUsrRZMJy8hSlLhAoJiIFSqKegUUBHkICEgZ/2X7uO/hnHP3Pufsc/ZZ67dmzr377L3W2uv7ffv+73rvZqNGjfrYECAAAQh4SGAvD23GZAhAAAKWAALIgwABCHhLAAH01vUYDgEIIIA8AxCAgLcEEEBvXY/hEIAAAsgzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLAAH01vUYDgEIIIA8AxCAgLcEEEBvXY/hEIAAAsgzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLAAH01vUYDgEIIIA8AxCAgLcEEEBvXY/hEIAAAs |
gzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLAAH01vUYDgEIIIA8AxCAgLcEEEBvXY/hEIAAAsgzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLAAH01vUYDgEIIIA8AxCAgLcEEEBvXY/hEIAAAsgzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLoLm3lmN4LAIffvih0ee
}
},
"cell_type": "markdown",
"id": "cf69bb3f-94e6-4dba-92cd-ce08df117d67",
"metadata": {},
"source": [
"
"\n",
"\n",
"Sklearn based models are slightly finicky to get into a suitable onnx format. By default most tree based models will export into something that looks like this: \n",
"\n",
"\n",
"\n",
"\n",
"\n",
"Processing such nodes can be difficult and error prone. It would be much better if the operations of the tree were represented as a proper graph, possibly ... like this: \n",
"\n",
"\n",
"\n",
"\n",
"\n",
"This notebook showcases how to do that using the `sk2torch` python package ! "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "95613ee9",
"metadata": {},
"outputs": [],
"source": [
"
"try:\n",
"
" |
import google.colab\n",
" |
import subprocess\n",
" |
import sys\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"ezkl\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"onnx\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"sk2torch\"])\n",
"\n",
"
"except:\n",
" pass\n",
"\n",
"\n",
"
"\n",
"
" |
import json\n",
" |
import numpy as np\n",
"from sklearn.datasets |
import load_iris\n",
"from sklearn.model_selection |
import train_test_split\n",
"from sklearn.ensemble |
import RandomForestClassifier as Rf\n",
" |
import torch\n",
" |
import ezkl\n",
" |
import os\n",
"from hummingbird.ml |
import convert\n",
"\n",
"\n",
"\n",
"iris = load_iris()\n",
"X, y = iris.data, iris.target\n",
"X = X.astype(np.float32)\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y)\n",
"clr = Rf()\n",
"clr.fit(X_train, y_train)\n",
"\n",
"\n",
"\n",
"torch_rf = convert(clr, 'torch')\n",
"
"diffs = []\n",
"for i in range(len(X_test)):\n",
" torch_pred = torch_rf.predict(torch.tensor(X_test[i].reshape(1, -1)))\n",
" sk_pred = clr.predict(X_test[i].reshape(1, -1))\n",
" diffs.append(torch_pred[0].round() - sk_pred[0])\n",
"\n",
"print(\"num diffs\", sum(diffs))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b37637c4",
"metadata": {},
"outputs": [],
"source": [
"model_path = os.path.join('network.onnx')\n",
"compiled_model_path = os.path.join('network.compiled')\n",
"pk_path = os.path.join('test.pk')\n",
"vk_path = os.path.join('test.vk')\n",
"settings_path = os.path.join('settings.json')\n",
"\n",
"witness_path = os.path.join('witness.json')\n",
"data_path = os.path.join('input.json')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "82db373a",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"\n",
"
"\n",
"
"shape = X_train.shape[1:]\n",
"x = torch.rand(1, *shape, requires_grad=False)\n", |
"torch_out = torch_rf.predict(x)\n",
"
"torch.onnx.export(torch_rf.model,
"
" x,\n",
"
" \"network.onnx\",\n",
" export_params=True,
" opset_version=11,
" do_constant_folding=True,
" input_names=['input'],
" output_names=['output'],
" dynamic_axes={'input': {0: 'batch_size'},
" 'output': {0: 'batch_size'}})\n",
"\n",
"d = ((x).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_shapes=[shape],\n",
" input_data=[d],\n",
" output_data=[o.reshape([-1]).tolist() for o in torch_out])\n",
"\n",
"
"json.dump(data, open(\"input.json\", 'w'))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5e374a2",
"metadata": {},
"outputs": [],
"source": [
"!RUST_LOG=trace\n",
"
"res = ezkl.gen_settings(model_path, settings_path)\n",
"assert res == True\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cal_path = os.path.join(\"calibration.json\")\n",
"\n",
"data_array = (torch.rand(20, *shape, requires_grad=True).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_data = [data_array])\n", |
"\n",
"
"json.dump(data, open(cal_path, 'w'))\n",
"\n",
"\n",
"ezkl.calibrate_settings(cal_path, model_path, settings_path, \"resources\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3aa4f090",
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b74dcee",
"metadata": {},
"outputs": [],
"source": [
"
"res = ezkl.get_srs( settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18c8b7c7",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.gen_witness(data_path, compiled_model_path, witness_path)\n",
"assert os.path.isfile(witness_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b1c561a8",
"metadata": {},
"outputs": [],
"source": [
"\n",
"
"
"
"
"\n",
"\n",
"\n",
"res = ezkl.setup(\n",
" compiled_model_path,\n",
" vk_path,\n",
" pk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"assert os.path.isfile(vk_path)\n",
"assert os.path.isfile(pk_path)\n",
"assert |
os.path.isfile(settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c384cbc8",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"\n",
"proof_path = os.path.join('test.pf')\n",
"\n",
"res = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \n",
" \"single\",\n",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "76f00d41",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
" vk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"print(\"verified\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
{
"cells": [
{
"cell_type": "markdown",
"id": "cf69bb3f-94e6-4dba-92cd-ce08df117d67",
"metadata": {
"id": "cf69bb3f-94e6-4dba-92cd-ce08df117d67"
},
"source": [
"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "95613ee9",
"metadata": {
"id": "95613ee9"
},
"outputs": [],
"source": [
"
"try:\n",
"
" |
import google.colab\n",
" |
import subprocess\n",
" |
import sys\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"ezkl\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"onnx\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"pytest\"])\n",
"\n",
"
"except:\n",
" pass\n",
"\n",
" |
import logging\n",
"FORMAT = '%(levelname)s %(name)s %(asctime)-15s %(filename)s:%(lineno)d %(message)s'\n",
"logging.basicConfig(format=FORMAT)\n",
"logging.getLogger().setLevel(logging.DEBUG)\n",
"\n",
"
"\n",
"
"from torch |
import nn\n",
" |
import ezkl\n",
" |
import os\n",
" |
import json\n",
" |
import torch\n",
"\n",
" |
class MyModel(nn.Module):\n",
" def __init__(self):\n",
" super(MyModel, self).__init__()\n",
"\n",
" def forward(self, x, y):\n",
" diff = (x - y)\n",
" membership_test = torch.prod(diff, dim=1)\n",
" return (membership_test,y)\n",
"\n",
"\n",
"circuit = MyModel()\n",
"\n",
"
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b37637c4",
"metadata": {
"id": "b37637c4"
},
"outputs": [],
"source": [
"model_path = os.path.join('network.onnx')\n",
"compiled_model_path = os.path.join('network.compiled')\n",
"pk_path = os.path.join('test.pk')\n",
"vk_path = os.path.join('test.vk')\n",
"settings_path = os.path.join('settings.json')\n",
"\n",
"witness_path = os.path.join('witness.json')\n",
"data_path = os.path.join('input.json')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c833f08c",
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "c833f08c",
"outputId": "b5c794e1-c787-4b65-e267-c005e661df1b"
},
"outputs": [],
"source": [
"
"print(torch.__version__)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "82db373a",
"metadata": {
"id": "82db373a"
},
"outputs": [],
"source": [
"\n",
"\n",
"x = torch.zeros(1,*[1], requires_grad=True)\n",
"y = torch.tensor([0.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0], requires_grad=True)\n",
"\n",
"y_input = [0.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]\n",
"\n",
"
"result = []\n",
"\n",
"
"for e in y_input:\n",
"
" print(ezkl.float_to_felt(e,7))\n",
" result.append(ezkl.p |
oseidon_hash([ezkl.float_to_felt(e, 7)])[0])\n",
"\n",
"y = y.unsqueeze(0)\n",
"y = y.reshape(1, 9)\n",
"\n",
"
"circuit.eval()\n",
"\n",
"
"torch.onnx.export(circuit,
" (x,y),
" model_path,
" export_params=True,
" opset_version=14,
" do_constant_folding=True,
" input_names = ['input'],
" output_names = ['output'],
" dynamic_axes={'input' : {0 : 'batch_size'},
" 'output' : {0 : 'batch_size'}})\n",
"\n",
"data_array_x = ((x).detach().numpy()).reshape([-1]).tolist()\n",
"data_array_y = result\n",
"print(data_array_y)\n",
"\n",
"data = dict(input_data = [data_array_x, data_array_y])\n",
"\n",
"print(data)\n",
"\n",
"
"json.dump( data, open(data_path, 'w' ))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5e374a2",
"metadata": {
"id": "d5e374a2"
},
"outputs": [],
"source": [
"run_args = ezkl.PyRunArgs()\n",
"
"run_args.input_visibility = \"hashed/private/0\"\n",
"
"run_args.param_visibility = \"fixed\"\n",
"
"run_args.output_visibility = \"fixed\"\n",
"run_args.variables = [(\"batch_size\", 1)]\n",
"
"run_args.scale_rebase_multiplier = 1000\n",
"
"run_args.logrows = 11\n",
"\n",
"
"
"
"
"\n",
"\n",
"
"res = ezkl.gen_settings(model_path, settings_path, py_run_args=run_args)\n",
"assert res == True\n"
]
},
{
"cell_type": "code",
"execu |
tion_count": null,
"id": "3aa4f090",
"metadata": {
"id": "3aa4f090"
},
"outputs": [],
"source": [
"res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b74dcee",
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "8b74dcee",
"outputId": "f7b9198c-2b3d-48bb-c67e-8478333cedb5"
},
"outputs": [],
"source": [
"
"res = ezkl.get_srs( settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18c8b7c7",
"metadata": {
"id": "18c8b7c7"
},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.gen_witness(data_path, compiled_model_path, witness_path)\n",
"assert os.path.isfile(witness_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "Y94vCo5Znrim",
"metadata": {
"id": "Y94vCo5Znrim"
},
"outputs": [],
"source": [
"
"\n",
"data_path_faulty = os.path.join('input_faulty.json')\n",
"\n",
"witness_path_faulty = os.path.join('witness_faulty.json')\n",
"\n",
"x = torch.ones(1,*[1], requires_grad=True)\n",
"y = torch.tensor([0.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0], requires_grad=True)\n",
"\n",
"y = y.unsqueeze(0)\n",
"y = y.reshape(1, 9)\n",
"\n",
"data_array_x = ((x).detach().numpy()).reshape([-1]).tolist()\n",
"data_array_y = result\n",
"print(data_array_y)\n",
"\n",
"data = dict(input_data = [data_array_x, data_array_y])\n",
"\n",
"print(data)\n",
"\n",
"
"json.dump( data, open(data_path_faulty, 'w' ))\n",
"\n",
"res = ezkl.gen_witness(data_path_faulty, compiled_model_path, w |
itness_path_faulty)\n",
"assert os.path.isfile(witness_path_faulty)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "FQfGdcUNpvuK",
"metadata": {
"id": "FQfGdcUNpvuK"
},
"outputs": [],
"source": [
"
" |
import random\n",
"\n",
"
"random_value = random.randint(1, 8)\n",
"\n",
"data_path_truthy = os.path.join('input_truthy.json')\n",
"\n",
"witness_path_truthy = os.path.join('witness_truthy.json')\n",
"\n",
"set = [0.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]\n",
"\n",
"x = torch.tensor([set[random_value]])\n",
"y = torch.tensor(set, requires_grad=True)\n",
"\n",
"y = y.unsqueeze(0)\n",
"y = y.reshape(1, 9)\n",
"\n",
"x = x.unsqueeze(0)\n",
"x = x.reshape(1,1)\n",
"\n",
"data_array_x = ((x).detach().numpy()).reshape([-1]).tolist()\n",
"data_array_y = result\n",
"print(data_array_y)\n",
"\n",
"data = dict(input_data = [data_array_x, data_array_y])\n",
"\n",
"print(data)\n",
"\n",
"
"json.dump( data, open(data_path_truthy, 'w' ))\n",
"\n",
"res = ezkl.gen_witness(data_path_truthy, compiled_model_path, witness_path_truthy)\n",
"assert os.path.isfile(witness_path_truthy)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "41fd15a8",
"metadata": {},
"outputs": [],
"source": [
"witness = json.load(open(witness_path, \"r\"))\n",
"witness"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b1c561a8",
"metadata": {
"id": "b1c561a8"
},
"outputs": [],
"source": [
"\n",
"
"
"
"
"\n",
"
"
"witness = json.load(open(witness_path, \"r\"))\n",
"witness[\"outputs\"][0] = [\"0000000000000000000000000000000000000000000000000000000000000000\"]\n",
"json.dump(witness, open(witness_path, \"w\"))\n",
"\n",
"witness = json.load(open(witness_path, \"r\"))\n",
"print(witness[\"outputs\"][0])\n",
"\n",
"res = ezkl.setup |
(\n",
" compiled_model_path,\n",
" vk_path,\n",
" pk_path,\n",
" witness_path = witness_path,\n",
" )\n",
"\n",
"assert res == True\n",
"assert os.path.isfile(vk_path)\n",
"assert os.path.isfile(pk_path)\n",
"assert os.path.isfile(settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c384cbc8",
"metadata": {
"id": "c384cbc8"
},
"outputs": [],
"source": [
"
"\n",
"\n",
"proof_path = os.path.join('test.pf')\n",
"\n",
"res = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \n",
" \"single\",\n",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "XAC73EvtpM-W",
"metadata": {
"id": "XAC73EvtpM-W"
},
"outputs": [],
"source": [
"
"\n",
"\n",
"proof_path_faulty = os.path.join('test_faulty.pf')\n",
"\n",
"res = ezkl.prove(\n",
" witness_path_faulty,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path_faulty,\n",
" \n",
" \"single\",\n",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path_faulty)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "_x19Q4FUrKb6",
"metadata": {
"id": "_x19Q4FUrKb6"
},
"outputs": [],
"source": [
"
"\n",
"\n",
"proof_path_truthy = os.path.join('test_truthy.pf')\n",
"\n",
"res = ezkl.prove(\n",
" witness_path_truthy,\n", |
" compiled_model_path,\n",
" pk_path,\n",
" proof_path_truthy,\n",
" \n",
" \"single\",\n",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path_truthy)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "76f00d41",
"metadata": {
"id": "76f00d41"
},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
" vk_path,\n",
" \n",
" )\n",
"assert res == True\n",
"\n",
"res = ezkl.verify(\n",
" proof_path_truthy,\n",
" settings_path,\n",
" vk_path,\n",
" \n",
" )\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4nqEx7-qpciQ",
"metadata": {
"id": "4nqEx7-qpciQ"
},
"outputs": [],
"source": [
" |
import pytest\n",
"def test_verification():\n",
" with pytest.raises(RuntimeError, match='Failed to run verify: The constraint system is not satisfied'):\n",
" ezkl.verify(\n",
" proof_path_faulty,\n",
" settings_path,\n",
" vk_path,\n",
" \n",
" )\n",
"\n",
"
"test_verification()"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "cf69bb3f-94e6-4dba-92cd-ce08df117d67",
"metadata": {},
"source": [
"
"\n",
"Demonstrates how to use EZKL with aggregated proofs"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "95613ee9",
"metadata": {},
"outputs": [],
"source": [
"
"try:\n",
"
" |
import google.colab\n",
" |
import subprocess\n",
" |
import sys\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"ezkl\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"onnx\"])\n",
"\n",
"
"except:\n",
" pass\n",
"\n",
"\n",
"
"\n",
"
"from torch |
import nn\n",
" |
import ezkl\n",
" |
import os\n",
" |
import json\n",
" |
import torch\n",
"\n",
"\n",
"
"
"
"\n",
" |
class MyModel(nn.Module):\n",
" def __init__(self):\n",
" super(MyModel, self).__init__()\n",
"\n",
" self.conv1 = nn.Conv2d(in_channels=1, out_channels=2, kernel_size=5, stride=2)\n",
" self.conv2 = nn.Conv2d(in_channels=2, out_channels=3, kernel_size=5, stride=2)\n",
"\n",
" self.relu = nn.ReLU()\n",
"\n",
" self.d1 = nn.Linear(48, 48)\n",
" self.d2 = nn.Linear(48, 10)\n",
"\n",
" def forward(self, x):\n",
"
" x = self.conv1(x)\n",
" x = self.relu(x)\n",
" x = self.conv2(x)\n",
" x = self.relu(x)\n",
"\n",
"
" x = x.flatten(start_dim = 1)\n",
"\n",
"
" x = self.d1(x)\n",
" x = self.relu(x)\n",
"\n",
"
" logits = self.d2(x)\n",
"\n",
" return logits\n",
"\n",
"\n",
"circuit = MyModel()\n",
"\n",
"
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b37637c4",
"metadata": {},
"outputs": [],
"source": [
"model_path = os.path.join('network.onnx')\n",
"compiled_model_path = os.path.join('network.compiled')\n",
"pk_path = os.path.join('test.pk')\n",
"vk_path = os.path.join('test.vk')\n",
"proof_path = os.path.join('test.pf')\n",
"settings_path = os.path.join('settings.json')\n",
"srs_p |
ath = os.path.join('kzg.srs')\n",
"witness_path = os.path.join('witness.json')\n",
"data_path = os.path.join('input.json')\n",
"aggregate_proof_path = os.path.join('aggr.pf')\n",
"aggregate_vk_path = os.path.join('aggr.vk')\n",
"aggregate_pk_path = os.path.join('aggr.pk')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "82db373a",
"metadata": {},
"outputs": [],
"source": [
"\n",
"shape = [1, 28, 28]\n",
"
"x = 0.1*torch.rand(1,*shape, requires_grad=True)\n",
"\n",
"
"circuit.eval()\n",
"\n",
"
"torch.onnx.export(circuit,
" x,
" model_path,
" export_params=True,
" opset_version=10,
" do_constant_folding=True,
" input_names = ['input'],
" output_names = ['output'],
" dynamic_axes={'input' : {0 : 'batch_size'},
" 'output' : {0 : 'batch_size'}})\n",
"\n",
"data_array = ((x).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_data = [data_array])\n",
"\n",
"
"json.dump( data, open(data_path, 'w' ))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5e374a2",
"metadata": {},
"outputs": [],
"source": [ |
"!RUST_LOG=trace\n",
"
"res = ezkl.gen_settings(model_path, settings_path)\n",
"assert res == True\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cal_path = os.path.join(\"calibration.json\")\n",
"\n",
"data_array = (torch.rand(20, *shape, requires_grad=True).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_data = [data_array])\n",
"\n",
"
"json.dump(data, open(cal_path, 'w'))\n",
"\n",
"\n",
"ezkl.calibrate_settings(cal_path, model_path, settings_path, \"resources\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3aa4f090",
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b74dcee",
"metadata": {},
"outputs": [],
"source": [
"
"res = ezkl.get_srs( settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18c8b7c7",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.gen_witness(data_path, compiled_model_path, witness_path)\n",
"assert os.path.isfile(witness_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b1c561a8" |
,
"metadata": {},
"outputs": [],
"source": [
"\n",
"
"
"
"
"\n",
"\n",
"\n",
"res = ezkl.setup(\n",
" compiled_model_path,\n",
" vk_path,\n",
" pk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"assert os.path.isfile(vk_path)\n",
"assert os.path.isfile(pk_path)\n",
"assert os.path.isfile(settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c384cbc8",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"\n",
"proof_path = os.path.join('test.pf')\n",
"\n",
"res = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \n",
" \"for-aggr\",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "76f00d41",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
" vk_path,\n",
" )\n",
"\n",
"assert res == True\n",
"print(\"verified\")"
]
},
{
"c |
ell_type": "code",
"execution_count": null,
"id": "0832b909",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.get_srs(settings_path=None, logrows=21, commitment=ezkl.PyCommitments.KZG)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c5a64be6",
"metadata": {},
"outputs": [],
"source": [
"
"
"\n",
"res = ezkl.mock_aggregate([proof_path], 21)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fee8acc6",
"metadata": {},
"outputs": [],
"source": [
"
"res = ezkl.setup_aggregate(\n",
" [proof_path],\n",
" aggregate_vk_path,\n",
" aggregate_pk_path,\n",
" 21\n",
")\n",
"\n",
"assert os.path.isfile(aggregate_vk_path)\n",
"assert os.path.isfile(aggregate_pk_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "171702d3",
"metadata": {},
"outputs": [],
"source": [
"
"res = ezkl.aggregate(\n",
" [proof_path],\n",
" aggregate_proof_path,\n",
" aggregate_pk_path,\n",
" \"evm\",\n",
" 21,\n",
" \"safe\"\n",
")\n",
"\n",
"assert os.path.isfile(aggregate_proof_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "671dfdd5",
"metada |
ta": {},
"outputs": [],
"source": [
"
"res = ezkl.verify_aggr(\n",
" aggregate_proof_path,\n",
" aggregate_vk_path,\n",
" 21,\n",
")\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "50eba2f4",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"sol_code_path = os.path.join(\"Verifier.sol\")\n",
"abi_path = os.path.join(\"Verifier_ABI.json\")\n",
"\n",
"res = ezkl.create_evm_verifier_aggr(\n",
" [settings_path],\n",
" aggregate_vk_path,\n",
" sol_code_path,\n",
" abi_path,\n",
" logrows=21)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "cf69bb3f-94e6-4dba-92cd-ce08df117d67",
"metadata": {},
"source": [
"
"\n",
"Here we demonstrate the use of the EZKL package in a Jupyter notebook whereby all components of the circuit are public or pre-committed to. This is the simplest case of using EZKL (proof of computation)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "95613ee9",
"metadata": {},
"outputs": [],
"source": [
"
"try:\n",
"
" |
import google.colab\n",
" |
import subprocess\n",
" |
import sys\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"ezkl\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"onnx\"])\n",
"\n",
"
"except:\n",
" pass\n",
"\n",
"\n",
"
"\n",
"
"from torch |
import nn\n",
" |
import ezkl\n",
" |
import os\n",
" |
import json\n",
" |
import torch\n",
"\n",
"\n",
"
"
"
"\n",
" |
class MyModel(nn.Module):\n",
" def __init__(self):\n",
" super(MyModel, self).__init__()\n",
"\n",
" self.conv1 = nn.Conv2d(in_channels=1, out_channels=2, kernel_size=5, stride=2)\n",
" self.conv2 = nn.Conv2d(in_channels=2, out_channels=3, kernel_size=5, stride=2)\n",
"\n",
" self.relu = nn.ReLU()\n",
"\n",
" self.d1 = nn.Linear(48, 48)\n",
" self.d2 = nn.Linear(48, 10)\n",
"\n",
" def forward(self, x):\n",
"
" x = self.conv1(x)\n",
" x = self.relu(x)\n",
" x = self.conv2(x)\n",
" x = self.relu(x)\n",
"\n",
"
" x = x.flatten(start_dim = 1)\n",
"\n",
"
" x = self.d1(x)\n",
" x = self.relu(x)\n",
"\n",
"
" logits = self.d2(x)\n",
"\n",
" return logits\n",
"\n",
"\n",
"circuit = MyModel()\n",
"\n",
"
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b37637c4",
"metadata": {},
"outputs": [],
"source": [
"model_path = os.path.join('network.onnx')\n",
"compiled_model_path = os.path.join('network.compiled')\n",
"pk_path = os.path.join('test.pk')\n",
"vk_path = os.path.join('test.vk')\n",
"settings_path = os.path.join('settings.json')\n",
"\n",
"witness_path = os.path.join('witness.json |
')\n",
"data_path = os.path.join('input.json')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "82db373a",
"metadata": {},
"outputs": [],
"source": [
"\n",
"shape = [1, 28, 28]\n",
"
"x = 0.1*torch.rand(1,*shape, requires_grad=True)\n",
"\n",
"
"circuit.eval()\n",
"\n",
"
"torch.onnx.export(circuit,
" x,
" model_path,
" export_params=True,
" opset_version=10,
" do_constant_folding=True,
" input_names = ['input'],
" output_names = ['output'],
" dynamic_axes={'input' : {0 : 'batch_size'},
" 'output' : {0 : 'batch_size'}})\n",
"\n",
"data_array = ((x).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_data = [data_array])\n",
"\n",
"
"json.dump( data, open(data_path, 'w' ))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5e374a2",
"metadata": {},
"outputs": [],
"source": [
"\n",
"py_run_args = ezkl.PyRunArgs()\n",
"py_run_args.input_visibility = \"public\"\n",
"py_run_args.output_visibility = \"public\"\n",
"py_run_args.param_visibility = \"fixed\"
"\n",
"res |
= ezkl.gen_settings(model_path, settings_path, py_run_args=py_run_args)\n",
"assert res == True\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cal_path = os.path.join(\"calibration.json\")\n",
"\n",
"data_array = (torch.rand(20, *shape, requires_grad=True).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_data = [data_array])\n",
"\n",
"
"json.dump(data, open(cal_path, 'w'))\n",
"\n",
"\n",
"ezkl.calibrate_settings(cal_path, model_path, settings_path, \"resources\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3aa4f090",
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b74dcee",
"metadata": {},
"outputs": [],
"source": [
"
"res = ezkl.get_srs( settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18c8b7c7",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.gen_witness(data_path, compiled_model_path, witness_path)\n",
"assert os.path.isfile(witness_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b1c561a8",
"metadata": {},
"outputs": [], |
"source": [
"\n",
"
"
"
"
"\n",
"\n",
"\n",
"res = ezkl.setup(\n",
" compiled_model_path,\n",
" vk_path,\n",
" pk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"assert os.path.isfile(vk_path)\n",
"assert os.path.isfile(pk_path)\n",
"assert os.path.isfile(settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c384cbc8",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"\n",
"proof_path = os.path.join('test.pf')\n",
"\n",
"res = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \n",
" \"single\",\n",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "76f00d41",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
" vk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"print(\"verified\")"
]
}
],
"metadata": {
"kernelspec": { |
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "cf69bb3f-94e6-4dba-92cd-ce08df117d67",
"metadata": {},
"source": [
"
"\n",
"Here we demonstrate how to use the EZKL package to run a private network on public data to produce a public output.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "95613ee9",
"metadata": {},
"outputs": [],
"source": [
"
"try:\n",
"
" |
import google.colab\n",
" |
import subprocess\n",
" |
import sys\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"ezkl\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"onnx\"])\n",
"\n",
"
"except:\n",
" pass\n",
"\n",
"\n",
"
"\n",
"
"from torch |
import nn\n",
" |
import ezkl\n",
" |
import os\n",
" |
import json\n",
" |
import torch\n",
"\n",
"\n",
"
"
"
"\n",
" |
class MyModel(nn.Module):\n",
" def __init__(self):\n",
" super(MyModel, self).__init__()\n",
"\n",
" self.conv1 = nn.Conv2d(in_channels=1, out_channels=2, kernel_size=5, stride=2)\n",
" self.conv2 = nn.Conv2d(in_channels=2, out_channels=3, kernel_size=5, stride=2)\n",
"\n",
" self.relu = nn.ReLU()\n",
"\n",
" self.d1 = nn.Linear(48, 48)\n",
" self.d2 = nn.Linear(48, 10)\n",
"\n",
" def forward(self, x):\n",
"
" x = self.conv1(x)\n",
" x = self.relu(x)\n",
" x = self.conv2(x)\n",
" x = self.relu(x)\n",
"\n",
"
" x = x.flatten(start_dim = 1)\n",
"\n",
"
" x = self.d1(x)\n",
" x = self.relu(x)\n",
"\n",
"
" logits = self.d2(x)\n",
"\n",
" return logits\n",
"\n",
"\n",
"circuit = MyModel()\n",
"\n",
"
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b37637c4",
"metadata": {},
"outputs": [],
"source": [
"model_path = os.path.join('network.onnx')\n",
"compiled_model_path = os.path.join('network.compiled')\n",
"pk_path = os.path.join('test.pk')\n",
"vk_path = os.path.join('test.vk')\n",
"settings_path = os.path.join('settings.json')\n",
"\n",
"witness_path = os.path.join('witness.json |
')\n",
"data_path = os.path.join('input.json')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "82db373a",
"metadata": {},
"outputs": [],
"source": [
"\n",
"shape = [1, 28, 28]\n",
"
"x = 0.1*torch.rand(1,*shape, requires_grad=True)\n",
"\n",
"
"circuit.eval()\n",
"\n",
"
"torch.onnx.export(circuit,
" x,
" model_path,
" export_params=True,
" opset_version=10,
" do_constant_folding=True,
" input_names = ['input'],
" output_names = ['output'],
" dynamic_axes={'input' : {0 : 'batch_size'},
" 'output' : {0 : 'batch_size'}})\n",
"\n",
"data_array = ((x).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_data = [data_array])\n",
"\n",
"
"json.dump( data, open(data_path, 'w' ))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5e374a2",
"metadata": {},
"outputs": [],
"source": [
"py_run_args = ezkl.PyRunArgs()\n",
"py_run_args.input_visibility = \"public\"\n",
"py_run_args.output_visibility = \"public\"\n",
"py_run_args.param_visibility = \"private\"
"\n",
"res = ezkl.gen_settings( |
model_path, settings_path, py_run_args=py_run_args)\n",
"assert res == True\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cal_path = os.path.join(\"calibration.json\")\n",
"\n",
"data_array = (torch.rand(20, *shape, requires_grad=True).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_data = [data_array])\n",
"\n",
"
"json.dump(data, open(cal_path, 'w'))\n",
"\n",
"\n",
"ezkl.calibrate_settings(cal_path, model_path, settings_path, \"resources\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3aa4f090",
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b74dcee",
"metadata": {},
"outputs": [],
"source": [
"
"res = ezkl.get_srs( settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18c8b7c7",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.gen_witness(data_path, compiled_model_path, witness_path)\n",
"assert os.path.isfile(witness_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b1c561a8",
"metadata": {},
"outputs": [],
"source": [ |
"\n",
"
"
"
"
"\n",
"\n",
"\n",
"res = ezkl.setup(\n",
" compiled_model_path,\n",
" vk_path,\n",
" pk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"assert os.path.isfile(vk_path)\n",
"assert os.path.isfile(pk_path)\n",
"assert os.path.isfile(settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c384cbc8",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"\n",
"proof_path = os.path.join('test.pf')\n",
"\n",
"res = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \n",
" \"single\",\n",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "76f00d41",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
" vk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"print(\"verified\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name" |
: "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "cf69bb3f-94e6-4dba-92cd-ce08df117d67",
"metadata": {},
"source": [
"
"\n",
"Here we demonstrate how to use the EZKL package to run a publicly known / committed to network on some private data, producing a public output.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "95613ee9",
"metadata": {},
"outputs": [],
"source": [
"
"try:\n",
"
" |
import google.colab\n",
" |