{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch.nn as nn\n",
"\n",
"\n",
"class MyModel(nn.Module):\n",
"    def __init__(self):\n",
"        super(MyModel, self).__init__()\n",
"        self.layer = nn.AvgPool2d(2, 1, (1, 1))\n",
"\n",
"    def forward(self, x):\n",
"        return self.layer(x)[0]\n",
"\n",
"\n",
"circuit = MyModel()\n",
"\n",
"# training would happen here; we skip it for this demo\n",
"\n"
]
},
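{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, we can run a random tensor through the model to confirm the output shape:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# AvgPool2d(2, 1, (1, 1)) maps a (1, 3, 2, 2) input to (1, 3, 3, 3);\n",
"# indexing with [0] in forward() drops the batch dimension\n",
"print(circuit(torch.rand(1, 3, 2, 2)).shape)  # torch.Size([3, 3, 3])\n"
]
},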
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We omit training for purposes of this demonstration. We've marked where training would happen in the cell above. \n",
"Now we export the model to onnx and create a corresponding (randomly generated) input. This input data will eventually be stored on chain and read from according to the call_data field in the graph input.\n",
"\n",
"You can replace the random `x` with real data if you so wish. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"x = 0.1*torch.rand(1,*[3, 2, 2], requires_grad=True)\n",
"\n",
"
"circuit.eval()\n",
"\n",
"
"torch.onnx.export(circuit,
" x,
" \"network.onnx\",
" export_params=True,
" opset_version=10,
" do_constant_folding=True,
" input_names = ['input'],
" output_names = ['output'],
" |
dynamic_axes={'input' : {0 : 'batch_size'},
" 'output' : {0 : 'batch_size'}})\n",
"\n",
"data_array = ((x).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_data = [data_array])\n",
"\n",
"
"json.dump(data, open(\"input.json\", 'w' ))\n",
"\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We now define a function that will create a new anvil instance which we will deploy our test contract too. This contract will contain in its storage the data that we will read from and attest to. In production you would not need to set up a local anvil instance. Instead you would replace RPC_URL with the actual RPC endpoint of the chain you are deploying your verifiers too, reading from the data on said chain."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
" |
import subprocess\n",
" |
import time\n",
" |
import threading\n",
"\n",
"
"
"\n",
"RPC_URL = \"http:
"\n",
"
"anvil_process = None\n",
"\n",
"def start_anvil():\n",
" global anvil_process\n",
" if anvil_process is None:\n",
" anvil_process = subprocess.Popen([\"anvil\", \"-p\", \"3030\", \"--code-size-limit=41943040\"])\n",
" if anvil_process.returncode is not None:\n",
" raise Exception(\"failed to start anvil process\")\n",
" time.sleep(3)\n",
"\n",
"def stop_anvil():\n",
" global anvil_process\n",
" if anvil_process is not None:\n",
" anvil_process.terminate()\n",
" anvil_process = None\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We define our `PyRunArgs` objects which contains the visibility parameters for out model. \n",
"- `input_visibility` defines the visibility of the model inputs\n",
"- `param_visibility` defines the visibility of the model weights and constants and parameters \n",
"- `output_visibility` defines the visibility of the model outputs\n",
"\n",
"Here we create the following setup:\n",
"- `input_visibility`: \"private\"\n",
"- `param_visibility`: \"private\"\n",
"- `output_visibility`: hashed\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
" |
import ezkl\n",
"\n",
"model_path = os.path.join('network.onnx')\n",
"compiled_model_path = os.path.join('network.compiled')\n",
"pk_path = os.path.join('test.pk')\n",
"vk_path = os.path.join('test.vk')\n",
"settings_path = os.path.join('settings.json')\n",
"srs_path = os.path.join('kzg.srs')\n",
"data_path = os.path.join('input.json')\n",
"\n",
"run_args = ezkl.PyRunArgs()\n",
"run_args.input_visibility = \"private\"\n",
"run_args.param_visibility = \"private\"\n",
"run_args.output_visibility = \"hashed\"\n",
"run_args.variables = [(\"batch_size\", 1)]\n",
"\n",
"\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we generate a settings file. This file basically instantiates a bunch of parameters that determine their circuit shape, size etc... Because of the way we represent nonlinearities in the circuit (using Halo2's [lookup tables](https:
"\n",
"You can pass a dataset for calibration that will be representative of real inputs you might find if and when you deploy the prover. Here we create a dummy calibration dataset for demonstration purposes. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!RUST_LOG=trace\n",
"
"res = ezkl.gen_settings(model_path, settings_path, py_run_args=run_args)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [ |
"
"cal_data = {\n",
" \"input_data\": [(0.1*torch.rand(2, *[3, 2, 2])).flatten().tolist()],\n",
"}\n",
"\n",
"cal_path = os.path.join('val_data.json')\n",
"
"with open(cal_path, \"w\") as f:\n",
" json.dump(cal_data, f)\n",
"\n",
"res = ezkl.calibrate_settings(cal_path, model_path, settings_path, \"resources\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"assert res == True"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As we use Halo2 with KZG-commitments we need an SRS string from (preferably) a multi-party trusted setup ceremony. For an overview of the procedures for such a ceremony check out [this page](https:
"\n",
"These SRS were generated with [this](https:
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.get_srs( settings_path)\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We now need to generate the circuit witness. These are the model outputs (and any hashes) that are generated when feeding the previously generated `input.json` through the circuit / model. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [ |
"!export RUST_BACKTRACE=1\n",
"\n",
"witness_path = \"witness.json\"\n",
"\n",
"res = ezkl.gen_witness(data_path, compiled_model_path, witness_path)\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(ezkl.felt_to_big_endian(res['processed_outputs']['poseidon_hash'][0]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now post the hashes of the outputs to the chain. This is the data that will be read from and attested to."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from web3 |
import Web3, HTTPProvider\n",
"from solcx |
import compile_standard\n",
"from decimal |
import Decimal\n",
" |
import json\n",
" |
import os\n",
" |
import torch\n",
"\n",
"\n",
"
"w3 = Web3(HTTPProvider(RPC_URL))\n",
"\n",
"def test_on_chain_data(res):\n",
"
" data = [int(ezkl.felt_to_big_endian(res['processed_outputs']['poseidon_hash'][0]), 0)]\n",
"\n",
"
"
"
"
" contract_source_code = '''\n",
"
" pragma solidity ^0.8.17;\n",
"\n",
" contract TestReads {\n",
"\n",
" uint[] public arr;\n",
" constructor(uint256[] memory _numbers) {\n",
" for(uint256 i = 0; i < _numbers.length; i++) {\n",
" arr.push(_numbers[i]);\n",
" }\n",
" }\n",
" }\n",
" '''\n",
"\n",
" compiled_sol = compile_standard({\n",
" \"language\": \"Solidity\",\n",
" \"sources\": {\"testreads.sol\": {\"content\": contract_source_code}},\n",
" \"settings\": {\"outputSelection\": {\"*\": {\"*\": [\"metadata\", \"evm.bytecode\", \"abi\"]}}}\n",
" })\n",
"\n",
"
" bytecode = compiled_sol['contracts']['testreads.sol']['TestReads']['evm']['bytecode']['object']\n",
"\n",
"
"
"
" abi = json.loads(compiled_sol['contracts']['testreads.sol']['TestReads']['metadata'])['output']['abi']\n",
"\n",
"
" TestReads = w3.eth.contract(abi=abi, bytecode=bytecode)\n",
" tx_hash = TestReads.constructor(data).transact()\n",
" tx_receipt = |
w3.eth.wait_for_transaction_receipt(tx_hash)\n",
"
"
" contract = w3.eth.contract(address=tx_receipt['contractAddress'], abi=abi)\n",
"\n",
"
" calldata = []\n",
" for i, _ in enumerate(data):\n",
" call = contract.functions.arr(i).build_transaction()\n",
" calldata.append((call['data'][2:], 0))\n",
"\n",
"
"
"
"
" calls_to_account = [{\n",
" 'call_data': calldata,\n",
" 'address': contract.address[2:],
" }]\n",
"\n",
" print(f'calls_to_account: {calls_to_account}')\n",
"\n",
" return calls_to_account\n",
"\n",
"
"start_anvil()\n",
"\n",
"
"calls_to_account = test_on_chain_data(res)\n",
"\n",
"data = dict(input_data = [data_array], output_data = {'rpc': RPC_URL, 'calls': calls_to_account })\n",
"\n",
"
"json.dump(data, open(\"input.json\", 'w'))\n",
"\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Here we setup verifying and proving keys for the circuit. As the name suggests the proving key is needed for ... proving and the verifying key is needed for ... verifying. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"
"
"
"res = ezkl.setup(\n",
" |
compiled_model_path,\n",
" vk_path,\n",
" pk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"assert os.path.isfile(vk_path)\n",
"assert os.path.isfile(pk_path)\n",
"assert os.path.isfile(settings_path)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we generate a full proof. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"proof_path = os.path.join('test.pf')\n",
"\n",
"res = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \n",
" \"single\",\n",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"And verify it as a sanity check. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
" vk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"print(\"v |
erified\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now create and then deploy a vanilla evm verifier."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"abi_path = 'test.abi'\n",
"sol_code_path = 'test.sol'\n",
"\n",
"res = ezkl.create_evm_verifier(\n",
" vk_path,\n",
" \n",
" settings_path,\n",
" sol_code_path,\n",
" abi_path,\n",
" )\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
" |
import json\n",
"\n",
"addr_path_verifier = \"addr_verifier.txt\"\n",
"\n",
"res = ezkl.deploy_evm(\n",
" addr_path_verifier,\n",
" sol_code_path,\n",
" 'http:
")\n",
"\n",
"assert res == True"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"With the vanilla verifier deployed, we can now create the data attestation contract, which will read in the instances from the calldata to the verifier, attest to them, call the verifier and then return the result. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"abi_path = 'test.abi'\n",
"sol_code_path = 'test.sol'\n",
"input_path = 'input.json'\n",
"\n",
"res = ezkl.create_evm_data_attestation(\n",
" input_path,\n",
" settings_path,\n",
" sol_code_path,\n",
" abi_path,\n",
" )"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can deploy the data attest verifier contract. For security reasons, this binding will only deploy to a local anvil instance, using accounts generated by anvil. \n",
"So should only be used for testing purposes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"addr_path_da = \"addr_da.txt\"\n",
"\n", |
"res = ezkl.deploy_da_evm(\n",
" addr_path_da,\n",
" input_path,\n",
" settings_path,\n",
" sol_code_path,\n",
" RPC_URL,\n",
" )\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the view only verify method on the contract to verify the proof. Since it is a view function this is safe to use in production since you don't have to pass your private key."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"addr_verifier = None\n",
"with open(addr_path_verifier, 'r') as f:\n",
" addr = f.read()\n",
"
"addr_da = None\n",
"with open(addr_path_da, 'r') as f:\n",
" addr_da = f.read()\n",
"\n",
"res = ezkl.verify_evm(\n",
" addr,\n",
" proof_path,\n",
" RPC_URL,\n",
" addr_da,\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "ezkl",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"attachments": {
"image-2.png": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAokAAARDCAYAAAAEdLvJAAABYmlDQ1BJQ0MgUHJvZmlsZQAAKJF1kDFLw1AUhU9stSAVHRwEHQKKUy01rdi1LSKCQxoVqlvyWlMlTR9JRNTFQRengi5uUhd/gS4OjoKDguAgIoKDP0DsoiXeNGpbxft43I/DvYfDBTrCKudGEEDJdCxlOi3mFpfE0Au60ENPQEBlNk/J8iyN4Lu3V+2O5qhuxzyvq5px+bw3PJi1N6Nscmv173xbdecLNqP+QT/BuOUAQoxYXne4x9vE/RaFIj7wWPf5xGPN5/PGzLySIb4h7mNFNU/8RBzRWnS9hUvGGvvK4KUPF8yFOeoD9IeQRgEmshAxhRzimEAM41D+2Uk0djIog2MDFlagowiHtlOkcBjkJmKGHBmiiBBL5Cch7t369w2bWrkKJN+AQKWpaYfA2S7FvG9qI0dA7w5wes1VS/25rFAL2stxyedwGuh8dN3XUSC0D9Qrrvtedd36Mfk/ABfmJ+uTZFvl1hD0AAAAVmVYSWZNTQAqAAAACAABh2kABAAAAAEAAAAaAAAAAAADkoYABwAAABIAAABEoAIABAAAAAEAAAKJoAMABAAAAAEAAARDAAAAAEFTQ0lJAAAAU2NyZWVuc2hvdIWiHYkAAAHXaVRYdFhNTDpjb20uYWRvYmUueG1wAAAAAAA8eDp4bXBtZXRhIHhtbG5zOng9ImFkb2JlOm5zOm1ldGEvIiB4OnhtcHRrPSJYTVAgQ29yZSA2LjAuMCI+CiAgIDxyZGY6UkRGIHhtbG5zOnJkZj0iaHR0cDovL3d3dy53My5vcmcvMTk5OS8wMi8yMi1yZGYtc3ludGF4LW5zIyI+CiAgICAgIDxyZGY6RGVzY3JpcHRpb24gcmRmOmFib3V0PSIiCiAgICAgICAgICAgIHhtbG5zOmV4aWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20vZXhpZi8xLjAvIj4KICAgICAgICAgPGV4aWY6UGl4ZWxZRGltZW5zaW9uPjEwOTE8L2V4aWY6UGl4ZWxZRGltZW5zaW9uPgogICAgICAgICA8ZXhpZjpQaXhlbFhEaW1lbnNpb24+NjQ5PC9leGlmOlBpeGVsWERpbWVuc2lvbj4KICAgICAgICAgPGV4aWY6VXNlckNvbW1lbnQ+U2NyZWVuc2hvdDwvZXhpZjpVc2VyQ29tbWVudD4KICAgICAgPC9yZGY6RGVzY3JpcHRpb24+CiAgIDwvcmRmOlJERj4KPC94OnhtcG1ldGE+CvSCr3YAAEAASURBVHgB7N0HnJxVvf/x3yabZNOzm03vvYeQgksCgtwAERCu8kfEgqKgIIkXEBWuoOhFAa/8QSkq/lVQrBcvoFIEpBMCCSQB0sum97LZJJtNNtn88z34rJOdLdPnKZ/zeg0z+8xTznmfUX6cWlBWVnbUSAgggAACCCCAAAIIxAi0iPnMRwQQQAABBBBAAAEEnABBIj8EBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAgggg |
ABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggABBIr8BBBBAAAEEEEAAgTgBgsQ4Eg4ggAACCCCAAAIIECTyG0AAAQQQQAABBBCIEyBIjCPhAAIIIIAAAggggEAhBAgggAAC/hI4ePCgVVdXW21trbVs2TLnmTty5Ih7buvWra2oqCjnz+eBCCDgDwGCRH/UA7lAAAEErKamxnbv3u0kWrRokZcA0auGQ4cOuUB1z549VlxcbAoYSQggEC0BgsRo1TelRQABnwqo9VABmYKxLl26+CaXFRUVplfnzp2tTZs2vskXGUEAgewLMCYx+8Y8AQEEEGhWYO/evVZYWOirAFGZVsDaqlUrq6ysbLYMnIAAAuESIEgMV31SGgQQCKDA4cOHTS2JJSUlvsy9upvVFa4XCQEEoiNAkBiduqakCCDgUwEFiWqt83NS/pRPEgIIREeAIDE6dU1JEUDAxwIFBQU+zp2Z3/Pnazwyh0BABQgSA1pxZBsBBKIjMH78eDdeMTolpqQIIOAHAYJEP9QCeUAAAQSaEPjwhz+clQktah288MILm3gyXyGAQJQFCBKjXPuUHQEEAiFwxx132I4dOzKeVwWJZ599dsbvyw0RQCAcAi379u17SziKQikQQACBYApoQogWr27fvn2DBbj55pvtrbfeso4dO9qsWbNs+PDh9qlPfcqGDh1qixcvdrOO9feAAQPc8fPOO8/tlLJs2TJ3v9tvv92ee+459 |
7m0tNS+/OUv2+zZs+3rX/+69erVyyZMmGArV65scpmbqqoqt4aj3yfYNAjIQQQQSEmAlsSU2LgIAQQQyJ1Ap06d3MQRbdE3atQoe/bZZ+0b3/iG27Zv8uTJLiMKME844QS79dZb7T
},
"image.png": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAUAAAAEbCAYAAACr2V2eAAABYmlDQ1BJQ0MgUHJvZmlsZQAAKJF1kDFLw1AUhU9stSAVHRwEHQKKUy01rdi1LSKCQxoVqlvyWlMlTR9JRNTFQRengi5uUhd/gS4OjoKDguAgIoKDP0DsoiXeNGpbxft43I/DvYfDBTrCKudGEEDJdCxlOi3mFpfE0Au60ENPQEBlNk/J8iyN4Lu3V+2O5qhuxzyvq5px+bw3PJi1N6Nscmv173xbdecLNqP+QT/BuOUAQoxYXne4x9vE/RaFIj7wWPf5xGPN5/PGzLySIb4h7mNFNU/8RBzRWnS9hUvGGvvK4KUPF8yFOeoD9IeQRgEmshAxhRzimEAM41D+2Uk0djIog2MDFlagowiHtlOkcBjkJmKGHBmiiBBL5Cch7t369w2bWrkKJN+AQKWpaYfA2S7FvG9qI0dA7w5wes1VS/25rFAL2stxyedwGuh8dN3XUSC0D9Qrrvtedd36Mfk/ABfmJ+uTZFvl1hD0AAAAVmVYSWZNTQAqAAAACAABh2kABAAAAAEAAAAaAAAAAAADkoYABwAAABIAAABEoAIABAAAAAEAAAFAoAMABAAAAAEAAAEbAAAAAEFTQ0lJAAAAU2NyZWVuc2hvdP5iyG4AAAHWaVRYdFhNTDpjb20uYWRvYmUueG1wAAAAAAA8eDp4bXBtZXRhIHhtbG5zOng9ImFkb2JlOm5zOm1ldGEvIiB4OnhtcHRrPSJYTVAgQ29yZSA2LjAuMCI+CiAgIDxyZGY6UkRGIHhtbG5zOnJkZj0iaHR0cDovL3d3dy53My5vcmcvMTk5OS8wMi8yMi1yZGYtc3ludGF4LW5zIyI+CiAgICAgIDxyZGY6RGVzY3JpcHRpb24gcmRmOmFib3V0PSIiCiAgICAgICAgICAgIHhtbG5zOmV4aWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20vZXhpZi8xLjAvIj4KICAgICAgICAgPGV4aWY6UGl4ZWxZRGltZW5zaW9uPjI4MzwvZXhpZjpQaXhlbFlEaW1lbnNpb24+CiAgICAgICAgIDxleGlmOlBpeGVsWERpbWVuc2lvbj4zMjA8L2V4aWY6UGl4ZWxYRGltZW5zaW9uPgogICAgICAgICA8ZXhpZjpVc2VyQ29tbWVudD5TY3JlZW5zaG90PC9leGlmOlVzZXJDb21tZW50PgogICAgICA8L3JkZjpEZXNjcmlwdGlvbj4KICAgPC9yZGY6UkRGPgo8L3g6eG1wbWV0YT4Kk67BXQAAJDZJREFUeAHtnQuwVVUZxxfKS+QlIKJg8pSXKJQ8VbxqqKBjJmlaUxra5KTWZPhoSoXR1DR10lIZHUsrRZMJy8hSlLhAoJiIFSqKegUUBHkICEgZ/2X7uO/hnHP3Pufsc/ZZ67dmzr377L3W2uv7ffv+73rvZqNGjfrYECAAAQh4SGAvD23GZAhAAAKWAALIgwABCHhLAAH01vUYDgEIIIA8AxCAgLcEEEBvXY/hEIAAAsgzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLAAH01vUYDgEIIIA8AxCAgLcEEEBvXY/hEIAAAsgzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLAAH01vUYDgEIIIA8AxCAgLcEEEBvXY/hEIAAAs |
gzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLAAH01vUYDgEIIIA8AxCAgLcEEEBvXY/hEIAAAsgzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLAAH01vUYDgEIIIA8AxCAgLcEEEBvXY/hEIAAAsgzAAEIeEsAAfTW9RgOAQgggDwDEICAtwQQQG9dj+EQgAACyDMAAQh4SwAB9Nb1GA4BCCCAPAMQgIC3BBBAb12P4RCAAALIMwABCHhLoLm3lmN4LAIffvih0ee
}
},
"cell_type": "markdown",
"id": "cf69bb3f-94e6-4dba-92cd-ce08df117d67",
"metadata": {},
"source": [
"
"\n",
"\n",
"Sklearn based models are slightly finicky to get into a suitable onnx format. By default most tree based models will export into something that looks like this: \n",
"\n",
"\n",
"\n",
"\n",
"\n",
"Processing such nodes can be difficult and error prone. It would be much better if the operations of the tree were represented as a proper graph, possibly ... like this: \n",
"\n",
"\n",
"\n",
"\n",
"\n",
"This notebook showcases how to do that using the `hummingbird-ml` python package ! "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "95613ee9",
"metadata": {},
"outputs": [],
"source": [
"
"try:\n",
"
" |
import google.colab\n",
" |
import subprocess\n",
" |
import sys\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"ezkl\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"onnx\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"hummingbird-ml\"])\n",
"\n",
"
"except:\n",
" pass\n",
"\n",
"\n",
"
"\n",
"
" |
import json\n",
" |
import numpy as np\n",
"from sklearn.datasets |
import load_iris\n",
"from sklearn.model_selection |
import train_test_split\n",
"from sklearn.tree |
import DecisionTreeClassifier as De\n",
"from hummingbird.ml |
import convert\n",
" |
import torch\n",
" |
import ezkl\n",
" |
import os\n",
"\n",
"\n",
"\n",
"iris = load_iris()\n",
"X, y = iris.data, iris.target\n",
"X = X.astype(np.float32)\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y)\n",
"clr = De()\n",
"clr.fit(X_train, y_train)\n",
"\n",
"circuit = convert(clr, \"torch\", X_test[:1]).model\n",
"\n",
"\n",
"\n"
]
},
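{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, the converted model should agree with the original sklearn tree. Hummingbird's container (before taking `.model`) keeps the sklearn-style `predict` API, so we can compare predictions directly:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# the container wraps the torch module with a sklearn-style API\n",
"hb_container = convert(clr, \"torch\", X_test[:1])\n",
"assert (hb_container.predict(X_test) == clr.predict(X_test)).all()\n"
]
},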
{
"cell_type": "code",
"execution_count": null,
"id": "b37637c4",
"metadata": {},
"outputs": [],
"source": [
"model_path = os.path.join('network.onnx')\n",
"compiled_model_path = os.path.join('network.compiled')\n",
"pk_path = os.path.join('test.pk')\n",
"vk_path = os.path.join('test.vk')\n",
"settings_path = os.path.join('settings.json')\n",
"\n",
"witness_path = os.path.join('witness.json')\n",
"data_path = os.path.join('input.json')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "82db373a",
"metadata": {},
"outputs": [],
"source": [
"\n",
"\n",
"
"
"\n",
"
"shape = X_train.shape[1:]\n",
"x = torch.rand(1, *shape, requires_grad=True)\n",
"torch_out = circuit(x)\n",
"
"torch.onnx.export(circuit,
"
" x,\n",
"
" \"network.onnx\",\n",
" export_params=True,
" o |
pset_version=10,
" do_constant_folding=True,
" input_names=['input'],
" output_names=['output'],
" dynamic_axes={'input': {0: 'batch_size'},
" 'output': {0: 'batch_size'}})\n",
"\n",
"d = ((x).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_shapes=[shape],\n",
" input_data=[d],\n",
" output_data=[((o).detach().numpy()).reshape([-1]).tolist() for o in torch_out])\n",
"\n",
"
"json.dump(data, open(\"input.json\", 'w'))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5e374a2",
"metadata": {},
"outputs": [],
"source": [
"!RUST_LOG=trace\n",
"
"res = ezkl.gen_settings(model_path, settings_path)\n",
"assert res == True\n",
"\n",
"res = ezkl.calibrate_settings(data_path, model_path, settings_path, \"resources\")\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"cal_data = {\n",
" \"input_data\": [(torch.rand(20, *shape)).flatten().tolist()],\n",
"}\n",
"\n",
"cal_path = os.path.join('val_data.json')\n",
"
"with open(cal_path, \"w\") as f:\n",
" json.dump(cal_data, f)\n",
"\n",
"res = ezkl.calibrate_settings(cal_path, model_path, settings_path, \"resources\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3aa4f090",
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b74dcee",
"metadata": {},
"outputs": [],
"source": [
"
"res = ezkl.get_srs( settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18c8b7c7",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.gen_witness(data_path, compiled_model_path, witness_path)\n",
"assert os.path.isfile(witness_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b1c561a8",
"metadata": {},
"outputs": [],
"source": [
"\n",
"
"
"
"
"\n",
"\n",
"\n",
"res = ezkl.setup(\n",
" compiled_model_path,\n",
" vk_path,\n",
" pk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"assert os.path.isfile(vk_path)\n",
"assert os.path.isfile(pk_path)\n",
"assert os.path.isfile(settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c384cbc8",
"metadata": {},
"outputs": [],
"source": [ |
"
"\n",
"\n",
"proof_path = os.path.join('test.pf')\n",
"\n",
"res = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \n",
" \"single\",\n",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "76f00d41",
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
" vk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"print(\"verified\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "n8QlFzjPRIGN"
},
"source": [
"
"\n",
"**Learning Objectives**\n",
"1. Learn some basic AI/ML techniques by training a toy model in pytorch to perform classification\n",
"2. Convert the toy model into zk circuit with ezkl to do provable inference\n",
"3. Create a solidity verifier and deploy it on Remix (you can deploy it however you like but we will use Remix as it's quite easy to setup)\n",
"\n",
"\n",
"**Important Note**: You might want to avoid calling \"Run All\". There's some file locking issue with Colab which can cause weird bugs. To mitigate this issue you should run cell by cell on Colab."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dx81GOIySIpa"
},
"source": [
"
"\n",
"For this demo we will use a toy data set called the Iris dataset to demonstrate how training can be performed. The Iris dataset is a collection of Iris flowers and is one of the earliest dataset used to validate classification methodologies.\n",
"\n",
"[More info in the dataset](https:
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JhHE2WMvS9NP"
},
"source": [
"First, we will need to |
import all the various dependencies required to train the model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "gvQ5HL1bTDWF"
},
"outputs": [],
"source": [
" |
import pandas as pd\n",
"from sklearn.datasets |
import load_iris\n",
"from sklearn.model_selection |
import train_test_split\n",
"from sklearn.metrics |
import accuracy_score, precision_score, recall_score\n",
" |
import numpy as np\n",
" |
import torch\n",
" |
import torch.nn as nn\n",
" |
import torch.nn.functional as F\n",
"from torch.autograd |
import Variable\n",
" |
import tqdm"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Op9SHfZHUkaR"
},
"source": [
"Inspect the dataset. Note that for the Iris dataset we have 3 targets.\n",
"\n",
"0 = Iris-setosa\n",
"\n",
"1 = Iris-versicolor\n",
"\n",
"2 = Iris-virginica"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
"height": 424
},
"id": "C4XXA1hoU30c",
"outputId": "4fbd47ec-88d1-4ef7-baee-3e3894cc29db"
},
"outputs": [],
"source": [
"iris = load_iris()\n",
"dataset = pd.DataFrame(\n",
" data= np.c_[iris['data'], iris['target']],\n",
" columns= iris['feature_names'] + ['target'])\n",
"dataset"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "I8RargmGTWN2"
},
"source": [
"Next, we can begin defining the neural net model. For this dataset we will use a small fully connected neural net.\n",
"\n",
"<br />\n",
"\n",
"**Note:**\n",
"For the 1st layer we use 4x20, because there are 4 features we want as inputs. After which we add a ReLU.\n",
"\n",
"For the 2nd layer we use 20x20, then add a ReLU.\n",
"\n",
"And for the last layer we use 20x3, because there are 3 classes we want to classify, then add a ReLU.\n",
"\n",
"The last ReLU function gives us an array of 3 elements where the position of the largest value gives us the target that we want to classify.\n",
"\n",
"For example, if we get [0, 0.001, 0.002] as the output of the last ReLU. As, 0.002 is the largest value, the inferred value is 2.\n",
"\n",
"\n",
":\n",
"
" def __init__(self):\n",
" super(Model, self).__init__()\n",
" self.fc1 = nn.Linear(4, 20)\n",
" self.fc2 = nn.Linear(20, 20)\n",
" self.fc3 = nn.Linear(20, 3)\n",
" self.relu = nn.ReLU()\n",
"\n",
" def forward(self, x):\n",
" x = self.fc1(x)\n",
" x = self.relu(x)\n",
" x = self.fc2(x)\n",
" x = self.relu(x)\n",
" x = self.fc3(x)\n",
" x = self.relu(x)\n",
"\n",
" return x\n",
"\n",
"
"model = Model()"
]
},
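{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick illustration of the classification rule described above, `torch.argmax` picks the position of the largest output:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# the position of the largest value is the inferred class\n",
"print(torch.argmax(torch.tensor([0.0, 0.001, 0.002])))  # tensor(2)\n"
]
},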
{
"cell_type": "markdown",
"metadata": {
"id": "SfC03XLNXDPZ"
},
"source": [
"We will now need to split the dataset into a training set and testing set for ML. This is done fairly easily with the `train_test_split` helper function from sklearn."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "agmbEdmfUO1-",
"outputId": "87766edd-50db-48af-aa5d-3f4fc164f8b7"
},
"outputs": [],
"source": [
"train_X, test_X, train_y, test_y = train_test_split(\n",
" dataset[dataset.columns[0:4]].values,
" dataset.target,
" test_size=0.2
")\n",
"\n",
"
"
"
"print(\"train_y: \", train_y)\n",
"print(\"test_y: \", test_y)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "_FrQXhAGZGS3"
},
"source": [
"We can now define the parameters for training, we will use the [Cross Entropy Loss](https:
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "9PjADXnuXbdk",
"outputId": " |
81602926-c386-4f68-a9ee-ae2d5837fe47"
},
"outputs": [],
"source": [
"
"loss_fn = nn.CrossEntropyLoss()\n",
"\n",
"
"optimizer = torch.optim.SGD(model.parameters(), lr=0.01)\n",
"\n",
"\n",
"
"EPOCHS = 800\n",
"\n",
"
"train_X = Variable(torch.Tensor(train_X).float())\n",
"test_X = Variable(torch.Tensor(test_X).float())\n",
"train_y = Variable(torch.Tensor(train_y.values).long())\n",
"test_y = Variable(torch.Tensor(test_y.values).long())\n",
"\n",
"\n",
"loss_list = np.zeros((EPOCHS,))\n",
"accuracy_list = np.zeros((EPOCHS,))\n",
"\n",
"\n",
"
"for epoch in tqdm.trange(EPOCHS):\n",
"\n",
"
" predicted_y = model(train_X)\n",
"\n",
"
" loss = loss_fn(predicted_y, train_y)\n",
"\n",
"
" loss_list[epoch] = loss.item()\n",
"\n",
"
" optimizer.zero_grad()\n",
" loss.backward()\n",
" optimizer.step()\n",
"\n",
"
"
" with torch.no_grad():\n",
" y_pred = model(test_X)\n",
" correct = (torch.argmax(y_pred, dim=1) == test_y).type(torch.FloatTensor)\n",
" accuracy_list[epoch] = correct.mean()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
"height": 546
},
"id": "2fHJAgvwboCe",
"outputId": "513c73b7-2663-4bb3-f7b4-cae208940070"
},
"outputs": [],
"source": [
"
"\n",
"
" |
import matplotlib.pyplot as plt\n",
"\n",
"plt.style.use('ggplot')\n",
"\n",
"\n",
"fig, (ax1, ax2) = plt.subplots(2, figsize=(12, 6), sharex=True)\n",
"\n",
"ax1.plot(accuracy_list)\n",
"ax1.set_ylabel(\"Accuracy\")\n",
"ax2.plot(loss_list)\n",
"ax2.set_ylabel(\"Loss\")\n",
"ax2.set_xlabel(\"epochs\");"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "djB-UtvgYbF2"
},
"source": [
"
"\n",
"**Exercise:** The model provided is very simplistic, what are other ways the model can be improved upon?"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JgtwrbMZcgla"
},
"source": [
"
"\n",
"Now that we have the Neural Network trained, we can use ezkl to easily ZK our model.\n",
"\n",
"To proceed we will now need to install `ezkl`\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "C_YiqknhdDwN"
},
"outputs": [],
"source": [
"
"try:\n",
" |
import google.colab\n",
" |
import subprocess\n",
" |
import sys\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"ezkl\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"onnx\"])\n",
"\n",
"
"except:\n",
" pass\n",
"\n",
" |
import os\n",
" |
import json\n",
" |
import ezkl"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-b_z_d2FdVTB"
},
"source": [
"Next, we will need to export the neural network to a `.onnx` file. ezkl reads this `.onnx` file and converts it into a circuit which then allows you to generate proofs as well as verify proofs"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "YeKWP0tFeCpq"
},
"outputs": [],
"source": [
"
"\n",
"model_path = os.path.join('network.onnx')\n",
"compiled_model_path = os.path.join('network.ezkl')\n",
"pk_path = os.path.join('test.pk')\n",
"vk_path = os.path.join('test.vk')\n",
"settings_path = os.path.join('settings.json')\n",
"\n",
"witness_path = os.path.join('witness.json')\n",
"data_path = os.path.join('input.json')\n",
"cal_data_path = os.path.join('cal_data.json')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "cQeNw_qndQ8g",
"outputId": "2d40f14e-7fbb-4377-e9ee-0e7678edb2ce"
},
"outputs": [],
"source": [
"
"\n",
"
"x = test_X[0].reshape(1, 4)\n",
"\n",
"
"model.eval()\n",
"\n",
"
"torch.onnx.export(model,
" x,
" model_path,
" export_params=True,
" opset_version=10,
" do_constant_folding=True,
" input_names = ['input'],
" output_names = ['output'],
" dynamic_axes={'input' : {0 : 'batch_size'},
" 'output' : {0 : 'batch_size'}})\n",
"\n", |
"data_array = ((x).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_data = [data_array])\n",
"\n",
"
"json.dump(data, open(data_path, 'w'))\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9P4x79hIeiLO"
},
"source": [
"After which we can proceed to generate the settings file for `ezkl` and run calibrate settings to find the optimal settings for `ezkl`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "cY25BIyreIX8"
},
"outputs": [],
"source": [
"!RUST_LOG=trace\n",
"
"res = ezkl.gen_settings(model_path, settings_path)\n",
"assert res == True\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"cal_data = dict(input_data = test_X.flatten().tolist())\n",
"\n",
"
"json.dump(data, open(cal_data_path, 'w'))\n",
"\n",
"
"
"res = ezkl.calibrate_settings(cal_data_path, model_path, settings_path, \"resources\", max_logrows = 12, scales = [2])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MFmPMBQ1jYao"
},
"source": [
"Next, we will compile the model. The compilation step allow us to generate proofs faster."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "De5XtpGUerkZ"
},
"outputs": [],
"source": [
"res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"assert res == True"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UbkuSVKljmhA"
},
"source": [
"Before we can setup the circuit params, we need a SRS (Structured Reference |
String). The SRS is used to generate the proofs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "amaTcWG6f2GI"
},
"outputs": [],
"source": [
"res = ezkl.get_srs( settings_path)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Y92p3GhVj1Jd"
},
"source": [
"Now run setup, this will generate a proving key (pk) and verification key (vk). The proving key is used for proving while the verification key is used for verificaton."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "fdsteit9jzfK"
},
"outputs": [],
"source": [
"res = ezkl.setup(\n",
" compiled_model_path,\n",
" vk_path,\n",
" pk_path,\n",
" )\n",
"\n",
"\n",
"assert res == True\n",
"assert os.path.isfile(vk_path)\n",
"assert os.path.isfile(pk_path)\n",
"assert os.path.isfile(settings_path)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "QYlqpP3jkExm"
},
"source": [
"Now, we can generate a proof and verify the proof as a sanity check. We will use the \"evm\" transcript. This will allow us to provide proofs to the EVM."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "yoz5Vks5kaHI"
},
"outputs": [],
"source": [
"
"\n",
"
"witness_path = os.path.join('witness.json')\n",
"\n",
"res = ezkl.gen_witness(data_path, compiled_model_path, witness_path)\n",
"assert os.path.isfile(witness_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "eKkFBZX1kBdE",
"outputId": "48c67e19-a491-4515-f09c-a560df8c3834" |
},
"outputs": [],
"source": [
"
"\n",
"proof_path = os.path.join('proof.json')\n",
"\n",
"proof = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \"single\",\n",
" )\n",
"\n",
"print(proof)\n",
"assert os.path.isfile(proof_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "DuuH-qcOkQf1",
"outputId": "375fdd63-1c0b-4c3c-eddd-f890a752923c"
},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
" vk_path,\n",
" )\n",
"\n",
"assert res == True\n",
"print(\"verified\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "TOSRigalkwH-"
},
"source": [
"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "flrg3NOGwsJh"
},
"source": [
"\n",
"
"Now that we have the circuit setup, we can proceed to deploy the verifier onchain.\n",
"\n",
"We will need to setup `solc=0.8.20` for this."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "CVqMeMYqktvl",
"outputId": "60ef81a5-867e-4a27-a0a1-0a492244e7f7"
},
"outputs": [],
"source": [
"
"try:\n",
" |
import google.colab\n",
" |
import subprocess\n",
" |
import sys\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"solc-select\"])\n",
" !solc-select install 0.8.20\n",
" !solc-select use 0.8.20\n",
" !solc --version\n",
"\n",
"
"except:\n",
" pass"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HRHvkMjVlfWU"
},
"source": [
"With solc in our environment we can now create the evm verifier."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "gYlw20VZkva7"
},
"outputs": [],
"source": [
"sol_code_path = os.path.join('Verifier.sol')\n",
"abi_path = os.path.join('Verifier.abi')\n",
"\n",
"res = ezkl.create_evm_verifier(\n",
" vk_path,\n",
" \n",
" settings_path,\n",
" sol_code_path,\n",
" abi_path\n",
" )\n",
"\n",
"assert res == True\n",
"assert os.path.isfile(sol_code_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "jQSAVMvxrBQD",
"outputId": "691484fa-ef21-4b40-e179-9d2d90abd3d0"
},
"outputs": [],
"source": [
"onchain_input_array = []\n",
"\n",
"
"
"formatted_output = \"[\"\n",
"for i, value in enumerate(proof[\"instances\"]):\n",
" for j, field_element in enumerate(value):\n",
" onchain_input_array.append(ezkl.felt_to_big_endian(field_element))\n",
" formatted_output += '\"' + str(onchain_input_array[-1]) + '\"'\n",
" if j != len(value) - 1:\n",
" formatted_output += \", \"\n",
" if i != len(proof[\"instances\"]) - 1:\n",
" formatted_output += \", \"\n",
"formatted_output += \" |
]\"\n",
"\n",
"
"
"
"print(\"pubInputs: \", formatted_output)\n",
"print(\"proof: \", proof[\"proof\"])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "zrzPxPvZmX9b"
},
"source": [
"We will exit colab for the next steps. At the left of colab you can see a folder icon. Click on that.\n",
"\n",
"\n",
"You should see a `Verifier.sol`. Right-click and save it locally.\n",
"\n",
"Now go to [https:
"\n",
"Create a new file within remix and copy the verifier code over.\n",
"\n",
"Finally, compile the code and deploy. For the demo you can deploy to the test environment within remix.\n",
"\n",
"If everything works, you would have deployed your verifer onchain! Copy the values in the cell above to the respective fields to test if the verifier is working.\n",
"\n",
"**Note that right now this setup accepts random values!**\n",
"\n",
"This might not be great for some applications. For that we will want to use a data attested verifier instead. [See this tutorial.](https:
"\n",
"
"\n",
"If you have followed the whole tutorial, you would have deployed a neural network inference model onchain! That's no mean feat!"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
{
"cells": [
{
"cell_type": "markdown",
"id": "5fe9feb6-2b35-414a-be9d-771eabdbb0dc",
"metadata": {
"id": "5fe9feb6-2b35-414a-be9d-771eabdbb0dc"
},
"source": [
"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "nGcl_1sltpRq",
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "nGcl_1sltpRq",
"outputId": "642693ac-970f-4ad9-80f5-e58c69f04ee9"
},
"outputs": [],
"source": [
"!pip install torch-scatter torch-sparse torch-geometric"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1005303a-cd48-4766-9c43-2116f94ed381",
"metadata": {
"id": "1005303a-cd48-4766-9c43-2116f94ed381"
},
"outputs": [],
"source": [
" |
import numpy as np\n",
"\n",
" |
import torch\n",
"from torch |
import nn\n",
" |
import torch.nn.functional as F\n",
"\n",
"
"try:\n",
"
" |
import google.colab\n",
" |
import subprocess\n",
" |
import sys\n",
" for e in [\"ezkl\", \"onnx\", \"torch\", \"torchvision\", \"torch-scatter\", \"torch-sparse\", \"torch-geometric\"]:\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", e])\n",
"\n",
"
"except:\n",
" pass"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "89e5732e-a97b-445e-9174-69689e37e72c",
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "89e5732e-a97b-445e-9174-69689e37e72c",
"outputId": "24049b0a-439b-4327-a829-4b4045490f0f"
},
"outputs": [],
"source": [
" |
import torch\n",
"from torch_geometric.data |
import Data\n",
"\n",
"edge_index = torch.tensor([[2, 1, 3],\n",
" [0, 0, 2]], dtype=torch.long)\n",
"x = torch.tensor([[1], [1], [1]], dtype=torch.float)\n",
"\n",
"data = Data(x=x, edge_index=edge_index)\n",
"data"
]
},
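{
"cell_type": "markdown",
"metadata": {},
"source": [
"`edge_index` has shape `[2, num_edges]`: the first row holds the source node of each edge and the second row holds the target node, so the tensor above encodes the directed edges 2 -> 0, 1 -> 0 and 3 -> 2."
]
},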
{
"cell_type": "code",
"execution_count": null,
"id": "73b34e81-63cb-44b0-9f95-f8490e844676",
"metadata": {
"id": "73b34e81-63cb-44b0-9f95-f8490e844676"
},
"outputs": [],
"source": [
" |
import torch\n",
" |
import math\n",
"from torch_geometric.nn |
import MessagePassing\n",
"from torch.nn.modules.module |
import Module\n",
"\n",
"def glorot(tensor):\n",
" if tensor is not None:\n",
" stdv = math.sqrt(6.0 / (tensor.size(-2) + tensor.size(-1)))\n",
" tensor.data.uniform_(-stdv, stdv)\n",
"\n",
"\n",
"def zeros(tensor):\n",
" if tensor is not None:\n",
" tensor.data.fill_(0)\n",
"\n",
" |
class GCNConv(Module):\n",
" def __init__(self, in_channels, out_channels):\n",
" super(GCNConv, self).__init__()
" self.lin = torch.nn.Linear(in_channels, out_channels)\n",
"\n",
" self.reset_parameters()\n",
"\n",
" def reset_parameters(self):\n",
" glorot(self.lin.weight)\n",
" zeros(self.lin.bias)\n",
"\n",
" def forward(self, x, adj_t, deg):\n",
" x = self.lin(x)\n",
" adj_t = self.normalize_adj(adj_t, deg)\n",
" x = adj_t @ x\n",
"\n",
" return x\n",
"\n",
" def normalize_adj(self, adj_t, deg):\n",
" deg.masked_fill_(deg == 0, 1.)\n",
" deg_inv_sqrt = deg.pow_(-0.5)\n",
" deg_inv_sqrt.masked_fill_(deg_inv_sqrt == 1, 0.)\n",
" adj_t = adj_t * deg_inv_sqrt.view(-1, 1)
" adj_t = adj_t * deg_inv_sqrt.view(1, -1)
"\n",
" return adj_t"
]
},
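{
"cell_type": "markdown",
"metadata": {},
"source": [
"`normalize_adj` implements the symmetric normalization used in GCNs: with adjacency matrix $A$ and diagonal degree matrix $D$, it computes\n",
"\n",
"$$\\hat{A} = D^{-1/2} A D^{-1/2},$$\n",
"\n",
"scaling the rows and then the columns of $A$ by the inverse square roots of the node degrees, so that each layer computes $\\hat{A} X W$."
]
},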
{
"cell_type": "markdown",
"id": "ae70bc34-def7-40fd-9558-2500c6f29323",
"metadata": {
"id": "ae70bc34-def7-40fd-9558-2500c6f29323"
},
"source": [
"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7ca117a1-7473-42a6-be95-dc314eb3e251",
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "7ca117a1-7473-42a6-be95-dc314eb3e251",
"outputId": "edacee52-8a88-4c02-9a71-fd094e89c7b9"
},
"outputs": [],
"source": [
" |
import os\n",
" |
import os.path as osp\n",
" |
import torch\n",
" |
import torch.nn.functional as F\n",
"from torch_geometric.datasets |
import Planetoid\n",
" |
import torch_geometric.transforms as T\n",
"\n",
"path = osp.join(os.getcwd(), 'data', 'Cora')\n",
"dataset = Planetoid(path, 'Cora')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "807f4d87-6acc-4cbb-80e4-8eb09feb994c",
"metadata": {
"id": "807f4d87-6acc-4cbb-80e4-8eb09feb994c"
},
"outputs": [],
"source": [
" |
import time\n",
"\n",
"from torch |
import tensor\n",
"from torch.optim |
import Adam\n",
"\n",
"
"num_feat = 10\n",
"\n",
"def run(dataset, model, runs, epochs, lr, weight_decay, early_stopping):\n",
"\n",
" val_losses, accs, durations = [], [], []\n",
" for _ in range(runs):\n",
" data = dataset[0]\n",
" data = data.to(device)\n",
"\n",
" model.to(device).reset_parameters()\n",
" optimizer = Adam(model.parameters(), lr=lr, weight_decay=weight_decay)\n",
"\n",
" if torch.cuda.is_available():\n",
" torch.cuda.synchronize()\n",
"\n",
" t_start = time.perf_counter()\n",
"\n",
" best_val_loss = float('inf')\n",
" test_acc = 0\n",
" val_loss_history = []\n",
"\n",
" for epoch in range(1, epochs + 1):\n",
" train(model, optimizer, data)\n",
" eval_info = evaluate(model, data)\n",
" eval_info['epoch'] = epoch\n",
"\n",
" if eval_info['val_loss'] < best_val_loss:\n",
" best_val_loss = eval_info['val_loss']\n",
" test_acc = eval_info['test_acc']\n",
"\n",
" val_loss_history.append(eval_info['val_loss'])\n",
" if early_stopping > 0 and epoch > epochs
" tmp = tensor(val_loss_history[-(early_stopping + 1):-1])\n",
" if eval_info['val_loss'] > tmp.mean().item():\n",
" break\n",
"\n",
" if torch.cuda.is_available():\n",
" torch.cuda.synchronize()\n",
"\n",
" t_end = time.perf_counter()\n",
"\n",
" val_losses.append(best_val_loss)\n",
" accs.append(test_acc)\n",
" durations.append(t_end - t_start)\n",
"\n",
" loss, acc, duration = tensor(val_loss |
es), tensor(accs), tensor(durations)\n",
"\n",
" print('Val Loss: {:.4f}, Test Accuracy: {:.3f} ± {:.3f}, Duration: {:.3f}'.\n",
" format(loss.mean().item(),\n",
" acc.mean().item(),\n",
" acc.std().item(),\n",
" duration.mean().item()))\n",
"\n",
"\n",
"def train(model, optimizer, data):\n",
" model.train()\n",
" optimizer.zero_grad()\n",
"\n",
" E = data.edge_index.size(1)\n",
" N = data.x.size(0)\n",
" x = data.x[:, :num_feat]\n",
" adj_t = torch.sparse_coo_tensor(data.edge_index, torch.ones(E), size=(N, N)).to_dense().T\n",
" deg = torch.sum(adj_t, dim=1)\n",
" out = model(x, adj_t, deg)\n",
" loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask])\n",
" loss.backward()\n",
" optimizer.step()\n",
"\n",
"\n",
"def evaluate(model, data):\n",
" model.eval()\n",
"\n",
" with torch.no_grad():\n",
"\n",
" E = data.edge_index.size(1)\n",
" N = data.x.size(0)\n",
" x = data.x[:, :num_feat]\n",
" adj_t = torch.sparse_coo_tensor(data.edge_index, torch.ones(E), size=(N, N)).to_dense().T\n",
" deg = torch.sum(adj_t, dim=1)\n",
" logits = model(x, adj_t, deg)\n",
"\n",
" outs = {}\n",
" for key in ['train', 'val', 'test']:\n",
" mask = data['{}_mask'.format(key)]\n",
" loss = F.nll_loss(logits[mask], data.y[mask]).item()\n",
" pred = logits[mask].max(1)[1]\n",
" acc = pred.eq(data.y[mask]).sum().item() / mask.sum().item()\n",
"\n",
" outs['{}_loss'.format(key)] = loss\n",
" outs['{}_acc'.format(key)] = acc\n",
"\n",
" return outs"
]
},
{
"cell_ |
type": "code",
"execution_count": null,
"id": "28b3605e-e6fd-45ff-ae4b-607065f4849c",
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "28b3605e-e6fd-45ff-ae4b-607065f4849c",
"outputId": "b3ea504c-b57c-46d4-b382-aa54c9a4786f"
},
"outputs": [],
"source": [
"runs = 1\n",
"epochs = 200\n",
"lr = 0.01\n",
"weight_decay = 0.0005\n",
"early_stopping = 10\n",
"hidden = 16\n",
"dropout = 0.5\n",
"device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
"\n",
"\n",
" |
class Net(torch.nn.Module):\n",
" def __init__(self, dataset, num_feat):\n",
" super(Net, self).__init__()\n",
"
" self.conv1 = GCNConv(num_feat, hidden)\n",
" self.conv2 = GCNConv(hidden, dataset.num_classes)\n",
"\n",
"\n",
" def reset_parameters(self):\n",
" self.conv1.reset_parameters()\n",
" self.conv2.reset_parameters()\n",
"\n",
" def forward(self, x, adj_t, deg):\n",
" x = F.relu(self.conv1(x, adj_t, deg))\n",
" x = F.dropout(x, p=dropout, training=self.training)\n",
" x = self.conv2(x, adj_t, deg)\n",
" return F.log_softmax(x, dim=1)\n",
"\n",
"model = Net(dataset, num_feat)\n",
"run(dataset, model, runs, epochs, lr, weight_decay, early_stopping)"
]
},
{
"cell_type": "markdown",
"id": "4cc3ffed-74c2-48e3-86bc-a5e51f44a09a",
"metadata": {
"id": "4cc3ffed-74c2-48e3-86bc-a5e51f44a09a"
},
"source": [
"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "92585631-ff39-402e-bd1c-aaebdce682e5",
"metadata": {
"id": "92585631-ff39-402e-bd1c-aaebdce682e5"
},
"outputs": [],
"source": [
" |
import os\n",
" |
import ezkl\n",
"\n",
"\n",
"model_path = os.path.join('network.onnx')\n",
"compiled_model_path = os.path.join('network.compiled')\n",
"pk_path = os.path.join('test.pk')\n",
"vk_path = os.path.join('test.vk')\n",
"settings_path = os.path.join('settings.json')\n",
"\n",
"witness_path = os.path.join('witness.json')\n",
"data_path = os.path.join('input.json')\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d80d3169-cc70-4aee-bdc2-df9a435b3116",
"metadata": {
"id": "d80d3169-cc70-4aee-bdc2-df9a435b3116"
},
"outputs": [],
"source": [
"
"num_node = 5\n",
"\n",
"
"filter_row = []\n",
"filter_col = []\n",
"row, col = dataset[0].edge_index\n",
"for idx in range(row.size(0)):\n",
" if row[idx] < num_node and col[idx] < num_node:\n",
" filter_row.append(row[idx])\n",
" filter_col.append(col[idx])\n",
"filter_edge_index = torch.stack([torch.tensor(filter_row), torch.tensor(filter_col)])\n",
"num_edge = len(filter_row)\n",
"\n",
"\n",
"x = dataset[0].x[:num_node, :num_feat]\n",
"edge_index = filter_edge_index\n",
"\n",
"adj_t = torch.sparse_coo_tensor(edge_index, torch.ones(num_edge), size=(num_node, num_node)).to_dense().T\n",
"deg = torch.sum(adj_t, dim=1)\n"
]
},
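{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that we restrict the graph to `num_node = 5` nodes: the adjacency matrix is passed to the circuit in dense form, so the circuit input grows quadratically with the number of nodes."
]
},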
{
"cell_type": "code",
"execution_count": null,
"id": "46367b2f-951d-403b-9346-e689de0bee3f",
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "46367b2f-951d-403b-9346-e689de0bee3f",
"outputId": "f063bf1b-e518-4fdb-b8ad-507c521acaa3"
},
"outputs": [],
"source": [
" |
import json\n",
"\n",
"
"model.eval()\n",
"model.to('cpu')\n",
"\n",
"
"torch.onnx.export(model,
" (x, adj_t, deg),
" model_path,
" export_params=True,
" opset_version=11,
" do_constant_folding=True,
" input_names = ['x', 'edge_index'],
" output_names = ['output'])
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9e6da242-540e-48dc-bc20-d08fcd192af4",
"metadata": {
"id": "9e6da242-540e-48dc-bc20-d08fcd192af4"
},
"outputs": [],
"source": [
"torch_out = model(x, adj_t, deg)\n",
"x_shape = x.shape\n",
"adj_t_shape=adj_t.shape\n",
"deg_shape=deg.shape\n",
"\n",
"x = ((x).detach().numpy()).reshape([-1]).tolist()\n",
"adj_t = ((adj_t).detach().numpy()).reshape([-1]).tolist()\n",
"deg = ((deg).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_shapes=[x_shape, adj_t_shape, deg_shape],\n",
" input_data=[x, adj_t, deg],\n",
" output_data=[((torch_out).detach().numpy()).reshape([-1]).tolist()])\n",
"json.dump(data, open(data_path, 'w'))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3393a884-7a14-435e-bb9e-4fa4fcbdc76b",
"metadata": {
"id": "3393a884-7a14-435e-bb9e-4fa4fcbdc76b",
"tags": []
},
"outputs": [],
"source": [
"!RUST_LOG=trace\n",
" |
import ezkl\n",
"\n",
"run_args = ezkl.PyRunArgs()\n",
"run_args.input_scale = 5\n",
"run_args.param_scale = 5\n",
"
"res = ezkl.gen_settings(model_path, settings_path, py_run_args=run_args)\n",
"assert res == True\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"res = ezkl.calibrate_settings(data_path, model_path, settings_path, \"resources\")\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8f86fceb",
"metadata": {
"id": "8f86fceb"
},
"outputs": [],
"source": [
"res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3b55c925",
"metadata": {
"id": "3b55c925"
},
"outputs": [],
"source": [
"
"res = ezkl.get_srs( settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d6478bab",
"metadata": {
"id": "d6478bab"
},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.gen_witness(data_path, compiled_model_path, witness_path)\n",
"assert os.path.isfile(witness_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b500c1ba",
"metadata": {
"id": "b500c1ba"
},
"outputs": [],
"source": [
"
"
"
"
"\n",
"\n",
"\n",
"res = ezkl.setup(\n",
" compiled_model_path,\n",
" vk_path,\n",
" pk_path,\n",
" )\n",
"\n",
"assert res == True\n",
"assert os.path.isfile(vk_path)\n",
"assert os.path.isfile(pk_path)\n", |
"assert os.path.isfile(settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ae152a64",
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "ae152a64",
"outputId": "599cc9b8-ee85-407e-f0da-b2360634d2a8"
},
"outputs": [],
"source": [
"
"\n",
"\n",
"proof_path = os.path.join('test.pf')\n",
"\n",
"res = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \"single\",\n",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a2548b00",
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "a2548b00",
"outputId": "e2972113-c079-4cb2-bfc5-6f7ad2842195"
},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
" vk_path,\n",
" )\n",
"\n",
"assert res == True\n",
"print(\"verified\")"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3.11.4 ('.env': venv)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
},
"vscode": {
"interpreter": {
"hash": "af2b032f4d5a009ff33cd3ba5ac25dedfd7d71c9736fbe82aa90983ec2fc3628"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}