"image/png": "<base64 PNG data omitted (plot of discriminator and generator loss curves)>",
"text/plain": [
"<Figure size 800x800 with 1 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from matplotlib.pyplot |
import plot, legend\n",
"figure(figsize=(8,8))\n",
"plot(dlosses[100:], label=\"Discriminator Loss\")\n",
"plot(glosses[100:], label=\"Generator Loss\")\n",
"legend()"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[1m1/1\u001b[0m \u001b[32mββββββββββββββββββββ\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 32ms/step\n",
"\u001b[1m1/1\u001b[0m \u001b[32mββββββββββββββββββββ\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 47ms/step\n",
"\u001b[1m1/1\u001b[0m \u001b[32mββββββββββββββββββββ\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 9ms/step\n",
"\u001b[1m1/1\u001b[0m \u001b[32mββββββββββββββββββββ\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 9ms/step\n",
"\u001b[1m1/1\u001b[0m \u001b[32mββββββββββββββββββββ\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 9ms/step\n",
"\u001b[1m1/1\u001b[0m \u001b[32mββββββββββββββββββββ\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 8ms/step\n",
"\u001b[1m1/1\u001b[0m \u001b[32mββββββββββββββββββββ\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 8ms/step\n",
"\u001b[1m1/1\u001b[0m \u001b[32mββββββββββββββββββββ\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 8ms/step\n",
"\u001b[1m1/1\u001b[0m \u001b[32mββββββββββββββββββββ\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 8ms/step\n",
"\u001b[1m1/1\u001b[0m \u001b[32mββββββββββββββββββββ\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 8ms/step\n"
]
},
{
"data": {
"text/plain": [ |
"<matplotlib.image.AxesImage at 0x34f32bd70>"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAakAAAGiCAYAAABd6zmYAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjguMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/H5lhTAAAACXBIWXMAAA9hAAAPYQGoP6dpAAEAAElEQVR4nOz9WYytWZqehz1r+oc9x3zmnCu7pp67i01R1kSZsARfGIZAwIDBC18ZIGGhr9i0QKJNwpQF2KAB8oK3vmteWRYkUIBbalKim2Srh+rqqsqqzMo8J88UcSJiR+zxn9bgi+/f+1SDbLJk092V9FnAQeaJExF77/Wv9Y3v+34qpZR4s96sN+vNerPerB/Dpf+k38Cb9Wa9WW/Wm/Vm/VHrjZN6s96sN+vNerN+bNcbJ/VmvVlv1pv1Zv3YrjdO6s16s96sN+vN+rFdb5zUm/VmvVlv1pv1Y7veOKk36816s96sN+vHdr1xUm/Wm/VmvVlv1o/teuOk3qw36816s96sH9v1xkm9WW/Wm/VmvVk/tuuNk3qz3qw36816s35s15+Yk/o7f+fv8Pbbb1MUBd/4xjf4p
"text/plain": [
"<Figure size 640x480 with 1 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"x = []\n",
"for i in range(10):\n",
" x.append(np.concatenate(gm.predict(np.random.normal(size=(10,ZDIM))), axis=1))\n",
"imshow(np.concatenate(x, axis=0))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we export the _generator_ to onnx"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\ |
"><span style=\"font-weight: bold\">Model: \"sequential\"</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1mModel: \"sequential\"\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">βββββββββββββββββββββββββββββββββββ³βββββββββββββββββββββββββ³ββββββββββββββββ\n",
"β<span style=\"font-weight: bold\"> Layer (type) </span>β<span style=\"font-weight: bold\"> Output Shape </span>β<span style=\"font-weight: bold\"> Param
"β‘βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©\n",
"β dense_2 (<span style=\"color:
"βββββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββββΌββββββββββββββββ€\n",
"β batch_normalization_3 β (<span style=\"color:
"β (<span style=\"color:
"βββββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββββΌββββββββββββββββ€\n",
"β elu_3 (<span style=\"color:
"βββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββ΄ββββββββββββββββ\n",
"</pre>\n"
],
"text/plain": [
"βββββββββββββββββββββββββββββββββββ³βββββββββββββββββββββββββ³ββββββββββββββββ\n",
"β\u001b[1m \u001b[0m\u001b[1mLayer (type) \u001b[0m\u001b[1m \u001b[0mβ\u001b[1m \u001b[0m\u001b[1mOutput Shape \u001b[ |
0m\u001b[1m \u001b[0mβ\u001b[1m \u001b[0m\u001b[1m Param
"β‘βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©\n",
"β dense_2 (\u001b[38;5;33mDense\u001b[0m) β (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m3136\u001b[0m) β \u001b[38;5;34m316,736\u001b[0m β\n",
"βββββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββββΌββββββββββββββββ€\n",
"β batch_normalization_3 β (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m3136\u001b[0m) β \u001b[38;5;34m12,544\u001b[0m β\n",
"β (\u001b[38;5;33mBatchNormalization\u001b[0m) β β β\n",
"βββββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββββΌββββββββββββββββ€\n",
"β elu_3 (\u001b[38;5;33mELU\u001b[0m) β (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m3136\u001b[0m) β \u001b[38;5;34m0\u001b[0m β\n",
"βββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββ΄ββββββββββββββββ\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Total params: </span><span style=\"color:
"</pre>\n"
],
"text/plain": [
"\u001b[1m Total params: \u001b[0m\u001b[38;5;34m329,280\u001b[0m (1.26 MB)\n"
]
},
"metadata": {},
"output_type": " |
display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Trainable params: </span><span style=\"color:
"</pre>\n"
],
"text/plain": [
"\u001b[1m Trainable params: \u001b[0m\u001b[38;5;34m323,008\u001b[0m (1.23 MB)\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Non-trainable params: </span><span style=\"color:
"</pre>\n"
],
"text/plain": [
"\u001b[1m Non-trainable params: \u001b[0m\u001b[38;5;34m6,272\u001b[0m (24.50 KB)\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">Model: \"sequential_1\"</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1mModel: \"sequential_1\"\u001b[0m\n"
]
},
"metadata": {}, |
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">βββββββββββββββββββββββββββββββββββ³βββββββββββββββββββββββββ³ββββββββββββββββ\n",
"β<span style=\"font-weight: bold\"> Layer (type) </span>β<span style=\"font-weight: bold\"> Output Shape </span>β<span style=\"font-weight: bold\"> Param
"β‘βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©\n",
"β reshape_1 (<span style=\"color:
"βββββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββββΌββββββββββββββββ€\n",
"β conv2d_transpose β (<span style=\"color:
"β (<span style=\"color:
"βββββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββββΌββββββββββββββββ€\n",
"β batch_normalization_4 β (<span style=\"color:
"β (<span style=\"color:
"βββββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββββΌββββββββββββββββ€\n",
"β elu_4 (<span style=\"color:
"βββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββ΄ββββββββββββββββ\n",
"</pre>\n"
],
"text/plain": [
"βββββββββββββββββββββββββββββββββββ³βββββββββββββββββββββββββ³ββββββββββββββββ\n",
"β\u001b[1m \u001b[0m\u001b[1mLayer (type) \u001b[0m\u001b[1m \u001b[0mβ\u001b[1m \u001b[0m\u001b[1mOutput Shape \u001b[0m\u001b[1m \u001b[0mβ\u001b[1m \u001b[0m\u001b[1m Param |
"β‘βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©\n",
"β reshape_1 (\u001b[38;5;33mReshape\u001b[0m) β (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m7\u001b[0m, \u001b[38;5;34m7\u001b[0m, \u001b[38;5;34m64\u001b[0m) β \u001b[38;5;34m0\u001b[0m β\n",
"βββββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββββΌββββββββββββββββ€\n",
"β conv2d_transpose β (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m14\u001b[0m, \u001b[38;5;34m14\u001b[0m, \u001b[38;5;34m128\u001b[0m) β \u001b[38;5;34m204,928\u001b[0m β\n",
"β (\u001b[38;5;33mConv2DTranspose\u001b[0m) β β β\n",
"βββββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββββΌββββββββββββββββ€\n",
"β batch_normalization_4 β (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m14\u001b[0m, \u001b[38;5;34m14\u001b[0m, \u001b[38;5;34m128\u001b[0m) β \u001b[38;5;34m512\u001b[0m β\n",
"β (\u001b[38;5;33mBatchNormalization\u001b[0m) β β β\n",
"βββββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββββΌββββββββββββββββ€\n",
"β elu_4 (\u001b[38;5;33mELU\u001b[0m) β (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m14\u001b[0m, \u001b[38;5;34m14\u001b[0m, \u001b[38;5;34m128\u001b[0m) β \u001b[38;5;34m0\u001b[0m β\n",
"βββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββ΄ββββββββββββββββ\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-s |
pace:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Total params: </span><span style=\"color:
"</pre>\n"
],
"text/plain": [
"\u001b[1m Total params: \u001b[0m\u001b[38;5;34m205,440\u001b[0m (802.50 KB)\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Trainable params: </span><span style=\"color:
"</pre>\n"
],
"text/plain": [
"\u001b[1m Trainable params: \u001b[0m\u001b[38;5;34m205,184\u001b[0m (801.50 KB)\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Non-trainable params: </span><span style=\"color:
"</pre>\n"
],
"text/plain": [
"\u001b[1m Non-trainable params: \u001b[0m\u001b[38;5;34m256\u001b[0m (1.00 KB)\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": { |
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">Model: \"sequential_2\"</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1mModel: \"sequential_2\"\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">βββββββββββββββββββββββββββββββββββ³βββββββββββββββββββββββββ³ββββββββββββββββ\n",
"β<span style=\"font-weight: bold\"> Layer (type) </span>β<span style=\"font-weight: bold\"> Output Shape </span>β<span style=\"font-weight: bold\"> Param
"β‘βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©\n",
"β conv2d_transpose_1 β (<span style=\"color:
"β (<span style=\"color:
"βββββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββββΌββββββββββββββββ€\n",
"β activation (<span style=\"color:
"βββββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββββΌββββββββββββββββ€\n",
"β reshape_2 (<span style=\"color:
"βββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββ΄ββββββββββββββββ\n",
"</pre>\n"
],
"text/plain": [
"βββββββββββββββββββββββββββββββββββ³βββββββββββββββββββββββββ³ββββ |
ββββββββββββ\n",
"β\u001b[1m \u001b[0m\u001b[1mLayer (type) \u001b[0m\u001b[1m \u001b[0mβ\u001b[1m \u001b[0m\u001b[1mOutput Shape \u001b[0m\u001b[1m \u001b[0mβ\u001b[1m \u001b[0m\u001b[1m Param
"β‘βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©\n",
"β conv2d_transpose_1 β (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m28\u001b[0m, \u001b[38;5;34m28\u001b[0m, \u001b[38;5;34m1\u001b[0m) β \u001b[38;5;34m3,201\u001b[0m β\n",
"β (\u001b[38;5;33mConv2DTranspose\u001b[0m) β β β\n",
"βββββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββββΌββββββββββββββββ€\n",
"β activation (\u001b[38;5;33mActivation\u001b[0m) β (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m28\u001b[0m, \u001b[38;5;34m28\u001b[0m, \u001b[38;5;34m1\u001b[0m) β \u001b[38;5;34m0\u001b[0m β\n",
"βββββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββββΌββββββββββββββββ€\n",
"β reshape_2 (\u001b[38;5;33mReshape\u001b[0m) β (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m28\u001b[0m, \u001b[38;5;34m28\u001b[0m) β \u001b[38;5;34m0\u001b[0m β\n",
"βββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββ΄ββββββββββββββββ\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Total params: </span><span style=\"color: |
"</pre>\n"
],
"text/plain": [
"\u001b[1m Total params: \u001b[0m\u001b[38;5;34m3,201\u001b[0m (12.50 KB)\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Trainable params: </span><span style=\"color:
"</pre>\n"
],
"text/plain": [
"\u001b[1m Trainable params: \u001b[0m\u001b[38;5;34m3,201\u001b[0m (12.50 KB)\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Non-trainable params: </span><span style=\"color:
"</pre>\n"
],
"text/plain": [
"\u001b[1m Non-trainable params: \u001b[0m\u001b[38;5;34m0\u001b[0m (0.00 B)\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"transpose_input for input_0: shape must be rank 4, ignored\n"
]
}
],
"source": |
[
"\n",
" |
import numpy as np\n",
" |
import tf2onnx\n",
" |
import tensorflow as tf\n",
" |
import json\n",
"\n",
"
"gm2 = tf.keras.models.Sequential(gm.layers[0:4])\n",
"
"gm2.summary()\n",
"gm2.output_names=['output']\n",
"\n",
"gm3 = tf.keras.models.Sequential(gm.layers[4:8])\n",
"
"gm3.summary() \n",
"gm3.output_names=['output']\n",
"\n",
"gm4 = tf.keras.models.Sequential(gm.layers[8:])\n",
"
"gm4.summary()\n",
"gm4.output_names=['output'] \n",
"\n",
"
"x = 0.1*np.random.rand(1,*[1, ZDIM])\n",
"inter_x1 = gm2(x[0])\n",
"inter_x2 = gm3(inter_x1)\n",
"\n",
"output_path = \"network_split_0.onnx\"\n",
"spec = tf.TensorSpec([1, ZDIM], tf.float32, name='input_0')\n",
"tf2onnx.convert.from_keras(gm2, input_signature=[spec], inputs_as_nchw=['input_0'], opset=12, output_path=output_path)\n",
"output_path = \"network_split_1.onnx\"\n",
"spec = tf.TensorSpec(inter_x1.shape, tf.float32, name='elu1')\n",
"tf2onnx.convert.from_keras(gm3, input_signature=[spec], inputs_as_nchw=['input_1'], opset=12, output_path=output_path)\n",
"output_path = \"network_split_2.onnx\"\n",
"spec = tf.TensorSpec(inter_x2.shape, tf.float32, name='elu2')\n",
"tf2onnx.convert.from_keras(gm4, input_signature=[spec], inputs_as_nchw=['input_2'], opset=12, output_path=output_path)\n",
"\n",
"data_array = x.reshape([-1]).tolist()\n",
"\n",
"data = dict(input_data = [data_array])\n",
"inter_x1 = inter_x1.numpy().reshape([-1]).tolist()\n",
"inter_x2 = inter_x2.numpy().reshape([-1]).tolist()\n",
"data_2 = dict(input_data = [inter_x1])\n", |
"data_3 = dict(input_data = [inter_x2])\n",
"\n",
"
"data_path = os.path.join('gan_input_0.json')\n",
"json.dump( data, open(data_path, 'w' ))\n",
"data_path = os.path.join('gan_input_1.json')\n",
"json.dump( data_2, open(data_path, 'w' ))\n",
"data_path = os.path.join('gan_input_2.json')\n",
"json.dump( data_3, open(data_path, 'w' ))\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"
"\n",
"the visibility parameters are:\n",
"- `input_visibility`: \"polycommit\"\n",
"- `param_visibility`: \"public\"\n",
"- `output_visibility`: polycommit"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
" |
import ezkl\n",
"\n",
"srs_path = os.path.join('kzg.srs')\n",
"\n",
"run_args = ezkl.PyRunArgs()\n",
"run_args.input_visibility = \"polycommit\"\n",
"run_args.param_visibility = \"fixed\"\n",
"run_args.output_visibility = \"polycommit\"\n",
"run_args.variables = [(\"batch_size\", 1)]\n",
"run_args.input_scale = 0\n",
"run_args.param_scale = 0\n",
"run_args.logrows = 18\n",
"\n",
"ezkl.get_srs(logrows=run_args.logrows, commitment=ezkl.PyCommitments.KZG)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"\n",
" <------------- Numerical Fidelity Report (input_scale: 0, param_scale: 0, scale_input_multiplier: 10) ------------->\n",
"\n",
"+----------------+--------------+-------------+--------------+----------------+------------------+---------------+-----------------+--------------------+--------------------+------------------------+\n",
"| mean_error | median_error | max_error | min_error | mean_abs_error | median_abs_error | max_abs_error | min_abs_error | mean_squared_error | mean_percent_error | mean_abs_percent_error |\n",
"+----------------+--------------+-------------+--------------+----------------+------------------+---------------+-----------------+--------------------+--------------------+------------------------+\n",
"| - |
0.00045216593 | 0.0071961936 | 0.059581105 | -0.051913798 | 0.011681631 | 0.0071961936 | 0.059581105 | 0.0000062934123 | 0.0002161761 | 1 | 1 |\n",
"+----------------+--------------+-------------+--------------+----------------+------------------+---------------+-----------------+--------------------+--------------------+------------------------+\n",
"\n",
"\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Setting up split model 0\n",
"Setting up split model 1\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"\n",
" <------------- Numerical Fidelity Report (input_scale: 0, param_scale: 0, scale_input_multiplier: 10) ------------->\n",
"\n",
"+----------------+--------------+-------------+--------------+----------------+------------------+---------------+---------------+--------------------+--------------------+------------------------+\n",
"| mean_error | median_error | max_error | min_error | mean_abs_error | median_abs_error | max_abs_error | min_abs_error | mean_squared_error | mean_percent_error | mean_abs_percent_error |\n",
"+----------------+--------------+-------------+--------------+----------------+------------------+---------------+---------------+--------------------+--------------------+------------------------+\n",
"| -0.00008474619 | -0.002256453 | 0.003519658 | -0.003081262 | 0.0018818051 | 0.002256453 | 0.003519658 | 0.00017167516 | 0.000003900568 | 1 |
| 1 |\n",
"+----------------+--------------+-------------+--------------+----------------+------------------+---------------+---------------+--------------------+--------------------+------------------------+\n",
"\n",
"\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Setting up split model 2\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"\n",
" <------------- Numerical Fidelity Report (input_scale: 0, param_scale: 0, scale_input_multiplier: 10) ------------->\n",
"\n",
"+-------------+--------------+-------------+-------------+----------------+------------------+---------------+---------------+--------------------+--------------------+------------------------+\n",
"| mean_error | median_error | max_error | min_error | mean_abs_error | median_abs_error | max_abs_error | min_abs_error | mean_squared_error | mean_percent_error | mean_abs_percent_error |\n",
"+-------------+--------------+-------------+-------------+----------------+------------------+---------------+---------------+--------------------+--------------------+------------------------+\n",
"| -0.49951223 | -0.49951398 | -0.49951398 | -0.49951398 | 0.49951223 | 0.49951398 | 0.49951398 | 0.49951398 | 0.24951272 | -0.9980509 | 0.9980509 |\n",
"+-------------+--------------+-------------+-------------+----------------+------------------+---------------+---------------+--------------------+--------------------+------ |
------------------+\n",
"\n",
"\n"
]
}
],
"source": [
"
"\n",
"def setup(i):\n",
" print(\"Setting up split model \"+str(i))\n",
"
" model_path = os.path.join('network_split_'+str(i)+'.onnx')\n",
" settings_path = os.path.join('settings_split_'+str(i)+'.json')\n",
" data_path = os.path.join('gan_input_'+str(i)+'.json')\n",
" compiled_model_path = os.path.join('network_split_'+str(i)+'.compiled')\n",
" pk_path = os.path.join('test_split_'+str(i)+'.pk')\n",
" vk_path = os.path.join('test_split_'+str(i)+'.vk')\n",
" witness_path = os.path.join('witness_split_'+str(i)+'.json')\n",
"\n",
" if i > 0:\n",
" prev_witness_path = os.path.join('witness_split_'+str(i-1)+'.json')\n",
" witness = json.load(open(prev_witness_path, 'r'))\n",
" data = dict(input_data = witness['outputs'])\n",
"
" json.dump(data, open(data_path, 'w' ))\n",
" else:\n",
" data_path = os.path.join('gan_input_0.json')\n",
"\n",
"
" res = ezkl.gen_settings(model_path, settings_path, py_run_args=run_args)\n",
" res = ezkl.calibrate_settings(data_path, model_path, settings_path, \"resources\", scales=[run_args.input_scale], max_logrows=run_args.logrows)\n",
" assert res == True\n",
"\n",
"
" settings = json.load(open(settings_path, 'r'))\n",
" settings['run_args']['logrows'] = run_args.logrows\n",
" json.dump(settings, open(settings_path, 'w' ))\n", |
"\n",
" res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"\n",
"\n",
" res = ezkl.setup(\n",
" compiled_model_path,\n",
" vk_path,\n",
" pk_path,\n",
" )\n",
"\n",
" assert res == True\n",
" assert os.path.isfile(vk_path)\n",
" assert os.path.isfile(pk_path)\n",
" res = ezkl.gen_witness(data_path, compiled_model_path, witness_path, vk_path)\n",
" run_args.input_scale = settings[\"model_output_scales\"][0]\n",
"\n",
"for i in range(3):\n",
" setup(i)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"print(\"Proving split models\")\n",
"\n",
"\n",
"def prove_model(i): \n",
" proof_path = os.path.join('proof_split_'+str(i)+'.json')\n",
" witness_path = os.path.join('witness_split_'+str(i)+'.json')\n",
" compiled_model_path = os.path.join('network_split_'+str(i)+'.compiled')\n",
" pk_path = os.path.join('test_split_'+str(i)+'.pk')\n",
" vk_path = os.path.join('test_split_'+str(i)+'.vk')\n",
" settings_path = os.path.join('settings_split_'+str(i)+'.json')\n",
"\n",
" res = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n", |
" proof_path,\n",
" \"for-aggr\",\n",
" )\n",
"\n",
" print(res)\n",
" assert os.path.isfile(proof_path)\n",
"\n",
"
" if i > 0:\n",
"
" prev_witness_path = os.path.join('witness_split_'+str(i-1)+'.json')\n",
" prev_witness = json.load(open(prev_witness_path, 'r'))\n",
"\n",
" witness = json.load(open(witness_path, 'r'))\n",
"\n",
" print(prev_witness[\"processed_outputs\"])\n",
" print(witness[\"processed_inputs\"])\n",
"\n",
" witness[\"processed_inputs\"] = prev_witness[\"processed_outputs\"]\n",
"\n",
"
" with open(witness_path, \"w\") as f:\n",
" json.dump(witness, f)\n",
"\n",
" res = ezkl.swap_proof_commitments(proof_path, witness_path)\n",
"\n",
" res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
" vk_path,\n",
" )\n",
"\n",
" assert res == True\n",
" print(\"verified\")\n",
"\n",
"\n",
"for i in range(3):\n",
" print(\"----- proving split \"+str(i))\n",
" prove_model(i)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also mock aggregate the split proofs into a single proof. This is useful if you want to verify the proof on chain at a lower cost. Here we mock aggregate the proofs to save time. You can use other |
notebooks to see how to aggregate in full ! "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"proofs = []\n",
"for i in range(3):\n",
" proof_path = os.path.join('proof_split_'+str(i)+'.json')\n",
" proofs.append(proof_path)\n",
"\n",
"ezkl.mock_aggregate(proofs, logrows=22, split_proofs = True)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "ezkl",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.2"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Credits to [geohot](https:
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"try:\n",
"
" |
import google.colab\n",
" |
import subprocess\n",
" |
import sys\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"ezkl\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"tf2onnx\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"onnx\"])\n",
"\n",
"
"except:\n",
" pass\n",
"\n",
"\n",
" |
import os\n",
" |
import time\n",
" |
import random\n",
"\n",
" |
import tensorflow as tf\n",
" |
import tensorflow.keras.backend as K\n",
"from tensorflow.keras.optimizers |
import Adam\n",
"from tensorflow.keras.layers |
import *\n",
"from tensorflow.keras.models |
import Model\n",
"from tensorflow.keras.losses |
import mse\n",
"from tensorflow.keras.datasets |
import mnist\n",
"(x_train, y_train), (x_test, y_test) = mnist.load_data()\n",
"x_train, x_test = [x/255.0 for x in [x_train, x_test]]\n",
"y_train, y_test = [tf.keras.utils.to_categorical(x) for x in [y_train, y_test]]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ZDIM = 4\n",
"\n",
"def get_encoder():\n",
" x = in1 = Input((28,28))\n",
" x = Reshape((28,28,1))(x)\n",
"\n",
" x = Conv2D(64, (5,5), padding='same', strides=(2,2))(x)\n",
" x = BatchNormalization()(x)\n",
" x = ELU()(x)\n",
"\n",
" x = Conv2D(128, (5,5), padding='same', strides=(2,2))(x)\n",
" x = BatchNormalization()(x)\n",
" x = ELU()(x)\n",
"\n",
" x = Flatten()(x)\n",
" x = Dense(ZDIM)(x)\n",
" return Model(in1, x)\n",
"\n",
"def get_decoder():\n",
" x = in1 = Input((ZDIM,))\n",
"\n",
" x = Dense(7*7*64)(x)\n",
" x = BatchNormalization()(x)\n",
" x = ELU()(x)\n",
" x = Reshape((7,7,64))(x)\n",
"\n",
" x = Conv2DTranspose(128, (5,5), strides=(2,2), padding='same')(x)\n",
" x = BatchNormalization()(x)\n",
" x = ELU()(x)\n",
"\n",
" x = Conv2DTranspose(1, (5,5), strides=(2,2), padding='same')(x)\n",
" x = Activation('sigmoid')(x)\n",
" x = Reshape((28,28))(x)\n",
" return Model(in1, x)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [ |
"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"enc = get_encoder()\n",
"dec = get_decoder()\n",
"ae = Model(enc.input, dec(enc.output))\n",
"ae.compile('adam', 'mse')\n",
"ae.summary()\n",
"
"ae.fit(x_train, x_train, batch_size=128, epochs=1, shuffle=1, validation_data=(x_test, x_test))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
" |
import numpy as np\n",
"from matplotlib.pyplot |
import figure, imshow\n",
"imshow(np.concatenate(ae.predict(np.array([random.choice(x_test) for i in range(10)])), axis=1))\n",
"figure(figsize=(16,16))\n",
"imshow(np.concatenate(ae.layers[-1].predict(np.random.normal(size=(10, ZDIM))), axis=1))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
" |
import os \n",
"\n",
"model_path = os.path.join('ae.onnx')\n",
"compiled_model_path = os.path.join('ae.compiled')\n",
"pk_path = os.path.join('ae.pk')\n",
"vk_path = os.path.join('ae.vk')\n",
"settings_path = os.path.join('ae_settings.json')\n",
"srs_path = os.path.join('ae_kzg.srs')\n",
"witness_path = os.path.join('ae_witness.json')\n",
"data_path = os.path.join('ae_input.json')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we export the decoder (which presumably is what we want) -- to onnx"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
" |
import numpy as np\n",
" |
import tf2onnx\n",
" |
import tensorflow as tf\n",
" |
import json\n",
"\n",
"shape = [1, ZDIM]\n",
"
"x = 0.1*np.random.rand(1,*shape)\n",
"\n",
"spec = tf.TensorSpec(shape, tf.float32, name='input_0')\n",
"\n",
"\n",
"tf2onnx.convert.from_keras(dec, input_signature=[spec], inputs_as_nchw=['input_0'], opset=12, output_path=model_path)\n",
"\n",
"data_array = x.reshape([-1]).tolist()\n",
"\n",
"data = dict(input_data = [data_array])\n",
"\n",
"
"json.dump( data, open(data_path, 'w' ))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
" |
import ezkl\n",
"\n",
"!RUST_LOG=trace\n",
"res = ezkl.gen_settings(model_path, settings_path)\n",
"assert res == True\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cal_path = os.path.join(\"calibration.json\")\n",
"\n",
"data_array = (0.1 * np.random.rand(20, *shape)).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_data = [data_array])\n",
"\n",
"
"json.dump(data, open(cal_path, 'w'))\n",
"\n",
"\n",
"ezkl.calibrate_settings(cal_path, model_path, settings_path, \"resources\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"res = ezkl.get_srs( settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"witness_path = \"ae_witness.json\"\n",
"\n",
"res = ezkl.gen_witness(data_path, compiled_model_path, witness_path)\n",
"assert os.path.isfile(witness_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [ |
"res = ezkl.mock(witness_path, compiled_model_path)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"
"
"
"
"\n",
"res = ezkl.setup(\n",
" compiled_model_path,\n",
" vk_path,\n",
" pk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"assert os.path.isfile(vk_path)\n",
"assert os.path.isfile(pk_path)\n",
"assert os.path.isfile(settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"\n",
"proof_path = os.path.join('ae.pf')\n",
"\n",
"res = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \n",
" \"single\",\n",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
" vk_path,\n",
" \n",
" )\n",
"\n", |
"assert res == True\n",
"print(\"verified\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"
]
},
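{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before the Keras version, here is a minimal numpy sketch of the reparameterization trick that the `Lambda` layer in the VAE below implements. It is illustrative only and is not used by the model; it reuses the `ZDIM` defined earlier."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# illustrative sketch of the reparameterization trick (not used by the model below)\n",
"import numpy as np\n",
"\n",
"z_mu = np.zeros(ZDIM)                     # latent mean predicted by the encoder\n",
"z_log_var = np.zeros(ZDIM)                # latent log-variance predicted by the encoder\n",
"eps = np.random.normal(size=ZDIM)         # noise sampled outside the network\n",
"z = z_mu + np.exp(0.5 * z_log_var) * eps  # differentiable w.r.t. z_mu and z_log_var\n",
"print(z)"
]
},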
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"in1 = Input((28,28))\n",
"x = get_encoder()(in1)\n",
"\n",
"
"z_mu = Dense(ZDIM)(x)\n",
"z_log_var = Dense(ZDIM)(x)\n",
"z = Lambda(lambda x: x[0] + K.exp(0.5 * x[1]) * K.random_normal(shape=K.shape(x[0])))([z_mu, z_log_var])\n",
"dec = get_decoder()\n",
"dec.output_names=['output']\n",
"\n",
"out = dec(z)\n",
"\n",
"mse_loss = mse(Reshape((28*28,))(in1), Reshape((28*28,))(out)) * 28 * 28\n",
"kl_loss = 1 + z_log_var - K.square(z_mu) - K.exp(z_log_var)\n",
"kl_loss = -0.5 * K.mean(kl_loss, axis=-1)\n",
"\n",
"vae = Model(in1, out)\n",
"vae.add_loss(K.mean(mse_loss + kl_loss))\n",
"vae.compile('adam')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"test = Model(in1, [z, z_mu, z_log_var])\n",
"test.predict(x_train[0:1])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"vae.fit(x_train, batch_size=128, epochs=1, shuffle=1, validation_data=(x_test, None))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
" |
outputs": [],
"source": [
"imshow(np.concatenate(vae.predict(np.array([random.choice(x_test) for i in range(10)])), axis=1))\n",
"figure(figsize=(16,16))\n",
"imshow(np.concatenate(vae.layers[5].predict(np.random.normal(size=(10, ZDIM))), axis=1))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
" |
import os \n",
"\n",
"model_path = os.path.join('vae.onnx')\n",
"compiled_model_path = os.path.join('vae.compiled')\n",
"pk_path = os.path.join('vae.pk')\n",
"vk_path = os.path.join('vae.vk')\n",
"settings_path = os.path.join('vae_settings.json')\n",
"srs_path = os.path.join('vae_kzg.srs')\n",
"witness_path = os.path.join('vae_witness.json')\n",
"data_path = os.path.join('vae_input.json')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
" |
import numpy as np\n",
" |
import tf2onnx\n",
" |
import tensorflow as tf\n",
" |
import json\n",
"\n",
"
"x = 0.1*np.random.rand(1,*[1, ZDIM])\n",
"\n",
"spec = tf.TensorSpec([1, ZDIM], tf.float32, name='input_0')\n",
"\n",
"\n",
"tf2onnx.convert.from_keras(dec, input_signature=[spec], inputs_as_nchw=['input_0'], opset=12, output_path=model_path)\n",
"\n",
"data_array = x.reshape([-1]).tolist()\n",
"\n",
"data = dict(input_data = [data_array])\n",
"\n",
"
"json.dump( data, open(data_path, 'w' ))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
" |
import ezkl\n",
"\n",
"!RUST_LOG=trace\n",
"res = ezkl.gen_settings(model_path, settings_path)\n",
"assert res == True\n",
"\n",
"res = ezkl.calibrate_settings(data_path, model_path, settings_path, \"resources\")\n",
"assert res == True\n",
"print(\"verified\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"res = ezkl.get_srs( settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"witness_path = \"vae_witness.json\"\n",
"\n",
"res = ezkl.gen_witness(data_path, compiled_model_path, witness_path)\n",
"assert os.path.isfile(witness_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"
"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"
"
"
"
"\n",
"res = ezkl.setup(\n",
" compiled_model_path,\n",
" vk_path, |
\n",
" pk_path,\n",
" \n",
" )\n",
"\n",
"\n",
"assert res == True\n",
"assert os.path.isfile(vk_path)\n",
"assert os.path.isfile(pk_path)\n",
"assert os.path.isfile(settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"\n",
"\n",
"proof_path = os.path.join('test.pf')\n",
"\n",
"res = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \n",
" \"single\",\n",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
" vk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"print(\"verified\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python", |
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "TB8jFLoLwZ8K"
},
"source": [
"
"In this tutorial we utilize the N-BEATS (Neural basis expansion analysis for interpretable time series forecasting\n",
") for forecasting the price of ethereum.\n",
"\n",
"For more details regarding N-BEATS, visit this link [https:
"\n",
"The code for N-BEATS used is adapted from [nbeats-pytorch](https:
]
},
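{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before the full implementation, here is a minimal, self-contained sketch of the core N-BEATS idea: each block produces a backcast (the part of the input window it can explain) and a forecast, and blocks are stacked in a doubly-residual fashion so that each block operates on what the previous blocks could not explain. This sketch is illustrative only and is not used by the model defined later."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# illustrative sketch of N-BEATS doubly-residual stacking (not the model used below)\n",
"import torch\n",
"from torch import nn\n",
"\n",
"class TinyBlock(nn.Module):\n",
"    def __init__(self, backcast_length, forecast_length, units=8):\n",
"        super().__init__()\n",
"        self.fc = nn.Linear(backcast_length, units)\n",
"        self.backcast_fc = nn.Linear(units, backcast_length)\n",
"        self.forecast_fc = nn.Linear(units, forecast_length)\n",
"\n",
"    def forward(self, x):\n",
"        h = torch.relu(self.fc(x))\n",
"        return self.backcast_fc(h), self.forecast_fc(h)\n",
"\n",
"backcast_length, forecast_length = 14, 7\n",
"blocks = nn.ModuleList(TinyBlock(backcast_length, forecast_length) for _ in range(3))\n",
"\n",
"residual = torch.randn(1, backcast_length)   # stand-in for a window of prices\n",
"forecast = torch.zeros(1, forecast_length)\n",
"for block in blocks:\n",
"    b, f = block(residual)\n",
"    residual = residual - b   # remove what this block has already explained\n",
"    forecast = forecast + f   # accumulate each block's contribution to the forecast\n",
"print(forecast)"
]
},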
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Ccy0MgZLwY1Z"
},
"outputs": [],
"source": [
" |
import pandas as pd\n",
" |
import torch\n",
"from torch |
import nn, optim\n",
"from torch.nn |
import functional as F\n",
"from torch.nn.functional |
import mse_loss, l1_loss, binary_cross_entropy, cross_entropy\n",
"from torch.optim |
import Optimizer\n",
" |
import matplotlib.pyplot as plt\n",
" |
import requests\n",
" |
import json\n",
"from torch.utils.data |
import DataLoader, TensorDataset\n",
" |
import numpy as np\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ovxRhGv0xS0i"
},
"outputs": [],
"source": [
"
"coins = [\"ETH\"]\n",
"days_ago_to_fetch = 2000
"coin_history = {}\n",
"hist_length = 0\n",
"average_returns = {}\n",
"cumulative_returns = {}\n",
"\n",
"def index_history_coin(hist):\n",
" hist = hist.set_index('time')\n",
" hist.index = pd.to_datetime(hist.index, unit='s')\n",
" return hist\n",
"\n",
"def filter_history_by_date(hist):\n",
" result = hist[hist.index.year >= 2017]\n",
" return result\n",
"\n",
"def fetch_history_coin(coin):\n",
" endpoint_url = \"https:
" res = requests.get(endpoint_url)\n",
" hist = pd.DataFrame(json.loads(res.content)['Data'])\n",
" hist = index_history_coin(hist)\n",
" hist = filter_history_by_date(hist)\n",
" return hist\n",
"\n",
"def get_history_from_file(filename):\n",
" return pd.read_csv(filename)\n",
"\n",
"\n",
"for coin in coins:\n",
" coin_history[coin] = fetch_history_coin(coin)\n",
"\n",
"hist_length = len(coin_history[coins[0]])\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "CNeFMmvpx5ig"
},
"outputs": [],
"source": [
"
"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "_wPJcU8EyOsF"
},
"outputs": [],
"source": [
"
"coin_history['ETH'] = get_history_from_file(\"eth_price.csv\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "ij_DBZl7yQqE", |
"outputId": "3b7838de-fa00-4560-cbcb-62c11e311a0f"
},
"outputs": [],
"source": [
"
"\n",
"def add_all_returns():\n",
" for coin in coins:\n",
" hist = coin_history[coin]\n",
" hist['return'] = (hist['close'] - hist['open']) / hist['open']\n",
" average = hist[\"return\"].mean()\n",
" average_returns[coin] = average\n",
" cumulative_returns[coin] = (hist[\"return\"] + 1).prod() - 1\n",
" hist['excess_return'] = hist['return'] - average\n",
" coin_history[coin] = hist\n",
"\n",
"add_all_returns()\n",
"\n",
"
"cumulative_returns"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "nW3xWLCNyeGN",
"outputId": "a5d7f42b-447b-4ec4-8b42-4bd845ba3b3b"
},
"outputs": [],
"source": [
"
"average_returns"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "J9AOW3czykk-",
"outputId": "23d9564c-ae0e-4bb4-cf3e-bdb46e9e1639"
},
"outputs": [],
"source": [
"
"excess_matrix = np.zeros((hist_length, len(coins)))\n",
"\n",
"for i in range(0, hist_length):\n",
" for idx, coin in enumerate(coins):\n",
" excess_matrix[i][idx] = coin_history[coin].iloc[i]['excess_return']\n",
"\n",
"excess_matrix"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
"height": 424
},
"id": "A3GuFe_-yq_Q",
"outputId": "3313aa46-88ef-4dbb-e07d-6cc6c9584eb9"
},
"outputs": [],
"source": [
"
"pretty_matrix = pd.DataFrame(exces |
s_matrix).copy()\n",
"pretty_matrix.columns = coins\n",
"pretty_matrix.index = coin_history[coins[0]].index\n",
"\n",
"pretty_matrix"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "rRYQ63frys8K",
"outputId": "d4df3245-f4e6-4511-dd8a-7c7e52cb4982"
},
"outputs": [],
"source": [
"
"\n",
"
"product_matrix = np.matmul(excess_matrix.transpose(), excess_matrix)\n",
"var_covar_matrix = product_matrix / hist_length\n",
"\n",
"var_covar_matrix"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
"height": 81
},
"id": "F_0y258Iz2-1",
"outputId": "f161c55d-5600-41da-f065-821c87340f33"
},
"outputs": [],
"source": [
"
"pretty_var_covar = pd.DataFrame(var_covar_matrix).copy()\n",
"pretty_var_covar.columns = coins\n",
"pretty_var_covar.index = coins\n",
"\n",
"pretty_var_covar"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "_HS7sq_Xz4Js"
},
"outputs": [],
"source": [
"
"\n",
"std_dev = np.zeros((len(coins), 1))\n",
"neg_std_dev = np.zeros((len(coins), 1))\n",
"\n",
"for idx, coin in enumerate(coins):\n",
" std_dev[idx][0] = np.std(coin_history[coin]['return'])\n",
" coin_history[coin]['downside_return'] = 0\n",
"\n",
" coin_history[coin].loc[coin_history[coin]['return'] < 0,\n",
" 'downside_return'] = coin_history[coin]['return']**2\n",
" neg_std_dev[idx][0] = np.sqrt(coin_history[coin]['downside_return'].mean())"
]
},
{
"cell_type": "code",
"execution_count": |
null,
"metadata": {
"colab": {
"base_uri": "https:
"height": 81
},
"id": "gLpf-k1az77u",
"outputId": "d6dc063f-05b7-4a9f-de59-cc60b0cfee5e"
},
"outputs": [],
"source": [
"
"pretty_std = pd.DataFrame(std_dev).copy()\n",
"pretty_neg_std = pd.DataFrame(neg_std_dev).copy()\n",
"pretty_comb = pd.concat([pretty_std, pretty_neg_std], axis=1)\n",
"\n",
"pretty_comb.columns = ['std dev', 'neg std dev']\n",
"pretty_comb.index = coins\n",
"\n",
"pretty_comb"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "MAd19PQcz9A5"
},
"outputs": [],
"source": [
"
"std_product_matrix = np.matmul(std_dev, std_dev.transpose())\n",
"\n",
"
"neg_std_product_matrix = np.matmul(neg_std_dev, neg_std_dev.transpose())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
"height": 81
},
"id": "O8aD5ZiYz-8E",
"outputId": "945391f5-8e3b-4369-cc0c-1cf927f72ef2"
},
"outputs": [],
"source": [
"pretty_std_prod = pd.DataFrame(std_product_matrix).copy()\n",
"pretty_std_prod.columns = coins\n",
"pretty_std_prod.index = coins\n",
"\n",
"pretty_std_prod"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
"height": 81
},
"id": "KecwUguO0Ago",
"outputId": "f77f4bd2-3314-4d7e-8525-078825a83a8c"
},
"outputs": [],
"source": [
"
"corr_matrix = var_covar_matrix / std_product_matrix\n",
"pretty_corr = pd.DataFrame(corr_matrix).copy()\n",
"pretty_corr.columns = coins\n",
"pretty_corr.index = coins\n",
"\n",
"prett |
y_corr"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
"height": 424
},
"id": "m62kaZFu0C0p",
"outputId": "b3c10014-afe1-4cdb-e5a3-3a1361b54501"
},
"outputs": [],
"source": [
"
"coin_history['ETH']"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "JdtkDBc90rM6",
"outputId": "19d1c7af-809e-4b1b-a1a0-b50bba67d425"
},
"outputs": [],
"source": [
"def simulate_portfolio_growth(initial_amount, daily_returns):\n",
" portfolio_value = [initial_amount]\n",
" for ret in daily_returns:\n",
" portfolio_value.append(portfolio_value[-1] * (1 + ret))\n",
" return portfolio_value\n",
"\n",
"initial_investment = 100000\n",
"\n",
"eth_portfolio = simulate_portfolio_growth(initial_investment, coin_history[\"ETH\"]['return'])\n",
"\n",
"print(\"ETH Portfolio Growth:\", eth_portfolio)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
"height": 564
},
"id": "ADBacxHA27v0",
"outputId": "3f5f8f35-4efc-473d-a5af-12515fa897b6"
},
"outputs": [],
"source": [
"
"plt.figure(figsize=(10,6))\n",
"plt.plot(eth_portfolio, label='ETH Portfolio', color='blue')\n",
"plt.title('Portfolio Growth Over Time')\n",
"plt.xlabel('Days')\n",
"plt.ylabel('Portfolio Value')\n",
"plt.legend()\n",
"plt.grid(True)\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "fWAx5OZ-302J",
"out |
putId": "a94346ac-4c16-428d-eac1-d8fbbd9208b4"
},
"outputs": [],
"source": [
"
"eth_df = coin_history['ETH'][['close']].copy()\n",
"\n",
"
"close_tensor = torch.tensor(eth_df.values)\n",
"\n",
"
"eth_df = coin_history['ETH'][['return']].copy()\n",
"\n",
"
"return_tensor = torch.tensor(eth_df.values)\n",
"\n",
"
"print(close_tensor)\n",
"print(return_tensor)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4M6hIqZ15aqs"
},
"outputs": [],
"source": [
"
"
"\n",
"def squeeze_last_dim(tensor):\n",
" if len(tensor.shape) == 3 and tensor.shape[-1] == 1:
" return tensor[..., 0]\n",
" return tensor\n",
"\n",
"\n",
"def seasonality_model(thetas, t, device):\n",
" p = thetas.size()[-1]\n",
" assert p <= thetas.shape[1], 'thetas_dim is too big.'\n",
" p1, p2 = (p
" s1 = torch.tensor(np.array([np.cos(2 * np.pi * i * t) for i in range(p1)])).float()
" s2 = torch.tensor(np.array([np.sin(2 * np.pi * i * t) for i in range(p2)])).float()\n",
" S = torch.cat([s1, s2])\n",
" return thetas.mm(S.to(device))\n",
"\n",
"\n",
"def trend_model(thetas, t, device):\n",
" p = thetas.size()[-1]\n",
" assert p <= 4, 'thetas_dim is too big.'\n",
" T = torch.tensor(np.array([t ** i for i in range(p)])).float()\n",
" return thetas.mm(T.to(device))\n",
"\n",
"\n",
"def linear_space(backcast_length, forecast_length, is_forecast=True):\n",
" horizon = forecast_length if is_forecast else backcast_length\n",
" return np.arange(0, horizon) / horizon\n",
"\n",
" |
class Block(nn.Module):\n",
"\n",
" def __init__(self, units, thetas_dim, device, backcast_length=10, forecast_length=5, share_thetas=False,\n",
" nb_harmonics=None):\n",
" super(Block, self).__init__()\n",
" self.units = units\n",
" self.thetas_dim = thetas_dim\n",
" self.backcast_length = backcast_length\n",
" self.forecast_length = forecast_length\n",
" self.share_thetas = share_thetas\n",
" self.fc1 = nn.Linear(backcast_length, units)\n",
" self.fc2 = nn.Linear(units, units)\n",
" self.fc3 = nn.Linear(units, units)\n",
" self.fc4 = nn.Linear(units, units)\n",
" self.device = device\n",
" self.backcast_linspace = linear_space(backcast_length, forecast_length, is_forecast=False)\n",
" self.forecast_linspace = linear_space(backcast_length, forecast_length, is_forecast=True)\n",
" if share_thetas:\n",
" self.theta_f_fc = self.theta_b_fc = nn.Linear(units, thetas_dim, bias=False)\n",
" else:\n",
" self.theta_b_fc = nn.Linear(units, thetas_dim, bias=False)\n",
" self.theta_f_fc = nn.Linear(units, thetas_dim, bias=False)\n",
"\n",
" def forward(self, x):\n",
" x = squeeze_last_dim(x)\n",
" x = F.relu(self.fc1(x.to(self.device)))\n",
" x = F.relu(self.fc2(x))\n",
" x = F.relu(self.fc3(x))\n",
" x = F.relu(self.fc4(x))\n",
" return x\n",
"\n",
" def __str__(self):\n",
" block_type = type(self).__name__\n",
" return f'{block_type}(units={self.units}, thetas_dim={self.thetas_dim}, ' \\\n",
" f'backcast_length={self.backcast_length}, forecast_length={self.forecast_length}, ' \\\n",
" f'share_thetas={self |
.share_thetas}) at @{id(self)}'\n",
"\n",
"\n",
" |
class SeasonalityBlock(Block):\n",
"\n",
" def __init__(self, units, thetas_dim, device, backcast_length=10, forecast_length=5, nb_harmonics=None):\n",
" if nb_harmonics:\n",
" super(SeasonalityBlock, self).__init__(units, nb_harmonics, device, backcast_length,\n",
" forecast_length, share_thetas=True)\n",
" else:\n",
" super(SeasonalityBlock, self).__init__(units, forecast_length, device, backcast_length,\n",
" forecast_length, share_thetas=True)\n",
"\n",
" def forward(self, x):\n",
" x = super(SeasonalityBlock, self).forward(x)\n",
" backcast = seasonality_model(self.theta_b_fc(x), self.backcast_linspace, self.device)\n",
" forecast = seasonality_model(self.theta_f_fc(x), self.forecast_linspace, self.device)\n",
" return backcast, forecast\n",
"\n",
"\n",
" |
class TrendBlock(Block):\n",
"\n",
" def __init__(self, units, thetas_dim, device, backcast_length=10, forecast_length=5, nb_harmonics=None):\n",
" super(TrendBlock, self).__init__(units, thetas_dim, device, backcast_length,\n",
" forecast_length, share_thetas=True)\n",
"\n",
" def forward(self, x):\n",
" x = super(TrendBlock, self).forward(x)\n",
" backcast = trend_model(self.theta_b_fc(x), self.backcast_linspace, self.device)\n",
" forecast = trend_model(self.theta_f_fc(x), self.forecast_linspace, self.device)\n",
" return backcast, forecast\n",
"\n",
"\n",
"\n",
" |
class GenericBlock(Block):\n",
"\n",
" def __init__(self, units, thetas_dim, device, backcast_length=10, forecast_length=5, nb_harmonics=None):\n",
" super(GenericBlock, self).__init__(units, thetas_dim, device, backcast_length, forecast_length)\n",
"\n",
" self.backcast_fc = nn.Linear(thetas_dim, backcast_length)\n",
" self.forecast_fc = nn.Linear(thetas_dim, forecast_length)\n",
"\n",
" def forward(self, x):\n",
"
" x = super(GenericBlock, self).forward(x)\n",
"\n",
" theta_b = self.theta_b_fc(x)\n",
" theta_f = self.theta_f_fc(x)\n",
"\n",
" backcast = self.backcast_fc(theta_b)
" forecast = self.forecast_fc(theta_f)
"\n",
" return backcast, forecast\n",
"\n",
"\n",
" |
class NBEATS(nn.Module):\n",
" SEASONALITY_BLOCK = 'seasonality'\n",
" TREND_BLOCK = 'trend'\n",
" GENERIC_BLOCK = 'generic'\n",
"\n",
" def __init__(\n",
" self,\n",
" device=torch.device(\"cpu\"),\n",
" stack_types=(GENERIC_BLOCK, GENERIC_BLOCK),\n",
" nb_blocks_per_stack=1,\n",
" forecast_length=7,\n",
" backcast_length=14,\n",
" theta_dims=(2,2),\n",
" share_weights_in_stack=False,\n",
" hidden_layer_units=32,\n",
" nb_harmonics=None,\n",
" ):\n",
" super(NBEATS, self).__init__()\n",
" self.forecast_length = forecast_length\n",
" self.backcast_length = backcast_length\n",
" self.hidden_layer_units = hidden_layer_units\n",
" self.nb_blocks_per_stack = nb_blocks_per_stack\n",
" self.share_weights_in_stack = share_weights_in_stack\n",
" self.nb_harmonics = nb_harmonics
" self.stack_types = stack_types\n",
" self.stacks = nn.ModuleList()\n",
" self.thetas_dim = theta_dims\n",
" self.device = device\n",
" print('| N-Beats')\n",
" for stack_id in range(len(self.stack_types)):\n",
" stack = self.create_stack(stack_id)\n",
" self.stacks.append(stack)\n",
" self.to(self.device)\n",
"
"
"\n",
"\n",
" def create_stack(self, stack_id):\n",
" stack_type = self.stack_types[stack_id]\n",
" print(f'| -- Stack {stack_type.title()} (
" blocks = nn.ModuleList()\n",
" for block_id in range(self.nb_blocks_per_stack):\n",
" block_init = NBEATS.select_block(stack_type)\n",
" if self.share_weights_in_stack and block_id != 0 |
:\n",
" block = blocks[-1]
" else:\n",
" block = block_init(\n",
" self.hidden_layer_units, self.thetas_dim[stack_id],\n",
" self.device, self.backcast_length, self.forecast_length,\n",
" self.nb_harmonics\n",
" )\n",
" print(f' | -- {block}')\n",
" blocks.append(block)\n",
" return blocks\n",
"\n",
" @staticmethod\n",
" def select_block(block_type):\n",
" if block_type == NBEATS.SEASONALITY_BLOCK:\n",
" return SeasonalityBlock\n",
" elif block_type == NBEATS.TREND_BLOCK:\n",
" return TrendBlock\n",
" else:\n",
" return GenericBlock\n",
"\n",
"\n",
" def get_generic_and_interpretable_outputs(self):\n",
" g_pred = sum([a['value'][0] for a in self._intermediary_outputs if 'generic' in a['layer'].lower()])\n",
" i_pred = sum([a['value'][0] for a in self._intermediary_outputs if 'generic' not in a['layer'].lower()])\n",
" outputs = {o['layer']: o['value'][0] for o in self._intermediary_outputs}\n",
" return g_pred, i_pred,\n",
"\n",
" def forward(self, backcast):\n",
" self._intermediary_outputs = []\n",
" backcast = squeeze_last_dim(backcast)\n",
" forecast = torch.zeros(size=(backcast.size()[0], self.forecast_length,))
" for stack_id in range(len(self.stacks)):\n",
" for block_id in range(len(self.stacks[stack_id])):\n",
" b, f = self.stacks[stack_id][block_id](backcast)\n",
" backcast = backcast.to(self.device) - b\n",
" forecast = forecast.to(self.device) + f\n",
" block_type = self.sta |
cks[stack_id][block_id].__class__.__name__\n",
" layer_name = f'stack_{stack_id}-{block_type}_{block_id}'\n",
"\n",
" return backcast, forecast\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "tTH313qbRLMG",
"outputId": "34275438-5548-4a8b-f4a9-d96056ebde1c"
},
"outputs": [],
"source": [
"from torch.utils.data |
import Dataset, DataLoader\n",
"\n",
" |
class TimeSeriesDataset(Dataset):\n",
" def __init__(self, close_data, return_data, backcast_length, forecast_length, shuffle=True):\n",
" self.close_data = close_data\n",
" self.return_data = return_data\n",
" self.backcast_length = backcast_length\n",
" self.forecast_length = forecast_length\n",
" self.indices = list(range(len(self.close_data) - self.backcast_length - self.forecast_length + 1))\n",
" if shuffle:\n",
" np.random.shuffle(self.indices)\n",
"\n",
" def __len__(self):\n",
" return len(self.close_data) - self.backcast_length - self.forecast_length + 1\n",
"\n",
" def __getitem__(self, idx):\n",
" start = idx\n",
" end = idx + self.backcast_length\n",
" x = self.close_data[start:end]
" y = self.close_data[end:end+self.forecast_length]
" return x, y\n",
"\n",
"
"BACKCAST_LENGTH = 14\n",
"FORECAST_LENGTH = 7\n",
"\n",
"train_length = round(len(close_tensor) * 0.7)\n",
"train_dataset = TimeSeriesDataset(close_tensor[0:train_length], return_tensor[0:train_length], BACKCAST_LENGTH, FORECAST_LENGTH)\n",
"test_dataset = TimeSeriesDataset(close_tensor[train_length:], return_tensor[train_length:], BACKCAST_LENGTH, FORECAST_LENGTH)\n",
"train_loader = DataLoader(train_dataset)\n",
"\n",
"model = NBEATS(forecast_length=FORECAST_LENGTH, backcast_length=BACKCAST_LENGTH, device=('cuda' if torch.cuda.is_available() else 'cpu'))\n",
"model = model.to('cuda' if torch.cuda.is_available() else 'cpu')\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "JJ8-nh2GLKN_",
"outputId": "0d761daa-0f14-4a50-be41-b17993a4a182"
},
"outputs": [], |
"source": [
"EPOCHS = 1\n",
"\n",
"num_parameters = sum(p.numel() for p in model.parameters() if p.requires_grad)\n",
"print(f\"Number of trainable parameters in model: {num_parameters}\")\n",
"\n",
"criterion = torch.nn.L1Loss()\n",
"optimizer = torch.optim.Adam(model.parameters(), lr=0.001)\n",
"\n",
"for epoch in range(EPOCHS):\n",
" total_loss = 0.0\n",
" for batch_idx, (x, y) in enumerate(train_loader):\n",
"
" optimizer.zero_grad()\n",
"\n",
" x = x.clone().detach().to(dtype=torch.float)\n",
" x = x.to('cuda' if torch.cuda.is_available() else 'cpu')\n",
" y = y.clone().detach().to(dtype=torch.float)\n",
" y = y.to('cuda' if torch.cuda.is_available() else 'cpu')\n",
"\n",
"\n",
"
" forecast = model(x)\n",
"\n",
" loss = criterion(forecast[0], y)\n",
"\n",
"
" loss.backward()\n",
" optimizer.step()\n",
"\n",
"
" total_loss += loss
"\n",
" avg_loss = total_loss / len(train_loader)\n",
" print(f\"Epoch {epoch+1}/{EPOCHS}, Average Loss: {avg_loss:.4f}\")"
]
},
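{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, we can ask the trained model for a forecast over the most recent window. This is a minimal inference sketch; it assumes the `model`, `close_tensor`, `BACKCAST_LENGTH` and `FORECAST_LENGTH` defined above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# minimal inference sketch: forecast the next FORECAST_LENGTH days from the last window\n",
"device = 'cuda' if torch.cuda.is_available() else 'cpu'\n",
"model.eval()\n",
"with torch.no_grad():\n",
"    last_window = close_tensor[-BACKCAST_LENGTH:].reshape(1, BACKCAST_LENGTH).to(dtype=torch.float).to(device)\n",
"    _, forecast = model(last_window)\n",
"print(forecast.squeeze().tolist())"
]
},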
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "jWLKwNFLYDOk"
},
"outputs": [],
"source": [
"
"try:\n",
"
" |
import google.colab\n",
" |
import subprocess\n",
" |
import sys\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"ezkl\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"onnx\"])\n",
"\n",
"
"except:\n",
" pass\n",
"\n",
" |
import ezkl\n",
" |
import os\n",
" |
import json"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "dhOHiCmt4pUn"
},
"outputs": [],
"source": [
"model_path = os.path.join('network.onnx')\n",
"compiled_model_path = os.path.join('network.compiled')\n",
"pk_path = os.path.join('test.pk')\n",
"vk_path = os.path.join('test.vk')\n",
"settings_path = os.path.join('settings.json')\n",
"\n",
"witness_path = os.path.join('witness.json')\n",
"data_path = os.path.join('input.json')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https:
},
"id": "xsZ9xg7I48l4",
"outputId": "6dec08c6-f55e-4df1-b957-55d025286018"
},
"outputs": [],
"source": [
"
"x_export = None\n",
"for batch_idx, (x, y) in enumerate(train_loader):\n",
" x_export = x.clone().detach().to(dtype=torch.float)\n",
" break\n",
"\n",
"
"model.eval()\n",
"\n",
"
"torch.onnx.export(model,
" x_export,
" model_path,
" export_params=True,
" opset_version=10,
" do_constant_folding=True,
" input_names = ['input'],
" output_names = ['output'],
" dynamic_axes={'input' : {0 : 'batch_size'},
" 'output' : {0 : 'batch_size'}})\n",
"\n",
"data_array = ((x).detach().numpy()).reshape([-1]).tolist()\n",
"\n",
"data = dict(input_data = [data_array])\n",
"\n",
"
"json.dump( data, open(data_path, 'w' ))"
]
},
{
"cell_type": "code" |
,
"execution_count": null,
"metadata": {
"id": "5qdEFK_75GUb"
},
"outputs": [],
"source": [
"run_args = ezkl.PyRunArgs()\n",
"run_args.input_visibility = \"private\"\n",
"run_args.param_visibility = \"fixed\"\n",
"run_args.output_visibility = \"public\"\n",
"run_args.variables = [(\"batch_size\", 1)]\n",
"\n",
"!RUST_LOG=trace\n",
"
"res = ezkl.gen_settings(model_path, settings_path)\n",
"assert res == True\n",
"\n",
"res = ezkl.calibrate_settings(data_path, model_path, settings_path, \"resources\", max_logrows = 20, scales = [3])\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "pxDJPz-Q5LPF"
},
"outputs": [],
"source": [
"res = ezkl.compile_circuit(model_path, compiled_model_path, settings_path)\n",
"assert res == True"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ptcb4SGA5Qeb"
},
"outputs": [],
"source": [
"
"res = ezkl.get_srs( settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "OE7t0okU5WBQ"
},
"outputs": [],
"source": [
"res = ezkl.gen_witness(data_path, compiled_model_path, witness_path)\n",
"assert os.path.isfile(witness_path)"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {
"id": "12YIcFr85X9-"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"spawning module 2\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"quotient_poly_degree 4\n",
"n 262144\n",
"extended_k 20\n"
]
}
],
"source": [
"res = ezkl.setup(\n",
" compiled_model_path,\n",
" vk_path,\n",
" pk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"assert os.path.isfile(vk_path)\n",
"assert os.path.isfile(pk_path)\n",
"assert os.path.isfile(settings_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "CSbWeZB35awS"
},
"outputs": [],
"source": [
"proof_path = os.path.join('test.pf')\n",
"\n",
"res = ezkl.prove(\n",
" witness_path,\n",
" compiled_model_path,\n",
" pk_path,\n",
" proof_path,\n",
" \n",
" \"single\",\n",
" )\n",
"\n",
"print(res)\n",
"assert os.path.isfile(proof_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "aGt8f4LS5dTP"
},
"outputs": [],
"source": [
"
"\n",
"res = ezkl.verify(\n",
" proof_path,\n",
" settings_path,\n",
" vk_path,\n",
" \n",
" )\n",
"\n",
"assert res == True\n",
"print(\"verified\")"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"gpuType": "T4",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"
"\n",
"Here we showcase how to split a larger circuit into multiple smaller proofs. This is useful if you want to prove over multiple machines, or if you want to split a proof into multiple parts to reduce the memory requirements.\n",
"\n",
"We showcase how to do this in the case where:\n",
"- intermediate calculations can be public (i.e. they do not need to be kept secret) and we can stitch the circuits together using instances\n",
"- intermediate calculations need to be kept secret (but not blinded !) and we need to use the low overhead kzg commitment scheme detailed [here](https:
]
},
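{
"cell_type": "markdown",
"metadata": {},
"source": [
"The sketch below illustrates the two configurations described above, using the `PyRunArgs` fields of the ezkl Python API that the rest of this notebook relies on. It is illustrative only; the actual settings used are defined in the cells that follow."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# illustrative sketch only: the two ways of stitching split proofs together\n",
"import ezkl\n",
"\n",
"# 1. intermediate calculations can be public: expose them as instances\n",
"public_args = ezkl.PyRunArgs()\n",
"public_args.input_visibility = \"public\"\n",
"public_args.output_visibility = \"public\"\n",
"public_args.param_visibility = \"fixed\"\n",
"\n",
"# 2. intermediate calculations stay secret (but unblinded): commit to them with the\n",
"#    low-overhead KZG commitment scheme and swap commitments between proofs\n",
"private_args = ezkl.PyRunArgs()\n",
"private_args.input_visibility = \"polycommit\"\n",
"private_args.output_visibility = \"polycommit\"\n",
"private_args.param_visibility = \"fixed\""
]
},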
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"First we |
import the necessary dependencies and set up logging to be as informative as possible. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"
"try:\n",
"
" |
import google.colab\n",
" |
import subprocess\n",
" |
import sys\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"ezkl\"])\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"onnx\"])\n",
"\n",
"
"except:\n",
" pass\n",
"\n",
"from torch |